AI


Publisher: OpenAI’s GPT Store bots are illegally scraping our textbooks


For the past few months, Morten Blichfeldt Andersen has spent many hours scouring OpenAI’s GPT Store. Since it launched in January, the marketplace for bespoke bots has filled up with a deep bench of useful and sometimes quirky AI tools. Cartoon generators spin up New Yorker–style illustrations and vivid anime stills. Programming and writing assistants offer shortcuts for crafting code and prose. There’s also a color analysis bot, a spider identifier, and a dating coach called RizzGPT. Yet Blichfeldt Andersen is hunting for only one very specific type of bot: those built on his employer’s copyright-protected textbooks without permission.

Blichfeldt Andersen is publishing director at Praxis, a Danish textbook purveyor. The company has been embracing AI and created its own custom chatbots. But it is currently engaged in a game of whack-a-mole in the GPT Store, and Blichfeldt Andersen is the man holding the mallet.

“I’ve been personally searching for infringements and reporting them,” Blichfeldt Andersen says. “They just keep coming up.” He suspects the culprits are primarily young people uploading material from textbooks to create custom bots to share with classmates—and that he has uncovered only a tiny fraction of the infringing bots in the GPT Store. “Tip of the iceberg,” Blichfeldt Andersen says.

It is easy to find bots in the GPT Store whose descriptions suggest they might be tapping copyrighted content in some way, as TechCrunch noted in a recent article claiming OpenAI’s store was overrun with “spam.” Using copyrighted material without permission is permissible in some contexts; in others, rightsholders can take legal action. WIRED found a GPT called Westeros Writer that claims to “write like George R.R. Martin,” the creator of Game of Thrones. Another, Voice of Atwood, claims to imitate the writer Margaret Atwood. Yet another, Write Like Stephen, is intended to emulate Stephen King.

When WIRED tried to trick the King bot into revealing the “system prompt” that tunes its responses, the output suggested it had access to King’s memoir On Writing. Write Like Stephen was able to reproduce passages from the book verbatim on demand, even noting which page the material came from. (WIRED could not contact the bot’s developer, who did not provide an email address, phone number, or external social profile.)

OpenAI spokesperson Kayla Wood says the company responds to takedown requests against GPTs made with copyrighted content but declined to answer WIRED’s questions about how frequently it fulfills such requests. She also says the company proactively looks for problem GPTs. “We use a combination of automated systems, human review, and user reports to find and assess GPTs that potentially violate our policies, including the use of content from third parties without necessary permission,” Wood says.

New disputes

The GPT Store’s copyright problem could add to OpenAI’s existing legal headaches. The company is facing a number of high-profile lawsuits alleging copyright infringement, including one brought by The New York Times and several brought by different groups of fiction and nonfiction authors, including big names like George R.R. Martin.

Chatbots offered in OpenAI’s GPT Store are based on the same technology as its own ChatGPT but are created by outside developers for specific functions. To tailor their bot, a developer can upload extra information that it can tap to augment the knowledge baked into OpenAI’s technology. The process of consulting this additional information to respond to a person’s queries is called retrieval-augmented generation, or RAG. Blichfeldt Andersen is convinced that the RAG files behind the bots in the GPT Store are a hotbed of copyrighted materials uploaded without permission.
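That pipeline is simple enough to sketch end to end. Below is a minimal, illustrative RAG loop in Python: the OpenAI SDK calls exist as written, but the model choices, the toy in-memory chunk store, and the sample text are assumptions for illustration, not how GPT Store bots are actually implemented.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# The "uploaded file": chunks of extra text a developer attaches to the bot.
chunks = [
    "Chapter 1: Supply and demand jointly determine market prices...",
    "Chapter 2: Elasticity measures how quantity responds to price...",
]
chunk_vecs = embed(chunks)

def answer(question):
    q_vec = embed([question])[0]
    # Retrieval step: find the chunk most similar to the question (cosine).
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec))
    context = chunks[int(sims.argmax())]
    # Generation step: the retrieved text augments the model's prompt.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": f"Answer using this source:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("What is elasticity?"))
```

If the chunks come from a scanned textbook, the bot can hand that text back nearly verbatim, which is exactly the behavior Blichfeldt Andersen keeps finding.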



After AI-generated porn report, Washington Lottery pulls down interactive web app

You could be a winner! —

User says promo site put her uploaded selfie on a topless woman’s body.

A user of the Washington Lottery’s “Test Drive a Win” website says it used AI to generate (the unredacted version of) this image with her face on a topless body.

The Washington State Lottery has taken down a promotional AI-powered web app after a local mother reported that the site generated an image with her face on the body of a topless woman.

The lottery’s “Test Drive a Win” website was designed to help visitors visualize various dream vacations they could pay for with their theoretical lottery winnings. The site included the ability to upload a headshot that would be integrated into an AI-generated tableau of what you might look like on that vacation.

But Megan (last name not given), a 50-year-old from Olympia suburb Tumwater, told conservative Seattle radio host Jason Rantz that the image of her “swim with the sharks” dream vacation on the website showed her face atop a woman sitting on a bed with her breasts exposed. The background of the AI-generated image seems to show the bed in some sort of aquarium, complete with fish floating through the air and sprawling undersea flora sitting awkwardly behind the pillows.

The corner of the image features the Washington Lottery logo.

“Our tax dollars are paying for that! I was completely shocked. It’s disturbing to say the least,” Megan told Rantz. “I also think whoever was responsible for it should be fired.”

“We don’t want something like this purported event to happen again”

The non-functional “Test Drive a Win” website as it appeared Thursday.

In a statement provided to Ars Technica, a Washington Lottery spokesperson said that the lottery “worked closely with the developers of the AI platform to establish strict parameters to govern image creation.” Despite this, the spokesperson said they were notified earlier this week that “a single user of the AI platform was purportedly provided an image that did not adhere to those guidelines.”

Despite the “thousands” of inoffensive images the spokesperson said the site generated over more than a month, the spokesperson said that “one purported user is too many and as a result we have shut down the site” as of Tuesday.

The spokesperson did not respond to specific questions about which AI models or third-party vendors may have been used to create the site or about the specific safeguards crafted in an attempt to prevent results like the one reported by Megan.

Speaking to Rantz, a lottery spokesperson said the organization had “agreed to a comprehensive set of rules” for the site’s AI images, “including that people in images be fully clothed.” Following the report of the topless image, the spokesperson said they “had the developers check all the parameters for the platform.” And while they were “comfortable with the settings,” the spokesperson told Rantz they “chose to take down the site out of an abundance of caution, as we don’t want something like this purported event to happen again.”

Not a quick fix?

On his radio show, Rantz expressed surprise that the lottery couldn’t keep the site operational after rejiggering the AI’s safety settings. “In my head I was thinking, well, presumably once they heard about this they went back to the backend guidelines and just made sure it said, ‘Hey, no breasts, no full-frontal nudity,’ those kinds of things, and then they fixed it, and then they went on with their day,” Rantz said.

But it might not be that simple to effectively rein in the endless variety of visual output an AI model can generate. While models like Stable Diffusion and DALL-E have filters in place to prevent the generation of sexual or violent images, researchers have found that those models still responded to problematic prompts by generating images that were judged as “unsafe” by an image classifier a significant minority of the time. Malicious users can also use prompt-engineering tricks to get around these built-in safeguards when using popular text-based image-generation models.
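Such filters typically run as a separate classifier over the finished image rather than inside the generator itself. Here is a minimal sketch of that gating pattern in Python; the `classify_nsfw` scorer and the threshold are hypothetical stand-ins for whatever proprietary model and cutoff a vendor actually uses.

```python
# Post-generation safety gate: score the output image and block it
# if the "unsafe" probability crosses a threshold.
from PIL import Image

UNSAFE_THRESHOLD = 0.5  # assumed cutoff; vendors tune this empirically

def classify_nsfw(image: Image.Image) -> float:
    """Hypothetical classifier returning P(unsafe) in [0, 1]."""
    raise NotImplementedError("stand-in for a real safety model")

def generate_safely(prompt: str, generate) -> Image.Image:
    image = generate(prompt)      # the text-to-image model call
    score = classify_nsfw(image)
    if score >= UNSAFE_THRESHOLD:
        # Block delivery; many products retry or return a refusal here.
        raise ValueError(f"blocked: unsafe score {score:.2f}")
    return image
```

The catch is that classifiers are probabilistic: set the threshold too low and harmless vacation scenes get blocked, set it too high and the occasional “swim with the sharks” prompt slips through anyway.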

We’ve seen these kinds of AI image-safety issues blow back on major corporations, too, as when Facebook’s AI sticker generator put weapons in the hands of children’s cartoon characters. More recently, a Microsoft engineer publicly accused the company’s Copilot image-generation tool of randomly creating violent and sexual imagery even after the team was warned of the issue.

The Washington Lottery’s AI issue comes a week after a report found a New York City government chatbot confabulating incorrect advice about city laws and regulations. “It’s wrong in some areas and we gotta fix it,” New York City Mayor Eric Adams said this week. “Any time you use technology, you need to put it in the real environment to iron out the kinks. You can’t live in a lab. You can’t stay in a lab forever.”



Fake AI law firms are sending fake DMCA threats to generate fake SEO gains

Dewey Fakum & Howe, LLP —

How one journalist found himself targeted by generative AI over a keyfob photo.

Updated

A person made of many parts, similar to the attorney who handles both severe criminal law and copyright takedowns for an Arizona law firm.

Getty Images

If you run a personal or hobby website, getting a copyright notice from a law firm about an image on your site can trigger some fast-acting panic. As someone who has paid to settle a news service-licensing issue before, I can empathize with anybody who wants to make this kind of thing go away.

Which is why a new kind of angle-on-an-angle scheme can seem both obvious to spot and likely effective. Ernie Smith, the prolific, ever-curious writer behind the newsletter Tedium, received a “DMCA Copyright Infringement Notice” in late March from “Commonwealth Legal,” representing the “Intellectual Property division” of Tech4Gods.

The issue was with a photo of a keyfob from the legitimate photo service Unsplash, used in a post about a strange Uber ride Smith once took. As Smith detailed in a Mastodon thread, the purported firm needed him to “add a credit to our client immediately” through a link to Tech4Gods and said the matter should be “addressed in the next five business days.” Removing the image “does not conclude the matter,” and should Smith fail to act, the putative firm would have to “activate” its case, relying on DMCA 512(c) (which, in many readings, actually grants relief to a website owner who, unaware of infringing material, “act[s] expeditiously to remove” it). The email unhelpfully points to the main page of the Internet Archive so that Smith might review “past usage records.”

A slice of the website for Commonwealth Legal Services, with every word of that phrase, including “for,” called into question.

Commonwealth Legal Services

There are quite a few issues with Commonwealth Legal’s request, as detailed by Smith and 404 Media. Chief among them is that Commonwealth Legal, a firm theoretically based in Arizona (which is not a commonwealth), almost certainly does not exist. Despite the 2018 copyright displayed on the site, the firm’s website domain was seemingly registered on March 1, 2024, with a Canadian IP location. The address on the firm’s site leads to a location that, to say the least, does not match the “fourth floor” indicated on the website.
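That registration-date mismatch is a cheap red flag to check for yourself. Here is a sketch using the python-whois package (my example; neither Smith nor 404 Media describes using this particular tool):

```python
# Compare a domain's WHOIS creation date against the copyright year
# its site displays; a big gap is a red flag for a fake operation.
import whois  # pip install python-whois

def domain_age_check(domain: str, claimed_year: int) -> None:
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return several dates
        created = min(created)
    if created is None:
        print(f"{domain}: no creation date on record")
        return
    print(f"{domain}: registered {created:%Y-%m-%d}, site claims © {claimed_year}")
    if created.year > claimed_year:
        print("Red flag: the domain is newer than its own copyright notice.")

domain_age_check("example.com", 2018)  # placeholder domain
```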

While the law firm’s website is stuffed full of stock images, so are many websites for professional services. The real tell is the site’s list of attorneys, most of whom, as 404 Media puts it, have the “vacant, thousand-yard stares” common to AI-generated faces. AI detection firm Reality Defender told 404 Media that its service spotted AI generation in every attorney’s image, “most likely by a Generative Adversarial Network (GAN) model.”

Then there are the attorneys’ bios, which offer surface-level competence underpinned by bizarre setups. Five of the 12 supposedly come from acclaimed law schools at Harvard, Yale, Stanford, and the University of Chicago. The other seven seem to have graduated from the top five results you might get for “Arizona Law School.” Sarah Walker has a practice based on “Copyright Violation and Judicial Criminal Proceedings,” a quite uncommon pairing. Sometimes she is “upholding the rights of artists,” but she can also “handle high-stakes criminal cases.” Walker, it seems, couldn’t pick just one track at Yale Law School.

Why would someone go to the trouble of making a law firm out of NameCheap, stock art, AI images, and (seemingly) AI-generated copy to send quasi-legal demands to site owners? Backlinks, that’s why. Backlinks are links from a site that Google (or others, but almost always Google) holds in high esteem to a site trying to rank up. Whether spammed, traded, generated, or demanded through a fake firm, backlinks power the search engine optimization (SEO) gray, to very dark gray, market. For all their touted algorithmic (and now AI) prowess, search engines have always had a hard time gauging backlink quality and context, so some site owners still buy backlinks.

The owner of Tech4Gods told 404 Media’s Jason Koebler that he did buy backlinks for his gadget review site (written with “AI writing assistants”). He said he did not own the disputed image or any others and vaguely suggested that a disgruntled former contractor might be trying to poison his ranking with spam links.

Asked if he had heard back from “Commonwealth Legal” now that five business days were up, Smith tells Ars: “No, alas.”

This post was updated at 4:50 p.m. Eastern to include Ernie Smith’s response.



Google might make users pay for AI features in search results

Pay-eye for the AI —

Plan would represent a first for what has been a completely ad-funded search engine.

You think this cute little search robot is going to work for free?

Google might start charging for access to search results that use generative artificial intelligence tools. That’s according to a new Financial Times report citing “three people with knowledge of [Google’s] plans.”

Charging for any part of the search engine at the core of its business would be a first for Google, which has funded its search product solely with ads since 2000. But it’s far from the first time Google would charge for AI enhancements in general; the “AI Premium” tier of a Google One subscription costs $10 more per month than a standard “Premium” plan, for instance, while “Gemini Business” adds $20 a month to a standard Google Workspace subscription.

While those paid products offer access to Google’s high-end “Gemini Advanced” AI model, Google also offers free access to its less performant, plain “Gemini” model without any kind of paid subscription.

When ads aren’t enough?

Under the proposed plan, Google’s standard search (without AI) would remain free, and subscribers to a paid AI search tier would still see ads alongside their Gemini-powered search results, according to the FT report. But search ads—which brought in a reported $175 billion for Google last year—might not be enough to fully cover the increased costs involved with AI-powered search. A Reuters report from last year suggested that running a search query through an advanced neural network like Gemini “likely costs 10 times more than a standard keyword search,” potentially representing “several billion dollars of extra costs” across Google’s network.
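Those figures invite a back-of-envelope estimate. The sketch below shows only the arithmetic; every input is an illustrative assumption, not a number from Google or the reports cited above.

```python
# Rough extra-cost estimate if AI-assisted queries cost 10x standard ones.
QUERIES_PER_YEAR = 3e12          # assumed: order of 8-9 billion searches/day
COST_PER_KEYWORD_QUERY = 0.0002  # assumed: dollars per standard search
AI_MULTIPLIER = 10               # from the Reuters estimate cited above
AI_SHARE = 0.10                  # assumed: fraction of queries using AI

extra = QUERIES_PER_YEAR * AI_SHARE * COST_PER_KEYWORD_QUERY * (AI_MULTIPLIER - 1)
print(f"extra cost: ${extra / 1e9:.1f} billion per year")
# ~$0.5 billion at these inputs; raise AI_SHARE or the per-query cost
# and "several billion dollars of extra costs" follows quickly.
```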

Cost aside, it remains to be seen if there’s a critical mass of market demand for this kind of AI-enhanced search. Microsoft’s massive investment in generative AI features for its Bing search engine has failed to make much of a dent in Google’s market share over the last year or so. And there has reportedly been limited uptake for Google’s experimental opt-in “Search Generative Experience” (SGE), which adds chatbot responses above the usual set of links in response to a search query.

“SGE never feels like a useful addition to Google Search,” Ars’ Ron Amadeo wrote last month. “Google Search is a tool, and just as a screwdriver is not a hammer, I don’t want a chatbot in a search engine.”

Regardless, the current tech industry mania surrounding anything and everything related to generative AI may make Google feel it has to integrate the technology into some sort of “premium” search product sooner rather than later. For now, FT reports that Google hasn’t made a final decision on whether to implement the paid AI search plan, even as Google engineers work on the backend technology necessary to launch such a service.

Google also faces AI-related difficulties on the other side of the search divide. Last month, the company announced it was redoubling its efforts to limit the appearance of “spammy, low-quality content”—much of it generated by AI chatbots—in its search results.

In February, Google shut down the image generation features of its Gemini AI model after the service was found inserting historically inaccurate examples of racial diversity into some of its prompt responses.



Copilot key is based on a button you probably haven’t seen since IBM’s Model M

Microsoft chatbot button —

Left-Shift + Windows key + F23

A Dell XPS 14 laptop. The Copilot key is to the right of the right-Alt button.

In January, Microsoft introduced a new key to Windows PC keyboards for the first time in 30 years. The Copilot key, dedicated to launching Microsoft’s eponymous generative AI assistant, is already on some Windows laptops released this year. On Monday, Tom’s Hardware dug into the new addition and determined exactly what pressing the button does, which is actually pretty simple. Pushing a computer’s integrated Copilot button is like pressing left-Shift + Windows key + F23 simultaneously.

Tom’s Hardware confirmed this after wondering if the Copilot key introduced a new scan code to Windows or if it worked differently. Using the scripting program AutoHotkey on a new laptop with a Copilot button, Tom’s Hardware discovered which keystrokes are registered when a user presses the Copilot key. The publication confirmed with Dell that “this key assignment is standard for the Copilot key and done at Microsoft’s direction.”
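You can reproduce that kind of experiment without AutoHotkey. The sketch below uses the Python keyboard package instead (my substitution, not what Tom’s Hardware ran) to print the name and scan code of every keystroke; on a Copilot-equipped laptop, pressing the key should log the left-Shift, Windows, and F23 events.

```python
# Log every keyboard event to see what a key actually sends.
import keyboard  # pip install keyboard; needs elevated privileges

def log_event(event):
    # event_type is "down" or "up"; scan_code is the hardware-level code
    print(f"{event.event_type:>4}  name={event.name!r}  scan_code={event.scan_code}")

keyboard.hook(log_event)
keyboard.wait("esc")  # log everything until Esc is pressed
```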

F23

Surprising to see in that string of keys is F23. Having a computer keyboard with a function row or rows that take you from F1 all the way to F23 is quite rare today. When I try to imagine a keyboard that comes with an F23 button, vintage keyboards come to mind, more specifically buckling spring keyboards from IBM.

IBM’s Model F, which debuted in 1981 and used buckling spring switches over a capacitive PCB, and the Model M, which launched in 1985 and used buckling spring switches over a membrane sheet, both offered layouts with 122 keys. These layouts included not one, but two rows of function keys that would leave today’s 60 percent keyboard fans sweating over the wasted space.

But having 122 keys was helpful for keyboards tied to IBM business terminals. The keyboard layout even included a bank of keys to the left of the primary alpha block of keys for even more forms of input.

An IBM Model M keyboard with an F23 key.

The 122-key keyboard layout with F23 lives on. Beyond people who still swear by old Model F and M keyboards, Model F Labs and Unicomp both currently sell modern buckling spring keyboards with built-in F23 buttons. Another reason a modern Windows PC user might have access to an F23 key is if they use a macro pad.

But even with those uses in mind, the F23 key remains rare. That helps explain why Microsoft would use the key for launching Copilot; users are unlikely to have F23 programmed for other functions. This was also likely less work than making a key with an entirely new scan code.

The Copilot button is reprogrammable

When I previewed Dell’s 2024 XPS laptops, a Dell representative told me that the integrated Copilot key wasn’t reprogrammable. However, in addition to providing some interesting information about the newest PC key since the Windows button, Tom’s Hardware’s revelation shows why the Copilot key is actually reprogrammable, even if OEMs don’t give users a way to do so out of the box. (If you need help, check out the website’s tutorial for reprogramming the Windows Copilot key.)
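For a rough idea of what a remap can look like, here is a sketch with the same Python keyboard package; the chord string and the launched program are assumptions for illustration, and this is one possible approach rather than the method in Tom’s Hardware’s tutorial.

```python
# Intercept the Copilot key's chord and launch something else instead.
import subprocess
import keyboard  # pip install keyboard; needs elevated privileges

def launch_replacement():
    subprocess.Popen(["notepad.exe"])  # arbitrary example target

# suppress=True swallows the chord so Windows never sees it and
# Copilot never opens.
keyboard.add_hotkey("shift+windows+f23", launch_replacement, suppress=True)
keyboard.wait()  # keep the script alive
```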

I suspect there’s a strong interest in reprogramming that button. For one, generative AI, despite all its hype and potential, is still an emerging technology. Many don’t need or want access to any chatbot—let alone Microsoft’s—instantly or even at all. Those who don’t use their system with a Microsoft account have no use for the button, since being logged in to a Microsoft account is required for the button to launch Copilot.

A rendering of the Copilot button.

Microsoft

Additionally, there are other easy ways to launch Copilot on a computer that has the program downloaded, like double-clicking an icon or pressing Windows + C, that make a dedicated button unnecessary. (Ars Technica asked Microsoft why the Copilot key doesn’t just register Windows + C, but the company declined to comment. Windows + C has launched other apps in the past, including Cortana, so it’s possible that Microsoft wanted to avoid the Copilot key performing a different function when pressed on computers that use Windows images without Copilot.)

In general, shoehorning the Copilot key into Windows laptops seems premature. Copilot is young and still a preview; just a few months ago, it was called Bing Chat. Further, the future of generative AI, including its popularity and top uses, is still forming and could evolve substantially during the lifetime of a Windows laptop. Microsoft’s generative AI efforts could also flounder over the years. Imagine if Microsoft went all-in on Bing back in the day and made all Windows keyboards have a Bing button, for example. Just because Microsoft wants something to become mainstream doesn’t mean that it will.

This all has made the Copilot button seem more like a way to force the adoption of Microsoft’s chatbot than a way to improve Windows keyboards. Microsoft has also made the Copilot button a requirement for its AI PC certification (which also requires an integrated neural processing unit and having Copilot pre-installed). Microsoft plans to make Copilot keys a requirement for Windows 11 OEM PCs eventually, it told Ars Technica in January.

At least for now, the basic way that the Copilot button works means you can turn the key into something more useful. Now, the tricky part would be finding a replacement keycap to eradicate Copilot’s influence from your keyboard.

Listing image by Microsoft



Billie Eilish, Pearl Jam, 200 artists say AI poses existential threat to their livelihoods

artificial music —

Artists say AI will “set in motion a race to the bottom that will degrade the value of our work.”

Billie Eilish attends the 2024 Vanity Fair Oscar Party hosted by Radhika Jones at the Wallis Annenberg Center for the Performing Arts on March 10, 2024, in Beverly Hills, California.

On Tuesday, the Artist Rights Alliance (ARA) announced an open letter critical of AI signed by over 200 musical artists, including Pearl Jam, Nicki Minaj, Billie Eilish, Stevie Wonder, Elvis Costello, and the estate of Frank Sinatra. In the letter, the artists call on AI developers, technology companies, platforms, and digital music services to stop using AI to “infringe upon and devalue the rights of human artists.” A tweet from the ARA added that AI poses an “existential threat” to their art.

Visual artists began protesting the advent of generative AI after the rise of the first mainstream AI image generators in 2022, and as generative AI research has since been applied to other forms of creative media, the protest has extended to professionals in other creative domains, such as writers, actors, filmmakers—and now musicians.

“When used irresponsibly, AI poses enormous threats to our ability to protect our privacy, our identities, our music and our livelihoods,” the open letter states. It alleges that some of the “biggest and most powerful” companies (unnamed in the letter) are using the work of artists without permission to train AI models, with the aim of replacing human artists with AI-created content.

A list of musical artists that signed the ARA open letter against generative AI.

In January, Billboard reported that AI research taking place at Google DeepMind had trained an unnamed music-generating AI on a large dataset of copyrighted music without seeking artist permission. That report may have been referring to Google’s Lyria, an AI-generation model announced in November that the company positioned as a tool for enhancing human creativity. The tech has since powered musical experiments from YouTube.

We’ve previously covered AI music generators that seemed fairly primitive throughout 2022 and 2023, such as Riffusion, Google’s MusicLM, and Stability AI’s Stable Audio. We’ve also covered open source musical voice-cloning technology that is frequently used to make musical parodies online. While we have yet to see an AI model that can generate perfect, fully composed high-quality music on demand, the quality of outputs from music synthesis models has been steadily improving over time.

In considering AI’s potential impact on music, it’s instructive to remember historical instances where tech innovations initially sparked concern among artists. For instance, the introduction of synthesizers in the 1960s and 1970s and the advent of digital sampling in the 1980s both faced scrutiny and fear from parts of the music community, but the music industry eventually adjusted.

While we’ve seen fear of the unknown related to AI going around quite a bit for the past year, it’s possible that AI tools will be integrated into the music production process like any other music production tool or technique that came before. It’s also possible that even if that kind of integration comes to pass, some artists will still get hurt along the way—and the ARA wants to speak out about it before the technology progresses further.

“Race to the bottom”

The Artists Rights Alliance is a nonprofit advocacy group that describes itself as an “alliance of working musicians, performers, and songwriters fighting for a healthy creative economy and fair treatment for all creators in the digital world.”

The signers of the ARA’s open letter say they acknowledge the potential of AI to advance human creativity when used responsibly, but they also claim that replacing artists with generative AI would “substantially dilute the royalty pool” paid out to artists, which could be “catastrophic” for many working musicians, artists, and songwriters who are trying to make ends meet.

In the letter, the artists say that unchecked AI will set in motion a race to the bottom that will degrade the value of their work and prevent them from being fairly compensated. “This assault on human creativity must be stopped,” they write. “We must protect against the predatory use of AI to steal professional artists’ voices and likenesses, violate creators’ rights, and destroy the music ecosystem.”

The emphasis on the word “human” in the letter is notable (“human artist” appears twice, and “human creativity” and “human artistry” once each) because it suggests the clear distinction the artists are drawing between the work of humans and the output of AI systems. It implies recognition that we’ve entered a new era where not all creative output is made by people.

The letter concludes with a call to action, urging all AI developers, technology companies, platforms, and digital music services to pledge not to develop or deploy AI music-generation technology, content, or tools that undermine or replace the human artistry of songwriters and artists or deny them fair compensation for their work.

While it’s unclear whether companies will meet those demands, so far, protests from visual artists have not stopped development of ever-more advanced image-synthesis models. On Threads, frequent AI industry commentator Dare Obasanjo wrote, “Unfortunately this will be as effective as writing an open letter to stop the sun from rising tomorrow.”



Apple wouldn’t let Jon Stewart interview FTC Chair Lina Khan, TV host claims

The Problem with Jon Stewart —

Tech company also didn’t want a segment on Stewart’s show criticizing AI.

The Daily Show host Jon Stewart’s interview with FTC Chair Lina Khan. The conversation about Apple begins around 16:30 in the video.

Before the cancellation of The Problem with Jon Stewart on Apple TV+, Apple forbade the inclusion of Federal Trade Commission Chair Lina Khan as a guest and steered the show away from confronting issues related to artificial intelligence, according to Jon Stewart.

This isn’t the first we’ve heard of this rift between Apple and Stewart. When the Apple TV+ show was canceled last October, reports circulated that he told his staff that creative differences over guests and topics were a factor in the decision.

The New York Times reported that both China and AI were sticking points between Apple and Stewart. Stewart confirmed the broad strokes of that narrative in a CBS Morning Show interview after it was announced that he would return to The Daily Show.

“They decided that they felt that they didn’t want me to say things that might get me into trouble,” he explained.

Stewart’s comments during his interview with Khan yesterday were the first time he’s gotten more specific publicly.

“I’ve got to tell you, I wanted to have you on a podcast, and Apple asked us not to do it—to have you. They literally said, ‘Please don’t talk to her,'” Stewart said while interviewing Khan on the April 1, 2024, episode of The Daily Show.

Khan appeared on the show to explain and evangelize the FTC’s efforts to battle corporate monopolies both in and outside the tech industry in the US and to explain the challenges the organization faces.

She became the FTC chair in 2021 and has since garnered a reputation for an aggressive and critical stance against monopolistic tendencies or practices among Big Tech companies like Amazon and Meta.

Stewart also confirmed previous reports that AI was a sensitive topic for Apple. “They wouldn’t let us do that dumb thing we did in the first act on AI,” he said, referring to the desk monologue segment that preceded the Khan interview in the episode.

The segment on AI in the first act of the episode mocked various tech executives for their utopian framing of AI and interspersed those claims with acknowledgments from many of the same leaders that AI would replace many people’s jobs. (It did not mention Apple or its leadership, though.)

Stewart and The Daily Show‘s staff also included clips of current tech leaders suggesting that workers be retrained to work with or on AI when their current roles are disrupted by it. That was followed by a montage of US political leaders promising to retrain workers after various technological and economic disruptions over the years, with the implication that those retraining efforts were rarely as successful as promised.

The segment effectively lampooned some of the doublespeak about AI, though Stewart stopped short of venturing any solutions or alternatives to the current path, so it mostly just prompted outrage and laughs.

The Daily Show host Jon Stewart’s segment criticizing tech and political leaders on the topic of AI.

Apple currently uses AI-related technologies in its software, services, and devices, but so far it has not launched anything tapping into generative AI, which is the new frontier in AI that has attracted worry, optimism, and criticism from various parties.

However, the company is expected to roll out its first generative AI features as part of iOS 18, a new operating system update for iPhones. iOS 18 will likely be detailed during Apple’s annual developer conference in June and will reach users’ devices sometime in the fall.

Listing image by Paramount



OpenAI drops login requirements for ChatGPT’s free version

free as in beer? —

ChatGPT 3.5 still falls far short of GPT-4, and other models surpassed it long ago.

A glowing OpenAI logo on a blue background.

Benj Edwards

On Monday, OpenAI announced that visitors to the ChatGPT website in some regions can now use the AI assistant without signing in. Previously, the company required users to create an account to use it, even with the free version of ChatGPT, which is currently powered by the GPT-3.5 AI language model. But as we have noted in the past, GPT-3.5 is widely known to provide less accurate information than GPT-4 Turbo, the model available in paid versions of ChatGPT.

Since its launch in November 2022, ChatGPT has transformed over time from a tech demo to a comprehensive AI assistant, and it’s always had a free version available. The cost is free because “you’re the product,” as the old saying goes. Using ChatGPT helps OpenAI gather data that will help the company train future AI models, although free users and ChatGPT Plus subscription members can both opt out of allowing the data they input into ChatGPT to be used for AI training. (OpenAI says it never trains on inputs from ChatGPT Team and Enterprise members at all).

Opening ChatGPT to everyone could provide a frictionless on-ramp for people who might use it as a substitute for Google Search, and it could gain OpenAI new customers by providing an easy way for people to try ChatGPT quickly, then offering an upsell to paid versions of the service.

“It’s core to our mission to make tools like ChatGPT broadly available so that people can experience the benefits of AI,” OpenAI says on its blog page. “For anyone that has been curious about AI’s potential but didn’t want to go through the steps to set up an account, start using ChatGPT today.”

When you visit the ChatGPT website, you’re immediately presented with a chat box like this (in some regions). Screenshot captured April 1, 2024.

Benj Edwards

Since kids will also be able to use ChatGPT without an account—despite that being against the terms of service—OpenAI also says it’s introducing “additional content safeguards,” such as blocking more prompts and “generations in a wider range of categories.” OpenAI has not elaborated on what exactly that entails, but we have reached out to the company for comment.

There might be a few other downsides to the fully open approach. On X, AI researcher Simon Willison wrote about the potential for automated abuse as a way to get around paying for OpenAI’s services: “I wonder how their scraping prevention works? I imagine the temptation for people to abuse this as a free 3.5 API will be pretty strong.”

With fierce competition, more GPT-3.5 access may backfire

Willison also mentioned a common criticism of OpenAI (as voiced in this case by Wharton professor Ethan Mollick) that people’s ideas about what AI models can do have so far largely been influenced by GPT-3.5, which, as we mentioned, is far less capable and far more prone to making things up than the paid version of ChatGPT that uses GPT-4 Turbo.

“In every group I speak to, from business executives to scientists, including a group of very accomplished people in Silicon Valley last night, much less than 20% of the crowd has even tried a GPT-4 class model,” wrote Mollick in a tweet from early March.

With models like Google Gemini Pro 1.5 and Anthropic Claude 3 potentially surpassing OpenAI’s best proprietary model at the moment—and open weights AI models eclipsing the free version of ChatGPT—allowing people to use GPT-3.5 might not be putting OpenAI’s best foot forward. Microsoft Copilot, powered by OpenAI models, also supports a frictionless, no-login experience, and it allows access to a model based on GPT-4. Gemini, by contrast, currently requires a sign-in, and Anthropic sends a login code through email.

For now, OpenAI says the login-free version of ChatGPT is not yet available to everyone, but it will be coming soon: “We’re rolling this out gradually, with the aim to make AI accessible to anyone curious about its capabilities.”



OpenAI shows off Sora AI video generator to Hollywood execs

No lights, no camera, action —

CEO Sam Altman met with Universal, Paramount, and Warner Bros Discovery.

A robotic intelligence works as a cameraman (3D rendering).

OpenAI has launched a charm offensive in Hollywood, holding meetings with major studios including Paramount, Universal, and Warner Bros Discovery to showcase its video generation technology Sora and allay fears the artificial intelligence model will harm the movie industry.

Chief Executive Sam Altman and Chief Operating Officer Brad Lightcap gave presentations to executives from the film industry giants, said multiple people with knowledge of the meetings, which took place in recent days.

Altman and Lightcap showed off Sora, a new generative AI model that can create detailed videos from simple written prompts.

The technology first gained Hollywood’s attention after OpenAI published a selection of videos produced by the model last month. The clips quickly went viral online and have led to debate over the model’s potential impact on the creative industries.

“Sora is causing enormous excitement,” said media analyst Claire Enders. “There is a sense it is going to revolutionize the making of movies and bring down the cost of production and reduce the demand for [computer-generated imagery] very strongly.”

AI-generated video of a cat and human, generated via video generation model Sora.

Those involved in the meetings said OpenAI was seeking input from the film bosses on how Sora should be rolled out. Some who watched the demonstrations said they could see how Sora or similar AI products could save time and money on production but added the technology needed further development.

OpenAI’s overtures to the studios come at a delicate moment in Hollywood. Last year’s monthslong strikes ended with the Writers Guild of America and the Screen Actors Guild securing groundbreaking protections from AI in their contracts. This year, contract negotiations are underway with the International Alliance of Theatrical Stage Employees—and AI is again expected to be a hot-button issue.

Earlier this week, OpenAI released new Sora videos generated by a number of visual artists and directors, including short films, along with their impressions of the technology. The model will aim to compete with several text-to-video services from start-ups, including Runway, Pika, and Stability AI, whose offerings are already available for commercial use.

An AI-generated video from Sora of a dog.

However, Sora has not been widely released. OpenAI has held off announcing a launch date or the circumstances under which it will be available. One person with knowledge of its strategy said the company was deciding how to commercialize the technology. Another person said there were safety steps still to take before the company considered putting Sora into a product.

OpenAI is also working to improve the system. Currently, Sora can only make videos under one minute in length, and its creations have limitations, such as glass bouncing off the floor instead of shattering, or people and animals rendered with extra limbs.

Some studios appeared open to using Sora in filmmaking or TV production in the future, but licensing and partnerships have not yet been discussed, said people involved in the talks.

“There have been no meetings with OpenAI about partnerships,” one studio executive said. “They’ve done demos, just like Apple has been demo-ing the Vision Pro [mixed-reality headset]. They’re trying to get people excited.”

OpenAI has been previewing the model in a “very controlled manner” to “industries that are likely to be impacted first,” said one person close to OpenAI.

Media analyst Enders said the reception from the movie industry had been broadly optimistic on Sora as it is “seen completely as a cost-saving element, rather than impacting the creative ethos of storytelling.”

OpenAI declined to comment.

An AI-generated video from Sora of a woman walking down a Tokyo street.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Playboy image from 1972 gets ban from IEEE computer journals

image processing —

Use of “Lenna” image in computer image processing research stretches back to the 1970s.

Aurich Lawson | Getty Images

On Wednesday, the IEEE Computer Society announced to members that, after April 1, it would no longer accept papers that include a frequently used image of a 1972 Playboy model named Lena Forsén. The so-called “Lenna image” (Forsén added an extra “n” to her name for her Playboy appearance to aid pronunciation) has been used in image processing research since 1973 and has attracted criticism for making some women feel unwelcome in the field.

In an email from the IEEE Computer Society sent to members on Wednesday, Technical & Conference Activities Vice President Terry Benzel wrote, “IEEE’s diversity statement and supporting policies such as the IEEE Code of Ethics speak to IEEE’s commitment to promoting an inclusive and equitable culture that welcomes all. In alignment with this culture and with respect to the wishes of the subject of the image, Lena Forsén, IEEE will no longer accept submitted papers which include the ‘Lena image.'”

An uncropped version of the 512×512-pixel test image originally appeared as the centerfold picture for the December 1972 issue of Playboy Magazine. Usage of the Lenna image in image processing began in June or July 1973, when an assistant professor named Alexander Sawchuk and a graduate student at the University of Southern California’s Signal and Image Processing Institute scanned a square portion of the centerfold image with a primitive drum scanner, omitting the nudity present in the original. They scanned it for a colleague’s conference paper, and after that, others began to use the image as well.

The original 512×512 “Lenna” test image, which is a cropped portion of a 1972 Playboy centerfold.

The image’s use spread in other papers throughout the 1970s, 80s, and 90s, and it caught Playboy’s attention, but the company decided to overlook the copyright violations. In 1997, Playboy helped track down Forsén, who appeared at the 50th Annual Conference of the Society for Imaging Science and Technology, signing autographs for fans. “They must be so tired of me … looking at the same picture for all these years!” she said at the time. Playboy’s VP of new media, Eileen Kent, told Wired, “We decided we should exploit this, because it is a phenomenon.”

The image, which features Forsén’s face and bare shoulder as she wears a hat with a purple feather, was reportedly ideal for testing image processing systems in the early years of digital image technology due to its high contrast and varied detail. It is also a sexually suggestive photo of an attractive woman, and its use by men in the computer field has garnered criticism over the decades, especially from female scientists and engineers who felt that the image (especially related to its association with the Playboy brand) objectified women and created an academic climate where they did not feel entirely welcome.
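Those two qualities, contrast and detail, are easy to quantify. Here is a short sketch that scores a candidate test image on both; the file path and the specific metrics are my illustrative choices, not criteria used by IEEE or the original researchers.

```python
# Score a grayscale test image on contrast and detail.
import numpy as np
from PIL import Image

# Placeholder path: any image that can be converted to 8-bit grayscale.
img = np.asarray(Image.open("test_image.png").convert("L"), dtype=float)

contrast = img.std()  # spread of intensities; higher means more contrast

counts, _ = np.histogram(img, bins=256, range=(0, 256))
p = counts[counts > 0] / counts.sum()
entropy = -(p * np.log2(p)).sum()  # histogram entropy in bits (max 8)

print(f"contrast (std dev): {contrast:.1f}")
print(f"histogram entropy:  {entropy:.2f} of 8 bits")
```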

Due to some of this criticism, which dates back to at least 1996, the journal Nature banned the use of the Lena image in paper submissions in 2018.

The comp.compression Usenet newsgroup FAQ document claims that in 1988, a Swedish publication asked Forsén if she minded her image being used in computer science, and she was reportedly pleasantly amused. In a 2019 Wired article, Linda Kinstler wrote that Forsén did not harbor resentment about the image, but she regretted that she wasn’t paid better for it originally. “I’m really proud of that picture,” she told Kinstler at the time.

Since then, Forsén has apparently changed her mind. In 2019, Creatable and Code Like a Girl created an advertising documentary titled Losing Lena, which was part of a promotional campaign aimed at removing the Lena image from use in tech and the image processing field. In a press release for the campaign and film, Forsén is quoted as saying, “I retired from modelling a long time ago. It’s time I retired from tech, too. We can make a simple change today that creates a lasting change for tomorrow. Let’s commit to losing me.”

It seems that wish is now being granted. The ban in IEEE publications, which have historically been important journals for computer imaging development, will likely further the push to remove the Lenna image from common use. In his email, the IEEE’s Benzel recommended wider vigilance on the issue, writing, “In order to raise awareness of and increase author compliance with this new policy, program committee members and reviewers should look for inclusion of this image, and if present, should ask authors to replace the Lena image with an alternative.”



NYC’s government chatbot is lying about city laws and regulations

Close enough for government work? —

You can be evicted for not paying rent, despite what the “MyCity” chatbot says.

Has a government employee checked all those zeroes and ones floating above the skyline?

If you follow generative AI news at all, you’re probably familiar with LLM chatbots’ tendency to “confabulate” incorrect information while presenting that information as authoritatively true. That tendency seems poised to cause some serious problems now that a chatbot run by the New York City government is making up incorrect answers to some important questions of local law and municipal policy.

NYC’s “MyCity” ChatBot launched as a “pilot” program last October. The announcement touted the ChatBot as a way for business owners to “save … time and money by instantly providing them with actionable and trusted information from more than 2,000 NYC Business webpages and articles on topics such as compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines.”

But a new report from The Markup and local nonprofit news site The City found the MyCity chatbot giving dangerously wrong information about some pretty basic city policies. To cite just one example, the bot said that NYC buildings “are not required to accept Section 8 vouchers,” when an NYC government info page says clearly that Section 8 housing subsidies are one of many lawful sources of income that landlords are required to accept without discrimination. The Markup also received incorrect information in response to chatbot queries regarding worker pay and work hour regulations, as well as industry-specific information like funeral home pricing.

Welcome news for people who think the rent is too damn high, courtesy of the MyCity chatbot.

Further testing from Bluesky user Kathryn Tewson shows the MyCity chatbot giving some dangerously wrong answers regarding the treatment of workplace whistleblowers, as well as some hilariously bad answers regarding the need to pay rent.

This is going to keep happening

The result isn’t too surprising if you dig into the token-based predictive models that power these kinds of chatbots. MyCity’s Microsoft Azure-powered chatbot uses a complex process of statistical associations across millions of tokens to essentially guess at the most likely next word in any given sequence, without any real understanding of the underlying information being conveyed.
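You can watch that guessing process directly with a small open model. The sketch below uses Hugging Face’s transformers library with GPT-2 as a stand-in; MyCity’s Azure-hosted model is far larger, but the next-token mechanism is the same.

```python
# Show a language model's top guesses for the next token of a prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Landlords in New York City are required to"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(logits, dim=-1)

# Ranked purely by statistical association, not by legal accuracy.
top = probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(int(idx))!r}: {p.item():.1%}")
```

Nothing in that loop consults New York’s actual housing rules; the model simply ranks continuations by how common they were in its training data.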

That can cause problems when a single factual answer to a question might not be reflected precisely in the training data. In fact, The Markup said that at least one of its tests resulted in the correct answer on the same query about accepting Section 8 housing vouchers (even as “ten separate Markup staffers” got the incorrect answer when repeating the same question).

The MyCity Chatbot—which is prominently labeled as a “Beta” product—tells users who bother to read the warnings that it “may occasionally produce incorrect, harmful or biased content” and that users should “not rely on its responses as a substitute for professional advice.” But the page also states front and center that it is “trained to provide you official NYC Business information” and is being sold as a way “to help business owners navigate government.”

Andrew Rigie, executive director of the NYC Hospitality Alliance, told The Markup that he had encountered inaccuracies from the bot himself and had received reports of the same from at least one local business owner. But NYC Office of Technology and Innovation Spokesperson Leslie Brown told The Markup that the bot “has already provided thousands of people with timely, accurate answers” and that “we will continue to focus on upgrading this tool so that we can better support small businesses across the city.”

NYC Mayor Eric Adams touts the MyCity chatbot in an October announcement event.

The Markup’s report highlights the danger of governments and corporations rolling out chatbots to the public before their accuracy and reliability have been fully vetted. Last month, a court forced Air Canada to honor a fraudulent refund policy invented by a chatbot available on its website. A recent Washington Post report found that chatbots integrated into major tax preparation software provide “random, misleading, or inaccurate … answers” to many tax queries. And some crafty prompt engineers have reportedly been able to trick car dealership chatbots into accepting a “legally binding offer – no take backsies” for a $1 car.

These kinds of issues are already leading some companies away from more generalized LLM-powered chatbots and toward more specifically trained Retrieval-Augmented Generation models, which have been tuned only on a small set of relevant information. That kind of focus could become that much more important if the FTC is successful in its efforts to make chatbots liable for “false, misleading, or disparaging” information.



OpenAI holds back wide release of voice-cloning tech due to misuse concerns


Voice synthesis has come a long way since 1978’s Speak & Spell toy, which once wowed people with its state-of-the-art ability to read words aloud using an electronic voice. Now, using deep-learning AI models, software can not only create realistic-sounding voices but also convincingly imitate existing ones from small samples of audio.

Along those lines, OpenAI just announced Voice Engine, a text-to-speech AI model for creating synthetic voices based on a 15-second segment of recorded audio. The company has provided audio samples of Voice Engine in action on its website.

Once a voice is cloned, a user can input text into the Voice Engine and get an AI-generated voice result. But OpenAI is not ready to widely release its technology yet. The company initially planned to launch a pilot program for developers to sign up for the Voice Engine API earlier this month. But after more consideration about ethical implications, the company decided to scale back its ambitions for now.

“In line with our approach to AI safety and our voluntary commitments, we are choosing to preview but not widely release this technology at this time,” the company writes. “We hope this preview of Voice Engine both underscores its potential and also motivates the need to bolster societal resilience against the challenges brought by ever more convincing generative models.”

Voice cloning tech in general is not particularly new—we’ve covered several AI voice synthesis models since 2022, and the tech is active in the open source community with packages like OpenVoice and XTTSv2. But the idea that OpenAI is inching toward letting anyone use their particular brand of voice tech is notable. And in some ways, the company’s reticence to release it fully might be the bigger story.
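To give a sense of how low the barrier already is, here is a sketch of voice cloning with the open source Coqui TTS package and the XTTS v2 model mentioned above; the audio file paths are placeholders.

```python
# Clone a voice from a short reference clip and speak new text with it.
from TTS.api import TTS  # pip install TTS (Coqui)

# Download and load the XTTS v2 multilingual voice-cloning model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="This voice was cloned from a short reference sample.",
    speaker_wav="reference_voice.wav",  # placeholder: the voice to imitate
    language="en",
    file_path="cloned_output.wav",      # placeholder output path
)
```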

OpenAI says that benefits of its voice technology include providing reading assistance through natural-sounding voices, enabling global reach for creators by translating content while preserving native accents, supporting non-verbal individuals with personalized speech options, and assisting patients in recovering their own voice after speech-impairing conditions.

But it also means that anyone with 15 seconds of someone’s recorded voice could effectively clone it, and that has obvious implications for potential misuse. Even if OpenAI never widely releases its Voice Engine, the ability to clone voices has already caused trouble in society through phone scams where someone imitates a loved one’s voice and election campaign robocalls featuring cloned voices from politicians like Joe Biden.

Also, researchers and reporters have shown that voice-cloning technology can be used to break into bank accounts that use voice authentication (such as Chase’s Voice ID), which prompted Sen. Sherrod Brown (D-Ohio), the chairman of the US Senate Committee on Banking, Housing, and Urban Affairs, to send a letter to the CEOs of several major banks in May 2023 to inquire about the security measures banks are taking to counteract AI-powered risks.
