AI

labor-dispute-erupts-over-ai-voiced-darth-vader-in-fortnite

Labor dispute erupts over AI-voiced Darth Vader in Fortnite

For voice actors who previously portrayed Darth Vader in video games, the Fortnite feature starkly illustrates how AI voice synthesis could reshape their profession. While James Earl Jones created the iconic voice for films, at least 54 voice actors have performed as Vader in various media games over the years when Jones wasn’t available—work that could vanish if AI replicas become the industry standard.

The union strikes back

SAG-AFTRA’s labor complaint (which can be read online here) doesn’t focus on the AI feature’s technical problems or on permission from the Jones estate, which explicitly authorized the use of a synthesized version of his voice for the character in Fortnite. The late actor, who died in 2024, had signed over his Darth Vader voice rights before his death.

Instead, the union’s grievance centers on labor rights and collective bargaining. In the NLRB filing, SAG-AFTRA alleges that Llama Productions “failed and refused to bargain in good faith with the union by making unilateral changes to terms and conditions of employment, without providing notice to the union or the opportunity to bargain, by utilizing AI-generated voices to replace bargaining unit work on the Interactive Program Fortnite.”

The action comes amid SAG-AFTRA’s ongoing interactive media strike, which began in July 2024 after negotiations with video game producers stalled primarily over AI protections. The strike continues, with more than 100 games signing interim agreements, while others, including those from major publishers like Epic, remain in dispute.

Labor dispute erupts over AI-voiced Darth Vader in Fortnite Read More »

new-orleans-called-out-for-sketchiest-use-of-facial-recognition-yet-in-the-us

New Orleans called out for sketchiest use of facial recognition yet in the US

According to police records submitted to the city council, the network “only proved useful in a single case.” Investigating the tension between these claims, the Post suggested we may never know how many suspects were misidentified or what steps police took to ensure responsible use of the controversial live feeds.

In the US, New Orleans stands out for taking a step further than law enforcement in other regions by using live feeds from facial recognition cameras to make immediate arrests, the Post noted. The Security Industry Association told the Post that four states—Maryland, Montana, Vermont, and Virginia—and 19 cities nationwide “explicitly bar” the practice.

Lagarde told the Post that police cannot “directly” search for suspects on the camera network or add suspects to the watchlist in real time. Reese Harper, an NOPD spokesperson, told the Post that his department “does not own, rely on, manage, or condone the use by members of the department of any artificial intelligence systems associated with the vast network of Project Nola crime cameras.”

In a federally mandated 2023 audit, New Orleans police complained that complying with the ordinance took too long and “often” resulted in no matches. That could mean the tech is flawed, or it could be a sign that the process was working as intended to prevent wrongful arrests.

The Post noted that in total, “at least eight Americans have been wrongfully arrested due to facial recognition,” as both police and AI software rushing arrests are prone to making mistakes.

“By adopting this system–in secret, without safeguards, and at tremendous threat to our privacy and security–the City of New Orleans has crossed a thick red line,” Wessler said. “This is the stuff of authoritarian surveillance states and has no place in American policing.”

Project Nola did not immediately respond to Ars’ request to comment.

New Orleans called out for sketchiest use of facial recognition yet in the US Read More »

spotify-caught-hosting-hundreds-of-fake-podcasts-that-advertise-selling-drugs

Spotify caught hosting hundreds of fake podcasts that advertise selling drugs

This week, Spotify rushed to remove hundreds of obviously fake podcasts found to be marketing prescription drugs in violation of Spotify’s policies and, likely, federal law.

On Thursday, Business Insider (BI) reported that Spotify removed 200 podcasts advertising the sale of opioids and other drugs, but that wasn’t the end of the scandal. Today, CNN revealed that it easily uncovered dozens more fake podcasts peddling drugs.

Some of the podcasts may have raised a red flag for a human moderator—with titles like “My Adderall Store” or “Xtrapharma.com” and episodes titled “Order Codeine Online Safe Pharmacy Louisiana” or “Order Xanax 2 mg Online Big Deal On Christmas Season,” CNN reported.

But Spotify’s auto-detection did not flag the fake podcasts for removal. Some of them remained up for months, CNN reported, which could create trouble for the music streamer at a time when the US government is cracking down on illegal drug sales online.

“Multiple teens have died of overdoses from pills bought online,” CNN noted, sparking backlash against tech companies. And Donald Trump’s aggressive tariffs were specifically raised to stop deadly drugs from bombarding the US, which the president declared a national emergency.

BI found that many podcast episodes featured a computerized voice and were under a minute long, while CNN noted some episodes were as short as 10 seconds. Some of them didn’t contain any audio at all, BI reported.

Spotify caught hosting hundreds of fake podcasts that advertise selling drugs Read More »

the-empire-strikes-back-with-f-bombs:-ai-darth-vader-goes-rogue-with-profanity,-slurs

The empire strikes back with F-bombs: AI Darth Vader goes rogue with profanity, slurs

In that sense, the vulgar Vader situation creates a touchy dilemma for Epic Games and Disney, which likely invested substantially in this high-profile collaboration. While Epic acted swiftly in response, maintaining the feature while preventing further Jedi mind tricks from players presents ongoing technical challenges for interactive AI speech of any kind.

An AI language model like the one used for constructing responses for Vader (Google’s Gemini 2.0 Flash in this case, according to Epic) are fairly easy to trick with exploits like prompt injections and jailbreaks, and that has limited their usefulness in some applications. Imagine a truly ChatGPT-like Siri or Alexa, for example, that could be tricked into saying racist things on behalf of Apple or Amazon.

David Prowse as Darth Vader and Carrie Fisher as Princess Leia filming the original Star Wars. Credit: Sunset Boulevard/Corbis via Getty Images

Beyond language models, the AI voice technology behind the AI Darth Vader voice in Fortnite comes from ElevenLabs’ Flash v2.5 model, trained on examples of speech from James Earl Jones so it can synthesize new speech in the same style.

Previously, Lucasfilm worked with a Ukrainian startup we covered in 2022 on Obi-Wan Kenobi to recreate Darth Vader’s voice performance using a different AI voice model called Respeecher, which isn’t used in Fortnite.

According to Variety, Jones’ family supported the new Fortnite collaboration, stating: “James Earl felt that the voice of Darth Vader was inseparable from the story of Star Wars, and he always wanted fans of all ages to continue to experience it. We hope that this collaboration with Fortnite will allow both longtime fans of Darth Vader and newer generations to share in the enjoyment of this iconic character.”

This article was updated on May 16, 2025 at 4: 25 PM to include information about an email sent out from Epic Games to parents. This Article was updated again on May 17, 2025 at 10: 10 AM to correctly attribute ElevenLabs Flash v2.5 as the source of the Darth Vader audio model in Fortnite. The article previously incorrectly stated that Respeecher had been used for the game.

The empire strikes back with F-bombs: AI Darth Vader goes rogue with profanity, slurs Read More »

google-to-give-app-devs-access-to-gemini-nano-for-on-device-ai

Google to give app devs access to Gemini Nano for on-device AI

The rapid expansion of generative AI has changed the way Google and other tech giants design products, but most of the AI features you’ve used are running on remote servers with a ton of processing power. Your phone has a lot less power, but Google appears poised to give developers some important new mobile AI tools. At I/O next week, Google will likely announce a new set of APIs to let developers leverage the capabilities of Gemini Nano for on-device AI.

Google has quietly published documentation on big new AI features for developers. According to Android Authority, an update to the ML Kit SDK will add API support for on-device generative AI features via Gemini Nano. It’s built on AI Core, similar to the experimental Edge AI SDK, but it plugs into an existing model with a set of predefined features that should be easy for developers to implement.

Google says ML Kit’s GenAI APIs will enable apps to do summarization, proofreading, rewriting, and image description without sending data to the cloud. However, Gemini Nano doesn’t have as much power as the cloud-based version, so expect some limitations. For example, Google notes that summaries can only have a maximum of three bullet points, and image descriptions will only be available in English. The quality of outputs could also vary based on the version of Gemini Nano on a phone. The standard version (Gemini Nano XS) is about 100MB in size, but Gemini Nano XXS as seen on the Pixel 9a is a quarter of the size. It’s text-only and has a much smaller context window.

Not all versions of Gemini Nano are created equal.

Credit: Ryan Whitwam

Not all versions of Gemini Nano are created equal. Credit: Ryan Whitwam

This move is good for Android in general because ML Kit works on devices outside Google’s Pixel line. While Pixel devices use Gemini Nano extensively, several other phones are already designed to run this model, including the OnePlus 13, Samsung Galaxy S25, and Xiaomi 15. As more phones add support for Google’s AI model, developers will be able to target those devices with generative AI features.

Google to give app devs access to Gemini Nano for on-device AI Read More »

openai-introduces-codex,-its-first-full-fledged-ai-agent-for-coding

OpenAI introduces Codex, its first full-fledged AI agent for coding

We’ve been expecting it for a while, and now it’s here: OpenAI has introduced an agentic coding tool called Codex in research preview. The tool is meant to allow experienced developers to delegate rote and relatively simple programming tasks to an AI agent that will generate production-ready code and show its work along the way.

Codex is a unique interface (not to be confused with the Codex CLI tool introduced by OpenAI last month) that can be reached from the side bar in the ChatGPT web app. Users enter a prompt and then click either “code” to have it begin producing code, or “ask” to have it answer questions and advise.

Whenever it’s given a task, that task is performed in a distinct container that is preloaded with the user’s codebase and is meant to accurately reflect their development environment.

To make Codex more effective, developers can include an “AGENTS.md” file in the repo with custom instructions, for example to contextualize and explain the code base or to communicate standardizations and style practices for the project—kind of a README.md but for AI agents rather than humans.

Codex is built on codex-1, a fine-tuned variation of OpenAI’s o3 reasoning model that was trained using reinforcement learning on a wide range of coding tasks to analyze and generate code, and to iterate through tests along the way.

OpenAI introduces Codex, its first full-fledged AI agent for coding Read More »

xai-says-an-“unauthorized”-prompt-change-caused-grok-to-focus-on-“white-genocide”

xAI says an “unauthorized” prompt change caused Grok to focus on “white genocide”

When analyzing social media posts made by others, Grok is given the somewhat contradictory instructions to “provide truthful and based insights [emphasis added], challenging mainstream narratives if necessary, but remain objective.” Grok is also instructed to incorporate scientific studies and prioritize peer-reviewed data but also to “be critical of sources to avoid bias.”

Grok’s brief “white genocide” obsession highlights just how easy it is to heavily twist an LLM’s “default” behavior with just a few core instructions. Conversational interfaces for LLMs in general are essentially a gnarly hack for systems intended to generate the next likely words to follow strings of input text. Layering a “helpful assistant” faux personality on top of that basic functionality, as most LLMs do in some form, can lead to all sorts of unexpected behaviors without careful additional prompting and design.

The 2,000+ word system prompt for Anthropic’s Claude 3.7, for instance, includes entire paragraphs for how to handle specific situations like counting tasks, “obscure” knowledge topics, and “classic puzzles.” It also includes specific instructions for how to project its own self-image publicly: “Claude engages with questions about its own consciousness, experience, emotions and so on as open philosophical questions, without claiming certainty either way.”

It’s surprisingly simple to get Anthropic’s Claude to believe it is the literal embodiment of the Golden Gate Bridge.

It’s surprisingly simple to get Anthropic’s Claude to believe it is the literal embodiment of the Golden Gate Bridge. Credit: Antrhopic

Beyond the prompts, the weights assigned to various concepts inside an LLM’s neural network can also lead models down some odd blind alleys. Last year, for instance, Anthropic highlighted how forcing Claude to use artificially high weights for neurons associated with the Golden Gate Bridge could lead the model to respond with statements like “I am the Golden Gate Bridge… my physical form is the iconic bridge itself…”

Incidents like Grok’s this week are a good reminder that, despite their compellingly human conversational interfaces, LLMs don’t really “think” or respond to instructions like humans do. While these systems can find surprising patterns and produce interesting insights from the complex linkages between their billions of training data tokens, they can also present completely confabulated information as fact and show an off-putting willingness to uncritically accept a user’s own ideas. Far from being all-knowing oracles, these systems can show biases in their actions that can be much harder to detect than Grok’s recent overt “white genocide” obsession.

xAI says an “unauthorized” prompt change caused Grok to focus on “white genocide” Read More »

fbi-warns-of-ongoing-scam-that-uses-deepfake-audio-to-impersonate-government-officials

FBI warns of ongoing scam that uses deepfake audio to impersonate government officials

The FBI is warning people to be vigilant of an ongoing malicious messaging campaign that uses AI-generated voice audio to impersonate government officials in an attempt to trick recipients into clicking on links that can infect their computers.

“Since April 2025, malicious actors have impersonated senior US officials to target individuals, many of whom are current or former senior US federal or state government officials and their contacts,” Thursday’s advisory from the bureau’s Internet Crime Complaint Center said. “If you receive a message claiming to be from a senior US official, do not assume it is authentic.”

Think you can’t be fooled? Think again.

The campaign’s creators are sending AI-generated voice messages—better known as deepfakes—along with text messages “in an effort to establish rapport before gaining access to personal accounts,” FBI officials said. Deepfakes use AI to mimic the voice and speaking characteristics of a specific individual. The differences between the authentic and simulated speakers are often indistinguishable without trained analysis. Deepfake videos work similarly.

One way to gain access to targets’ devices is for the attacker to ask if the conversation can be continued on a separate messaging platform and then successfully convince the target to click on a malicious link under the guise that it will enable the alternate platform. The advisory provided no additional details about the campaign.

The advisory comes amid a rise in reports of deepfaked audio and sometimes video used in fraud and espionage campaigns. Last year, password manager LastPass warned that it had been targeted in a sophisticated phishing campaign that used a combination of email, text messages, and voice calls to trick targets into divulging their master passwords. One part of the campaign included targeting a LastPass employee with a deepfake audio call that impersonated company CEO Karim Toubba.

In a separate incident last year, a robocall campaign that encouraged New Hampshire Democrats to sit out the coming election used a deepfake of then-President Joe Biden’s voice. A Democratic consultant was later indicted in connection with the calls. The telco that transmitted the spoofed robocalls also agreed to pay a $1 million civil penalty for not authenticating the caller as required by FCC rules.

FBI warns of ongoing scam that uses deepfake audio to impersonate government officials Read More »

report:-terrorists-seem-to-be-paying-x-to-generate-propaganda-with-grok

Report: Terrorists seem to be paying X to generate propaganda with Grok

Back in February, Elon Musk skewered the Treasury Department for lacking “basic controls” to stop payments to terrorist organizations, boasting at the Oval Office that “any company” has those controls.

Fast-forward three months, and now Musk’s social media platform X is suspected of taking payments from sanctioned terrorists and providing premium features that make it easier to raise funds and spread propaganda—including through X’s chatbot Grok. Groups seemingly benefiting from X include Houthi rebels, Hezbollah, and Hamas, as well as groups from Syria, Kuwait, and Iran. Some accounts have amassed hundreds of thousands of followers, paying to boost their reach while X seemingly looks the other way.

In a report released Thursday, the Tech Transparency Project (TTP) flagged popular accounts seemingly linked to US-sanctioned terrorists. Some of the accounts bear “ID verified” badges, suggesting that X may be going against its own policies that ban sanctioned terrorists from benefiting from its platform.

Even more troublingly, “several made use of revenue-generating features offered by X, including a button for tips,” the TTP reported.

On X, Premium subscribers pay $8 monthly or $84 annually, and Premium+ subscribers pay $40 monthly or $395 annually. Verified organizations pay X between $200 and $1,000 monthly, or up to $10,000 annually for access to Premium+. These subscriptions come with perks, allowing suspected terrorist accounts to share longer text and video posts, offer subscribers paid content, create communities, accept gifts, and amplify their propaganda.

Disturbingly, the TTP found that X’s chatbot Grok also appears to be helping to whitewash accounts linked to sanctioned terrorists.

In its report, the TTP noted that an account with the handle “hasmokaled”—which apparently belongs to “a key Hezbollah money exchanger,” Hassan Moukalled—at one point had a blue checkmark with 60,000 followers. While the Treasury Department has sanctioned Moukalled for propping up efforts “to continue to exploit and exacerbate Lebanon’s economic crisis,” clicking the Grok AI profile summary button seems to rely on Moukalled’s own posts and his followers’ impressions of his posts and therefore generated praise.

Report: Terrorists seem to be paying X to generate propaganda with Grok Read More »

xai’s-grok-suddenly-can’t-stop-bringing-up-“white-genocide”-in-south-africa

xAI’s Grok suddenly can’t stop bringing up “white genocide” in South Africa

Where could Grok have gotten these ideas?

The treatment of white farmers in South Africa has been a hobbyhorse of South African X owner Elon Musk for quite a while. In 2023, he responded to a video purportedly showing crowds chanting “kill the Boer, kill the White Farmer” with a post alleging South African President Cyril Ramaphosa of remaining silent while people “openly [push] for genocide of white people in South Africa.” Musk was posting other responses focusing on the issue as recently as Wednesday.

They are openly pushing for genocide of white people in South Africa. @CyrilRamaphosa, why do you say nothing?

— gorklon rust (@elonmusk) July 31, 2023

President Trump has long shown an interest in this issue as well, saying in 2018 that he was directing then Secretary of State Mike Pompeo to “closely study the South Africa land and farm seizures and expropriations and the large scale killing of farmers.” More recently, Trump granted “refugee” status to dozens of white Afrikaners, even as his administration ends protections for refugees from other countries

Former American Ambassador to South Africa and Democratic politician Patrick Gaspard posted in 2018 that the idea of large-scale killings of white South African farmers is a “disproven racial myth.”

In launching the Grok 3 model in February, Musk said it was a “maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically correct.” X’s “About Grok” page says that the model is undergoing constant improvement to “ensure Grok remains politically unbiased and provides balanced answers.”

But the recent turn toward unprompted discussions of alleged South African “genocide” has many questioning what kind of explicit adjustments Grok’s political opinions may be getting from human tinkering behind the curtain. “The algorithms for Musk products have been politically tampered with nearly beyond recognition,” journalist Seth Abramson wrote in one representative skeptical post. “They tweaked a dial on the sentence imitator machine and now everything is about white South Africans,” a user with the handle Guybrush Threepwood glibly theorized.

Representatives from xAI were not immediately available to respond to a request for comment from Ars Technica.

xAI’s Grok suddenly can’t stop bringing up “white genocide” in South Africa Read More »

openai-adds-gpt-4.1-to-chatgpt-amid-complaints-over-confusing-model-lineup

OpenAI adds GPT-4.1 to ChatGPT amid complaints over confusing model lineup

The release comes just two weeks after OpenAI made GPT-4 unavailable in ChatGPT on April 30. That earlier model, which launched in March 2023, once sparked widespread hype about AI capabilities. Compared to that hyperbolic launch, GPT-4.1’s rollout has been a fairly understated affair—probably because it’s tricky to convey the subtle differences between all of the available OpenAI models.

As if 4.1’s launch wasn’t confusing enough, the release also roughly coincides with OpenAI’s July 2025 deadline for retiring the GPT-4.5 Preview from the API, a model one AI expert called a “lemon.” Developers must migrate to other options, OpenAI says, although GPT-4.5 will remain available in ChatGPT for now.

A confusing addition to OpenAI’s model lineup

In February, OpenAI CEO Sam Altman acknowledged on X his company’s confusing AI model naming practices, writing, “We realize how complicated our model and product offerings have gotten.” He promised that a forthcoming “GPT-5” model would consolidate the o-series and GPT-series models into a unified branding structure. But the addition of GPT-4.1 to ChatGPT appears to contradict that simplification goal.

So, if you use ChatGPT, which model should you use? If you’re a developer using the models through the API, the consideration is more of a trade-off between capability, speed, and cost. But in ChatGPT, your choice might be limited more by personal taste in behavioral style and what you’d like to accomplish. Some of the “more capable” models have lower usage limits as well because they cost more for OpenAI to run.

For now, OpenAI is keeping GPT-4o as the default ChatGPT model, likely due to its general versatility, balance between speed and capability, and personable style (conditioned using reinforcement learning and a specialized system prompt). The simulated reasoning models like 03 and 04-mini-high are slower to execute but can consider analytical-style problems more systematically and perform comprehensive web research that sometimes feels genuinely useful when it surfaces relevant (non-confabulated) web links. Compared to those, OpenAI is largely positioning GPT-4.1 as a speedier AI model for coding assistance.

Just remember that all of the AI models are prone to confabulations, meaning that they tend to make up authoritative-sounding information when they encounter gaps in their trained “knowledge.” So you’ll need to double-check all of the outputs with other sources of information if you’re hoping to use these AI models to assist with an important task.

OpenAI adds GPT-4.1 to ChatGPT amid complaints over confusing model lineup Read More »

meta-is-making-users-who-opted-out-of-ai-training-opt-out-again,-watchdog-says

Meta is making users who opted out of AI training opt out again, watchdog says

Noyb has requested a response from Meta by May 21, but it seems unlikely that Meta will quickly cave in this fight.

In a blog post, Meta said that AI training on EU users was critical to building AI tools for Europeans that are informed by “everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

Meta argued that its AI training efforts in the EU are far more transparent than efforts from competitors Google and OpenAI, which, Meta noted, “have already used data from European users to train their AI models,” supposedly without taking the steps Meta has to inform users.

Also echoing a common refrain in the AI industry, another Meta blog warned that efforts to further delay Meta’s AI training in the EU could lead to “major setbacks,” pushing the EU behind rivals in the AI race.

“Without a reform and simplification of the European regulatory system, Europe threatens to fall further and further behind in the global AI race and lose ground compared to the USA and China,” Meta warned.

Noyb discredits this argument and noted that it can pursue injunctions in various jurisdictions to block Meta’s plan. The group said it’s currently evaluating options to seek injunctive relief and potentially even pursue a class action worth possibly “billions in damages” to ensure that 400 million monthly active EU users’ data rights are shielded from Meta’s perceived grab.

A Meta spokesperson reiterated to Ars that the company’s plan “follows extensive and ongoing engagement with the Irish Data Protection Commission,” while reiterating Meta’s statements in blogs that its AI training approach “reflects consensus among” EU Data Protection Authorities (DPAs).

But while Meta claims that EU regulators have greenlit its AI training plans, Noyb argues that national DPAs have “largely stayed silent on the legality of AI training without consent,” and Meta seems to have “simply moved ahead anyways.”

“This fight is essentially about whether to ask people for consent or simply take their data without it,” Schrems said, adding, “Meta’s absurd claims that stealing everyone’s personal data is necessary for AI training is laughable. Other AI providers do not use social network data—and generate even better models than Meta.”

Meta is making users who opted out of AI training opt out again, watchdog says Read More »