Similarly, the report’s authors describe concerns that the CTV industry’s extensive data collection and tracking could have a political impact. The report asserts that political candidates could use such data to run “covert personalized campaigns” leveraging information on things like political orientations and “emotional states”:
With no transparency or oversight, these practices could unleash millions of personalized, manipulative and highly targeted political ads, spread disinformation, and further exacerbate the political polarization that threatens a healthy democratic culture in the US.
“Potential discriminatory impacts”
The CDD’s report claims that Black, Hispanic, and Asian-Americans in the US are being “singled out by marketers as highly lucrative targets,” due to fast adoption of new digital media services and brand loyalty. Black and Hispanic communities are key advertising targets for FAST channels, per the report. Chester told Ars:
There are major potential discriminatory impacts from CTV’s harvesting of data from communities of color.
He pointed to “growing widespread racial and ethnic data” collection for ad targeting and marketing.
“We believe this is sensitive information that should not be applied to the data profiles used for targeting on CTV and across other platforms. … Its use in political advertising on CTV will enable widespread disinformation and voter suppression campaigns targeting these communities,” Chester said.
Regulation
In a letter sent to the FTC, FCC, California attorney general, and CPPA, the CDD asked for an investigation into the US’ CTV industry, “including on antitrust, consumer protection, and privacy grounds.” The CDD emphasized the challenges that streamers—including those who pay for ad-free streaming—face in protecting their data from advertisers.
“Connected television has taken root and grown as an unregulated medium in the United States, along with the other platforms, devices, and applications that are part of the massive internet industry,” the report says.
The group asks the FTC and FCC to investigate CTV practices and consider building on current legislation, like the 1988 Video Privacy Protection Act. It also requests that antitrust regulators delve deeply into the business practices of CTV players like Amazon, Comcast, and Disney to help build “competition and diversity in the digital and connected TV marketplace.”
LinkedIn admitted Wednesday that it has been training its own AI on many users’ data without seeking consent. Now there’s no way for users to opt out of training that has already occurred, as LinkedIn limits opt-out to only future AI training.
In a blog detailing updates coming on November 20, LinkedIn general counsel Blake Lawit confirmed that LinkedIn’s user agreement and privacy policy will be changed to better explain how users’ personal data powers AI on the platform.
Under the new privacy policy, LinkedIn now informs users that “we may use your personal data… [to] develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others.”
An FAQ explained that the personal data could be collected any time a user interacts with generative AI or other AI features, as well as when a user composes a post, changes their preferences, provides feedback to LinkedIn, or uses the platform for any amount of time.
That data is then stored until the user deletes the AI-generated content. LinkedIn recommends that users turn to its data access tool if they want to delete, or request deletion of, data collected about past LinkedIn activities.
LinkedIn’s AI models powering generative AI features “may be trained by LinkedIn or another provider,” such as Microsoft, which provides some AI models through its Azure OpenAI service, the FAQ said.
A potentially major privacy risk for users, LinkedIn’s FAQ noted, is that users who “provide personal data as an input to a generative AI powered feature” could end up seeing their “personal data being provided as an output.”
LinkedIn claims that it “seeks to minimize personal data in the data sets used to train the models,” relying on “privacy enhancing technologies to redact or remove personal data from the training dataset.”
While Lawit’s blog avoids clarifying whether data already collected can be removed from AI training data sets, the FAQ affirmed that users who were automatically opted in to sharing personal data for AI training can only opt out of the invasive data collection “going forward.”
Opting out “does not affect training that has already taken place,” the FAQ said.
A LinkedIn spokesperson told Ars that it “benefits all members” to be opted in to AI training “by default.”
“People can choose to opt out, but they come to LinkedIn to be found for jobs and networking and generative AI is part of how we are helping professionals with that change,” LinkedIn’s spokesperson said.
By allowing opt-outs of future AI training, LinkedIn’s spokesperson additionally claimed that the platform is giving “people using LinkedIn even more choice and control when it comes to how we use data to train our generative AI technology.”
How to opt out of AI training on LinkedIn
Users can opt out of AI training by navigating to the “Data privacy” section in their account settings, then turning off the option allowing collection of “data for generative AI improvement” that LinkedIn otherwise automatically turns on for most users.
The only exception is for users in the European Economic Area or Switzerland, who are protected by stricter privacy laws that require platforms either to obtain consent before collecting personal data or to justify the collection as a legitimate interest. Those users will not see an option to opt out, because they were never opted in, LinkedIn repeatedly confirmed.
Additionally, users can “object to the use of their personal data for training” generative AI models not used to generate LinkedIn content—such as models used for personalization or content moderation purposes, The Verge noted—by submitting the LinkedIn Data Processing Objection Form.
Last year, LinkedIn shared AI principles, promising to take “meaningful steps to reduce the potential risks of AI.”
One risk that the updated user agreement specified is that using LinkedIn’s generative features to help populate a profile or generate suggestions when writing a post could generate content that “might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes.”
Users are advised that they are responsible for avoiding sharing misleading information or otherwise spreading AI-generated content that may violate LinkedIn’s community guidelines. And users are additionally warned to be cautious when relying on any information shared on the platform.
“Like all content and other information on our Services, regardless of whether it’s labeled as created by ‘AI,’ be sure to carefully review before relying on it,” LinkedIn’s user agreement says.
Back in 2023, LinkedIn claimed that it would always “seek to explain in clear and simple ways how our use of AI impacts people,” because users’ “understanding of AI starts with transparency.”
Legislation like the European Union’s AI Act and the GDPR, with its strong privacy protections, would lead to fewer shocks for unsuspecting users if enacted elsewhere. That would put all companies and their users on equal footing when it comes to training AI models, with fewer nasty surprises and angry customers as a result.
Nevada will soon become the first state to use AI to help speed up the decision-making process when ruling on appeals that impact people’s unemployment benefits.
The state’s Department of Employment, Training, and Rehabilitation (DETR) agreed to pay Google $1,383,838 for the AI technology, a 2024 budget document shows, and it will be launched within the “next several months,” Nevada officials told Gizmodo.
Nevada’s first-of-its-kind AI will rely on a Google cloud service called Vertex AI Studio. Connecting to Google’s servers, the state will fine-tune the AI system to only reference information from DETR’s database, which officials think will ensure its decisions are “more tailored” and the system provides “more accurate results,” Gizmodo reported.
Under the contract, DETR will essentially transfer data from transcripts of unemployment appeals hearings and rulings, after which Google’s AI system will process that data, upload it to the cloud, and then compare the information to previous cases.
In as little as five minutes, the AI will issue a ruling that would’ve taken a state employee about three hours to reach without using AI, DETR’s information technology administrator, Carl Stanfield, told The Nevada Independent. That’s highly valuable to Nevada, which has a backlog of more than 40,000 appeals stemming from a pandemic-related spike in unemployment claims while dealing with “unforeseen staffing shortages” that DETR reported in July.
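The scale of those savings is easy to check against the reported figures. The sketch below uses only numbers from the article (40,000 backlogged appeals, about three hours per human-drafted ruling, as little as five minutes per AI-drafted ruling); the totals are illustrative, and they cover drafting only, since a state employee still reviews each ruling.

```python
# Back-of-the-envelope math using the figures DETR reported.
backlog = 40_000            # pending appeals
hours_per_ruling = 3        # approximate human drafting time per appeal
ai_minutes_per_ruling = 5   # approximate AI drafting time per appeal

human_hours = backlog * hours_per_ruling          # 120,000 staff-hours
ai_hours = backlog * ai_minutes_per_ruling / 60   # ~3,333 hours

print(f"Human drafting: {human_hours:,} hours")
print(f"AI drafting: {ai_hours:,.0f} hours")
```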
“The time saving is pretty phenomenal,” Stanfield said.
As a safeguard, the AI’s determination is then reviewed by a state employee, who will hopefully catch any mistakes, biases, or, perhaps worse, hallucinations, in which the AI makes up facts that could affect the outcome of an appellant’s case.
Google’s spokesperson Ashley Simms told Gizmodo that the tech giant will work with the state to “identify and address any potential bias” and to “help them comply with federal and state requirements.” According to the state’s AI guidelines, the agency must prioritize ethical use of the AI system, “avoiding biases and ensuring fairness and transparency in decision-making processes.”
If the reviewer accepts the AI ruling, they’ll sign off on it and issue the decision. Otherwise, the reviewer will edit the decision and submit feedback so that DETR can investigate what went wrong.
Gizmodo noted that this novel use of AI “represents a significant experiment by state officials and Google in allowing generative AI to influence a high-stakes government decision—one that could put thousands of dollars in unemployed Nevadans’ pockets or take it away.”
Google declined to comment on whether more states are considering using AI to weigh jobless claims.
On Friday, Roblox announced plans to introduce an open source generative AI tool that will allow game creators to build 3D environments and objects using text prompts, reports MIT Tech Review. The feature, which is still under development, may streamline the process of creating game worlds on the popular online platform, potentially opening up more aspects of game creation to those without extensive 3D design skills.
Roblox has not announced a specific launch date for the new AI tool, which is based on what it calls a “3D foundational model.” The company shared a demo video of the tool in which a user types, “create a race track,” then “make the scenery a desert,” and the AI model generates a corresponding race track in a desert environment.
The system will also reportedly let users make modifications, such as changing the time of day or swapping out entire landscapes, and Roblox says the multimodal AI model will ultimately accept video and 3D prompts, not just text.
A video showing Roblox’s generative AI model in action.
The 3D environment generator is part of Roblox’s broader AI integration strategy. The company reportedly uses around 250 AI models across its platform, including one that monitors voice chat in real time to enforce content moderation, which is not always popular with players.
Next-token prediction in 3D
Roblox’s 3D foundational model approach involves a custom next-token prediction model—a foundation not unlike the large language models (LLMs) that power ChatGPT. Tokens are fragments of text data that LLMs use to process information. Roblox’s system “tokenizes” 3D blocks by treating each block as a numerical unit, which allows the AI model to predict the most likely next structural 3D element in a sequence. In aggregate, the technique can build entire objects or scenery.
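Roblox has not published implementation details, but the tokenization idea can be sketched in miniature: assign each block type an integer ID, flatten the 3D grid into one linear sequence in a fixed scan order, and train a model to predict the next ID. The block vocabulary and tiny scene below are invented for illustration, and a trivial bigram count stands in for the real next-token model.

```python
from collections import Counter

# Invented block vocabulary; Roblox's actual scheme is unpublished.
block_vocab = {"air": 0, "grass": 1, "road": 2, "sand": 3}
id_to_block = {v: k for k, v in block_vocab.items()}

# A tiny 2x2x2 scene, flattened in a fixed (x, y, z) scan order so the
# model sees one linear sequence, much as an LLM sees a stream of text tokens.
scene = [
    [["grass", "air"], ["road", "air"]],
    [["grass", "air"], ["road", "air"]],
]
tokens = [
    block_vocab[scene[x][y][z]]
    for x in range(2) for y in range(2) for z in range(2)
]
print(tokens)  # [1, 0, 2, 0, 1, 0, 2, 0]

# A real model would learn P(next token | previous tokens); a bigram
# frequency table is the simplest stand-in for that prediction step.
bigrams = Counter(zip(tokens, tokens[1:]))
next_after_grass = max(
    (b for b in bigrams if b[0] == block_vocab["grass"]),
    key=bigrams.get,
)[1]
print(id_to_block[next_after_grass])  # air
```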
Anupam Singh, vice president of AI and growth engineering at Roblox, told MIT Tech Review about the challenges in developing the technology. “Finding high-quality 3D information is difficult,” Singh said. “Even if you get all the data sets that you would think of, being able to predict the next cube requires it to have literally three dimensions, X, Y, and Z.”
According to Singh, lack of 3D training data can create glitches in the results, like a dog with too many legs. To get around this, Roblox is using a second AI model as a kind of visual moderator to catch the mistakes and reject them until the proper 3D element appears. Through iteration and trial and error, the first AI model can create the proper 3D structure.
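This generate-and-verify pattern is not unique to Roblox, and the actual models and checks are not public, so the following is only a generic sketch: a stand-in "generator" occasionally produces a malformed structure (the dog with too many legs), and a stand-in "moderator" rejects candidates until one passes.

```python
import random

random.seed(0)  # deterministic for the example

def generate_dog():
    # Stand-in generator: sometimes emits the wrong number of legs,
    # mimicking the glitches Singh describes.
    return {"kind": "dog", "legs": random.choice([3, 4, 4, 5])}

def moderator_accepts(structure):
    # Stand-in moderator: a dog should have exactly four legs.
    return structure["legs"] == 4

def generate_until_valid(max_tries=20):
    # Iterate until the moderator accepts a candidate, as in the
    # trial-and-error loop described above.
    for _ in range(max_tries):
        candidate = generate_dog()
        if moderator_accepts(candidate):
            return candidate
    raise RuntimeError("no valid structure within budget")

print(generate_until_valid())
```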
Notably, Roblox plans to open-source its 3D foundation model, allowing developers and even competitors to use and modify it. But it’s not just about giving back—open source can be a two-way street. Choosing an open source approach could also allow the company to utilize knowledge from AI developers if they contribute to the project and improve it over time.
The ongoing quest to capture gaming revenue
News of the new 3D foundational model arrived at the 10th annual Roblox Developers Conference in San Jose, California, where the company also announced an ambitious goal to capture 10 percent of global gaming content revenue through the Roblox ecosystem, and the introduction of “Party,” a new feature designed to facilitate easier group play among friends.
In March 2023, we detailed Roblox’s early foray into AI-powered game development tools, as revealed at the Game Developers Conference. The tools included a Code Assist beta for generating simple Lua functions from text descriptions, and a Material Generator for creating 2D surfaces with associated texture maps.
At the time, Roblox Studio head Stef Corazza described these as initial steps toward “democratizing” game creation with plans for AI systems that are now coming to fruition. The 2023 tools focused on discrete tasks like code snippets and 2D textures, laying the groundwork for the more comprehensive 3D foundational model announced at this year’s Roblox Developers Conference.
The upcoming AI tool could potentially streamline content creation on the platform, possibly accelerating Roblox’s path toward its revenue goal. “We see a powerful future where Roblox experiences will have extensive generative AI capabilities to power real-time creation integrated with gameplay,” Roblox said in a statement. “We’ll provide these capabilities in a resource-efficient way, so we can make them available to everyone on the platform.”
Generative AI Alexa asked to make a taco poem.
The previously announced generative AI version of Amazon’s Alexa voice assistant “will be powered primarily by Anthropic’s Claude artificial intelligence models,” Reuters reported today. This comes after challenges with using proprietary models, according to the publication, which cited five anonymous people “with direct knowledge of the Alexa strategy.”
Amazon demoed a generative AI version of Alexa in September 2023 and touted it as being more advanced, conversational, and capable, including the ability to do multiple smart home tasks with simpler commands. Gen AI Alexa is expected to come with a subscription fee, as Alexa has reportedly lost Amazon tens of billions of dollars throughout the years. Earlier reports said the updated voice assistant would arrive in June, but Amazon still hasn’t confirmed an official release date.
Now, Reuters is reporting that Amazon will no longer use its own large language models as the new Alexa’s primary driver. Early versions of gen AI Alexa based on Amazon’s AI models “struggled for words, sometimes taking six or seven seconds to acknowledge a prompt and reply,” Reuters said, citing one of its sources. Without specifying versions or features used, Reuters’ sources said Claude outperformed proprietary software.
In a statement to Reuters, Amazon didn’t deny using third-party models but claimed that its own tech is still part of Alexa:
Amazon uses many different technologies to power Alexa.
When it comes to machine learning models, we start with those built by Amazon, but we have used, and will continue to use, a variety of different models—including (Amazon AI model) Titan and future Amazon models, as well as those from partners—to build the best experience for customers.
Amazon has invested $4 billion in Anthropic, a deal UK regulators are currently investigating. It’s uncertain whether that investment means Claude can be applied to Alexa for free. Anthropic declined to comment on Reuters’ report.
The new Alexa may be delayed
On Monday, The Washington Post reported that Amazon wants to launch the new Alexa in October, citing internal documents. However, Reuters’ sources claimed that this date could be pushed back if the voice assistant fails certain unspecified internal benchmarks.
The Post said gen AI Alexa could cost up to $10 per month, according to the documents. That coincides with a June Reuters report saying that the service would cost $5 to $10 per month. The Post said Amazon would finalize pricing and naming in August.
But getting people to open their wallets for a voice assistant long associated with being free will be difficult (the free version of Alexa is expected to remain available after the subscription version releases). Some Amazon employees are questioning whether people will really pay for Alexa, Reuters noted. Generative AI is being looked at as a last shot for the assistant, which faces stiff competition from other AI offerings, including free ones like ChatGPT.
In June, Bank of America analysts estimated that Amazon could make $600 million to $1.2 billion in annual sales with gen AI Alexa, depending on final monthly pricing. This is under the assumption that 10 percent of an estimated 100 million active Alexa users (Amazon says it has sold 500 million Alexa-powered gadgets) will upgrade. But analysts noted that free alternatives would challenge the adoption rate.
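The analysts’ range falls straight out of the reported assumptions: 10 percent of 100 million active users paying $5 to $10 per month. The sketch below just reproduces that back-of-the-envelope math; all inputs come from the figures in the article.

```python
# Reproducing the Bank of America estimate from the reported assumptions.
active_users = 100_000_000
adoption_rate = 0.10
subscribers = int(active_users * adoption_rate)  # 10 million upgraders

for monthly_price in (5, 10):
    annual_sales = subscribers * monthly_price * 12
    print(f"${monthly_price}/month -> ${annual_sales / 1e9:.1f}B per year")
```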
The Post’s Monday report said the new Alexa will try winning over subscribers with features like AI-generated news summaries. This Smart Briefing feature will reportedly share summaries based on user preferences on topics including politics, despite OG Alexa’s previous problems with reporting accurate election results. The publication also said that gen AI Alexa would include “a chatbot aimed at children” and “conversational shopping tools.”
Artists defending a class-action lawsuit are claiming a major win this week in their fight to stop the most sophisticated AI image generators from copying billions of artworks to train AI models and replicate their styles without compensating artists.
In an order on Monday, US district judge William Orrick denied key parts of motions to dismiss from Stability AI, Midjourney, Runway AI, and DeviantArt. The court will now allow artists to proceed with discovery on claims that AI image generators relying on Stable Diffusion violate both the Copyright Act and the Lanham Act, which protects artists from commercial misuse of their names and unique styles.
“We won BIG,” an artist plaintiff, Karla Ortiz, wrote on X (formerly Twitter), celebrating the order. “Not only do we proceed on our copyright claims,” but “this order also means companies who utilize” Stable Diffusion models and LAION-like datasets that scrape artists’ works for AI training without permission “could now be liable for copyright infringement violations, amongst other violations.”
Lawyers for the artists, Joseph Saveri and Matthew Butterick, told Ars that artists suing “consider the Court’s order a significant step forward for the case,” as “the Court allowed Plaintiffs’ core copyright-infringement claims against all four defendants to proceed.”
Stability AI was the only company to respond to Ars’ request for comment, but it declined to comment.
Artists prepare to defend their livelihoods from AI
To get to this stage of the suit, artists had to amend their complaint to better explain exactly how AI image generators work to allegedly train on artists’ images and copy artists’ styles.
For example, they were told that if they “contend Stable Diffusion contains ‘compressed copies’ of the Training Images, they need to define ‘compressed copies’ and explain plausible facts in support. And if plaintiffs’ compressed copies theory is based on a contention that Stable Diffusion contains mathematical or statistical methods that can be carried out through algorithms or instructions in order to reconstruct the Training Images in whole or in part to create the new Output Images, they need to clarify that and provide plausible facts in support,” Orrick wrote.
To keep their fight alive, the artists pored through academic articles to support their arguments that “Stable Diffusion is built to a significant extent on copyrighted works and that the way the product operates necessarily invokes copies or protected elements of those works.” Orrick agreed that their amended complaint made plausible inferences that “at this juncture” is enough to support claims “that Stable Diffusion by operation by end users creates copyright infringement and was created to facilitate that infringement by design.”
“Specifically, the Court found Plaintiffs’ theory that image-diffusion models like Stable Diffusion contain compressed copies of their datasets to be plausible,” Saveri and Butterick’s statement to Ars said. “The Court also found it plausible that training, distributing, and copying such models constitute acts of copyright infringement.”
Not all of the artists’ claims survived, with Orrick granting motions to dismiss claims alleging that AI companies removed content management information from artworks in violation of the Digital Millennium Copyright Act (DMCA). Because artists failed to show evidence of defendants altering or stripping this information, they must permanently drop the DMCA claims.
Part of Orrick’s decision on the DMCA claims, however, indicates that the legal basis for dismissal is “unsettled,” with Orrick simply agreeing with Stability AI’s argument that “because the output images are admittedly not identical to the Training Images, there can be no liability for any removal of CMI that occurred during the training process.”
Ortiz wrote on X that she respectfully disagreed with that part of the decision but expressed enthusiasm that the court allowed artists to proceed with false endorsement claims, alleging that Midjourney violated the Lanham Act.
Five artists successfully argued that because “their names appeared on the list of 4,700 artists posted by Midjourney’s CEO on Discord” and that list was used to promote “the various styles of artistic works its AI product could produce,” this plausibly created confusion over whether those artists had endorsed Midjourney.
“Whether or not a reasonably prudent consumer would be confused or misled by the Names List and showcase to conclude that the included artists were endorsing the Midjourney product can be tested at summary judgment,” Orrick wrote. “Discovery may show that it is or that it is not.”
While Orrick agreed with Midjourney that “plaintiffs have no protection over ‘simple, cartoony drawings’ or ‘gritty fantasy paintings,'” artists were able to advance a “trade dress” claim under the Lanham Act, too. This is because Midjourney allegedly “allows users to create works capturing the ‘trade dress of each of the Midjourney Named Plaintiffs [that] is inherently distinctive in look and feel as used in connection with their artwork and art products.'”
As discovery proceeds in the case, artists will also have an opportunity to amend dismissed claims of unjust enrichment. According to Orrick, their next amended complaint will be their last chance to prove that AI companies have “deprived plaintiffs ‘the benefit of the value of their works.'”
Saveri and Butterick confirmed that “though the Court dismissed certain supplementary claims, Plaintiffs’ central claims will now proceed to discovery and trial.” On X, Ortiz suggested that the artists’ case is “now potentially one of THE biggest copyright infringement and trade dress cases ever!”
“Looking forward to the next stage of our fight!” Ortiz wrote.
Elon Musk and Sam Altman share the stage in 2015, the same year that Musk alleged that Altman’s “deception” began.
After withdrawing his lawsuit in June for unknown reasons, Elon Musk has revived a complaint accusing OpenAI and its CEO Sam Altman of fraudulently inducing Musk to contribute $44 million in seed funding by promising that OpenAI would always open-source its technology and prioritize serving the public good over profits as a permanent nonprofit.
Instead, Musk alleged that Altman and his co-conspirators—”preying on Musk’s humanitarian concern about the existential dangers posed by artificial intelligence”—always intended to “betray” these promises in pursuit of personal gains.
As OpenAI’s technology advanced toward artificial general intelligence (AGI) and strove to surpass human capabilities, “Altman set the bait and hooked Musk with sham altruism then flipped the script as the non-profit’s technology approached AGI and profits neared, mobilizing Defendants to turn OpenAI, Inc. into their personal piggy bank and OpenAI into a moneymaking bonanza, worth billions,” Musk’s complaint said.
Where Musk saw OpenAI as his chance to fund a meaningful rival to stop Google from controlling the most powerful AI, Altman and others “wished to launch a competitor to Google” and allegedly deceived Musk to do it. According to Musk:
The idea Altman sold Musk was that a non-profit, funded and backed by Musk, would attract world-class scientists, conduct leading AI research and development, and, as a meaningful counterweight to Google’s DeepMind in the race for Artificial General Intelligence (“AGI”), decentralize its technology by making it open source. Altman assured Musk that the non-profit structure guaranteed neutrality and a focus on safety and openness for the benefit of humanity, not shareholder value. But as it turns out, this was all hot-air philanthropy—the hook for Altman’s long con.
Without Musk’s involvement and funding during OpenAI’s “first five critical years,” Musk’s complaint said, “it is fair to say” that “there would have been no OpenAI.” And when Altman and others repeatedly approached Musk with plans to shift OpenAI to a for-profit model, Musk held strong to his morals, conditioning his ongoing contributions on OpenAI remaining a nonprofit and its tech largely remaining open source.
“Either go do something on your own or continue with OpenAI as a nonprofit,” Musk told Altman in 2018 when Altman tried to “recast the nonprofit as a moneymaking endeavor to bring in shareholders, sell equity, and raise capital.”
“I will no longer fund OpenAI until you have made a firm commitment to stay, or I’m just being a fool who is essentially providing free funding to a startup,” Musk said at the time. “Discussions are over.”
But discussions weren’t over. And now Musk seemingly does feel like a fool after OpenAI exclusively licensed GPT-4 and all “pre-AGI” technology to Microsoft in 2023, while putting up paywalls and “failing to publicly disclose the non-profit’s research and development, including details on GPT-4, GPT-4T, and GPT-4o’s architecture, hardware, training method, and training computation.” This excluded the public “from open usage of GPT-4 and related technology to advance Defendants and Microsoft’s own commercial interests,” Musk alleged.
Now Musk has revived his suit against OpenAI, asking the court to award maximum damages for OpenAI’s alleged fraud, contract breaches, false advertising, acts viewed as unfair to competition, and other violations.
He has also asked the court to determine a very technical question: whether OpenAI’s most recent models should be considered AGI, which would void Microsoft’s license. That, Musk argues, is the only way to ensure that a private corporation isn’t controlling OpenAI’s AGI models, a scenario he repeatedly conditioned his financial contributions on preventing.
“Musk contributed considerable money and resources to launch and sustain OpenAI, Inc., which was done on the condition that the endeavor would be and remain a non-profit devoted to openly sharing its technology with the public and avoid concentrating its power in the hands of the few,” Musk’s complaint said. “Defendants knowingly and repeatedly accepted Musk’s contributions in order to develop AGI, with no intention of honoring those conditions once AGI was in reach. Case in point: GPT-4, GPT-4T, and GPT-4o are all closed source and shrouded in secrecy, while Defendants actively work to transform the non-profit into a thoroughly commercial business.”
Musk wants Microsoft’s GPT-4 license voided
Musk also asked the court to null and void OpenAI’s exclusive license to Microsoft, or else determine “whether GPT-4, GPT-4T, GPT-4o, and other OpenAI next generation large language models constitute AGI and are thus excluded from Microsoft’s license.”
It’s clear that Musk considers these models to be AGI, and he’s alleged that Altman’s current control of OpenAI’s Board—after firing dissidents in 2023 whom Musk claimed tried to get Altman ousted for prioritizing profits over AI safety—gives Altman the power to obscure when OpenAI’s models constitute AGI.
One day, using pixellated fonts and images to represent that something is a video game will not be a trope. Today is not that day.
SAG-AFTRA has called for a strike of all its members working in video games, with the union demanding that its next contract not allow “companies to abuse AI to the detriment of our members.”
The strike mirrors similar actions taken by SAG-AFTRA and the Writers Guild of America (WGA) last year, which, while also broader in scope than just AI, were similarly focused on concerns about AI-generated work product and the use of member work to train AI.
“Frankly, it’s stunning that these video game studios haven’t learned anything from the lessons of last year—that our members can and will stand up and demand fair and equitable treatment with respect to A.I., and the public supports us in that,” Duncan Crabtree-Ireland, chief negotiator for SAG-AFTRA, said in a statement.
During the strike, the more than 160,000 members of the union will not provide talent to games produced by Disney, Electronic Arts, Activision Blizzard, Take-Two, WB Games, and others. Not every game is affected. Some productions may have interim agreements with union workers, and others, like continually updated games that launched before the current negotiations began in September 2023, may be exempt.
The publishers and other companies issued statements to the media through a communications firm representing them. “We are disappointed the union has chosen to walk away when we are so close to a deal, and we remain prepared to resume negotiations,” a statement offered to The New York Times and other outlets read. The statement said the two sides had found common ground on 24 out of 25 proposals and that the game companies’ offer was responsive and “extends meaningful AI protections.”
The Washington Post says the biggest remaining issue involves on-camera performers, including motion capture performers. Crabtree-Ireland told the Post that while AI training protections were extended to voice performers, motion and stunt work was left out. “[A]ll of those performers deserve to have their right to have informed consent and fair compensation for the use of their image, their likeness or voice, their performance. It’s that simple,” Crabtree-Ireland said in June.
It will be difficult to know the impact of a game performer strike for some time, if ever, owing to the non-linear and secretive nature of game production. A game’s conception, development, casting, acting, announcement, and further development (and development pivots) happen on whatever timeline they happen upon.
SAG-AFTRA has a tool for searching game titles to see if they are struck for union work, but it is finicky, recognizing only specific production titles, code names, and ID numbers. Searches for Grand Theft Auto VI and 6 returned a “Game Over!” (i.e., struck), but Kotaku confirmed the game is technically unaffected, even though its parent publisher, Take-Two, is generally struck.
Video game performers in SAG-AFTRA last went on strike in 2016, that time regarding long-term royalties. The strike lasted 340 days, still the longest in that union’s history, and was settled with pay raises for actors while residuals and terms on vocal stress remained unaddressed. The impact of that strike was generally either hidden or largely blunted, as affected titles hired non-union replacements. Voice work, as noted by the original English voice for Bayonetta, remains a largely unprotected field.
The Amazon business unit that focuses on Alexa-powered gadgets lost $25 billion between 2017 and 2021, The Wall Street Journal (WSJ) reported this week.
Amazon says it has sold more than 500 million Alexa-enabled devices, including Echo speakers, Kindle readers, Fire TV sets and streaming devices, and Blink and Ring smart home security cameras. But since debuting, Alexa, like other voice assistants, has struggled to make money. In late 2022, Business Insider reported that Alexa was set to lose $10 billion that year.
WSJ said it got the $25 billion figure from “internal documents” and that it wasn’t able to determine the Devices business’s losses before or after that period.
“No profit timeline”
WSJ’s report claims to offer insight into how Devices was able to bleed so much money for so long.
For one, it seems like the business unit was allowed some wiggle room in terms of financial success in the interest of innovation and the potential for long-term gains. Someone the WSJ described as being “a former longtime Devices executive” said that when Alexa first started, Amazon’s gadgets team “didn’t have a profit timeline” when launching products.
Amazon is known to have sold Echo speakers for cheap or at a loss in the hopes of making money off Alexa later. In 2019, then-Amazon Devices SVP Dave Limp, who exited the company last year, told WSJ: “We don’t have to make money when we sell you the device.” WSJ noted that this strategy has applied to other unspecified Amazon devices, too.
People tend to use Alexa for free services, though, like checking the weather or the time, not making big purchases.
“We worried we’ve hired 10,000 people and we’ve built a smart timer,” said one person WSJ described as a “former senior employee.”
An Amazon spokesperson told WSJ that more than half of people with an Echo have shopped with it but wouldn’t provide more specifics. Per “former employees on the Alexa shopping team” that WSJ spoke with, however, the amount of shopping revenue tied to Alexa is insignificant.
In an emailed statement, an Amazon spokesperson told Ars Technica, in part:
Within Devices & Services, we’re focused on the value we create when customers use our services, not just when they buy our devices. Our Devices & Services organization has established numerous profitable businesses for Amazon and is well-positioned to continue doing so going forward.
Alexa’s revenue is further hindered by challenges in selling security and other services and by limits on ad sales, since ads annoy Alexa users, WSJ reported.
Massive losses also didn’t seem to slow down product development. WSJ claimed the Devices business lost over $5 billion in 2018 yet still spent money developing the Astro consumer robot. That robot has yet to see general availability, while a business version is getting bricked just 10 months after release. Amazon Halo health trackers, which have also been bricked, and Luna game-streaming devices were also developed in 2019, when the hardware unit lost over $6 billion, per WSJ.
Amazon has laid off at least 19,000 workers since 2022, with the Devices division reportedly hit especially hard.
A Spanish youth court has sentenced 15 minors to one year of probation after spreading AI-generated nude images of female classmates in two WhatsApp groups.
The minors were charged with 20 counts of creating child sex abuse images and 20 counts of offenses against their victims’ moral integrity. In addition to probation, the teens will also be required to attend classes on gender and equality, as well as on the “responsible use of information and communication technologies,” a press release from the Juvenile Court of Badajoz said.
Many of the victims were too ashamed to speak up when the inappropriate fake images began spreading last year. Prior to the sentencing, a mother of one of the victims told The Guardian that girls like her daughter “were completely terrified and had tremendous anxiety attacks because they were suffering this in silence.”
The court confirmed that the teens used artificial intelligence to create images where female classmates “appear naked” by swiping photos from their social media profiles and superimposing their faces on “other naked female bodies.”
Teens using AI to sexualize and harass classmates has become an alarming global trend. Police have probed disturbing cases in both high schools and middle schools in the US, and earlier this year, the European Union proposed expanding its definition of child sex abuse to more effectively “prosecute the production and dissemination of deepfakes and AI-generated material.” Last year, US President Joe Biden issued an executive order urging lawmakers to pass more protections.
In addition to mental health impacts, victims have reported losing trust in classmates who targeted them and wanting to switch schools to avoid further contact with harassers. Others stopped posting photos online and remained fearful that the harmful AI images will resurface.
Minors targeting classmates may not realize exactly how far images can potentially spread when generating fake child sex abuse materials (CSAM); they could even end up on the dark web. An investigation by the United Kingdom-based Internet Watch Foundation (IWF) last year reported that “20,254 AI-generated images were found to have been posted to one dark web CSAM forum in a one-month period,” with more than half determined most likely to be criminal.
IWF warned that it has identified a growing market for AI-generated CSAM and concluded that “most AI CSAM found is now realistic enough to be treated as ‘real’ CSAM.” One “shocked” mother of a female classmate victimized in Spain agreed. She told The Guardian that “if I didn’t know my daughter’s body, I would have thought that image was real.”
More drastic steps to stop deepfakes
While lawmakers struggle to apply existing protections against CSAM to AI-generated images or to update laws to explicitly prosecute the offense, other more drastic solutions to prevent the harmful spread of deepfakes have been proposed.
In an op-ed for The Guardian today, journalist Lucia Osborne-Crowley advocated for laws restricting sites used to both generate and surface deepfake pornography, including regulating this harmful content when it appears on social media sites and search engines. And IWF suggested that, like jurisdictions that restrict sharing bomb-making information, lawmakers could also restrict guides instructing bad actors on how to use AI to generate CSAM.
The Malvaluna Association, which represented families of victims in Spain and broadly advocates for better sex education, told El Diario that beyond more regulations, more education is needed to stop teens motivated to use AI to attack classmates. Because the teens were ordered to attend classes, the association agreed to the sentencing measures.
“Beyond this particular trial, these facts should make us reflect on the need to educate people about equality between men and women,” the Malvaluna Association said. The group urged that today’s kids should not be learning about sex through pornography that “generates more sexism and violence.”
Teens sentenced in Spain were between the ages of 13 and 15. According to the Guardian, Spanish law prevented sentencing of minors under 14, but the youth court “can force them to take part in rehabilitation courses.”
Tech companies could also make it easier to report and remove harmful deepfakes. Ars could not immediately reach Meta for comment on efforts to combat the proliferation of AI-generated CSAM on WhatsApp, the private messaging app that was used to share fake images in Spain.
A WhatsApp FAQ says that “WhatsApp has zero tolerance for child sexual exploitation and abuse, and we ban users when we become aware they are sharing content that exploits or endangers children,” but it does not mention AI.
For many artists, it’s a precarious time to post art online. AI image generators keep getting better at cheaply replicating a wider range of unique styles, and basically every popular platform is rushing to update user terms to seize permissions to scrape as much data as possible for AI training.
Defenses against AI training exist—like Glaze, a tool that adds a small amount of imperceptible-to-humans noise to images to stop image generators from copying artists’ styles. But they don’t provide a permanent solution at a time when tech companies appear determined to chase profits by building ever-more-sophisticated AI models that increasingly threaten to dilute artists’ brands and replace them in the market.
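To make the cloaking idea concrete, here is a toy sketch of the bounded-noise principle: each pixel is nudged by at most a tiny epsilon on a 0–255 scale, small enough to be imperceptible to humans. The `cloak` function and its epsilon budget are illustrative assumptions, not Glaze's code; Glaze computes its perturbation adversarially against AI feature extractors rather than drawing it at random.

```python
import random

# Toy illustration (not Glaze's actual algorithm) of bounded, near-
# imperceptible perturbation: shift each pixel by at most +/- epsilon
# on a 0-255 scale while keeping values in the valid range.

def cloak(pixels, epsilon=2.0, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    out = []
    for p in pixels:
        noisy = p + rng.uniform(-epsilon, epsilon)
        out.append(min(255.0, max(0.0, noisy)))  # clip to valid range
    return out

original = [128.0] * 16  # a flat gray "image" as a pixel list
cloaked = cloak(original)
max_change = max(abs(a - b) for a, b in zip(cloaked, original))
print(max_change <= 2.0)  # True: every change stays under the budget
```

The real tool optimizes the perturbation so that a model's feature extractor perceives a different style, which is far harder to achieve than random noise; the point above is only that the pixel-level changes can be kept below human perception.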
In one high-profile example just last month, the estate of Ansel Adams condemned Adobe for selling AI images stealing the famous photographer’s style, Smithsonian reported. Adobe quickly responded and removed the AI copycats. But it’s not just famous artists who risk being ripped off, and lesser-known artists may struggle to prove AI models are referencing their works. In this largely lawless world, every image uploaded risks contributing to an artist’s downfall, potentially watering down demand for their own work each time they promote new pieces online.
Unsurprisingly, artists have increasingly sought protections to diminish or dodge these AI risks. As tech companies update their products’ terms—like when Meta suddenly announced that it was training AI on a billion Facebook and Instagram user photos last December—artists frantically survey the landscape for new defenses. That’s why The Glaze Project, one of the few sources of AI protections available today, recently reported a dramatic surge in requests for its free tools.
Designed to help prevent style mimicry and even poison AI models to discourage data scraping without an artist’s consent or compensation, The Glaze Project’s tools are now in higher demand than ever. University of Chicago professor Ben Zhao, who created the tools, told Ars that the backlog for approving a “skyrocketing” number of requests for access is “bad.” And as he recently posted on X (formerly Twitter), an “explosion in demand” in June is only likely to be sustained as AI threats continue to evolve. For the foreseeable future, that means artists searching for protections against AI will have to wait.
Even if Zhao’s team did nothing but approve requests for WebGlaze, its invite-only web-based version of Glaze, “we probably still won’t keep up,” Zhao said. He’s warned artists on X to expect delays.
Compounding artists’ struggles, at the same time as demand for Glaze is spiking, the tool has come under attack by security researchers who claimed it was not only possible but easy to bypass Glaze’s protections. For security researchers and some artists, this attack calls into question whether Glaze can truly protect artists in these embattled times. But for thousands of artists joining the Glaze queue, the long-term future looks so bleak that any promise of protections against mimicry seems worth the wait.
Attack cracking Glaze sparks debate
Millions have downloaded Glaze already, and many artists are waiting weeks or even months for access to WebGlaze, mostly submitting requests for invites on social media. The Glaze Project vets every request to verify that each user is human and ensure bad actors don’t abuse the tools, so the process can take a while.
The team is currently struggling to approve hundreds of requests submitted daily through direct messages on Instagram and Twitter in the order they are received, and artists requesting access must be patient through prolonged delays. Because these platforms’ inboxes aren’t designed to sort messages easily, any artist who follows up on a request gets bumped to the back of the line—as their message bounces to the top of the inbox and Zhao’s team, largely volunteers, continues approving requests from the bottom up.
“This is obviously a problem,” Zhao wrote on X while discouraging artists from sending any follow-ups unless they’ve already gotten an invite. “We might have to change the way we do invites and rethink the future of WebGlaze to keep it sustainable enough to support a large and growing user base.”
Glaze interest is likely also spiking due to word of mouth. Reid Southen, a freelance concept artist for major movies, is advocating for all artists to use Glaze. Southen told Ars that WebGlaze is especially “nice” because it’s “available for free for people who don’t have the GPU power to run the program on their home machine.”
Human Rights Watch (HRW) continues to reveal how photos of real children casually posted online years ago are being used to train AI models powering image generators—even when platforms prohibit scraping and families use strict privacy settings.
Last month, HRW researcher Hye Jung Han found 170 photos of Brazilian kids that were linked in LAION-5B, a popular AI dataset built from Common Crawl snapshots of the public web. Now, she has released a second report, flagging 190 photos of children from all of Australia’s states and territories, including indigenous children who may be particularly vulnerable to harms.
These photos are linked in the dataset “without the knowledge or consent of the children or their families.” They span the entirety of childhood, making it possible for AI image generators to generate realistic deepfakes of real Australian children, Han’s report said. Perhaps even more concerning, the URLs in the dataset sometimes reveal identifying information about children, including their names and locations where photos were shot, making it easy to track down children whose images might not otherwise be discoverable online.
That puts children in danger of privacy and safety risks, Han said, and some parents thinking they’ve protected their kids’ privacy online may not realize that these risks exist.
From a single link to one photo that showed “two boys, ages 3 and 4, grinning from ear to ear as they hold paintbrushes in front of a colorful mural,” Han could trace “both children’s full names and ages, and the name of the preschool they attend in Perth, in Western Australia.” And perhaps most disturbingly, “information about these children does not appear to exist anywhere else on the Internet”—suggesting that families were particularly cautious in shielding these boys’ identities online.
Stricter privacy settings were used in another image that Han found linked in the dataset. The photo showed “a close-up of two boys making funny faces, captured from a video posted on YouTube of teenagers celebrating” during the week after their final exams, Han reported. Whoever posted that YouTube video adjusted privacy settings so that it would be “unlisted” and would not appear in searches.
Only someone with a link to the video was supposed to have access, but that didn’t stop Common Crawl from archiving the image, nor did YouTube policies prohibiting AI scraping or harvesting of identifying information.
Reached for comment, YouTube’s spokesperson, Jack Malon, told Ars that YouTube has “been clear that the unauthorized scraping of YouTube content is a violation of our Terms of Service, and we continue to take action against this type of abuse.” But Han worries that even if YouTube did join efforts to remove images of children from the dataset, the damage has been done, since AI tools have already trained on them. That’s why—even more than parents need tech companies to up their game blocking AI training—kids need regulators to intervene and stop training before it happens, Han’s report said.
Han’s report comes a month before Australia is expected to release a reformed draft of the country’s Privacy Act. Those reforms include a draft of Australia’s first child data protection law, known as the Children’s Online Privacy Code, but Han told Ars that even people involved in long-running discussions about reforms aren’t “actually sure how much the government is going to announce in August.”
“Children in Australia are waiting with bated breath to see if the government will adopt protections for them,” Han said, emphasizing in her report that “children should not have to live in fear that their photos might be stolen and weaponized against them.”
AI uniquely harms Australian kids
To hunt down the photos of Australian kids, Han “reviewed fewer than 0.0001 percent of the 5.85 billion images and captions contained in the data set.” Because her sample was so small, Han expects that her findings represent a significant undercount of how many children could be impacted by the AI scraping.
“It’s astonishing that out of a random sample size of about 5,000 photos, I immediately fell into 190 photos of Australian children,” Han told Ars. “You would expect that there would be more photos of cats than there are personal photos of children,” since LAION-5B is a “reflection of the entire Internet.”
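The arithmetic behind those figures is easy to verify. The numbers below come from the report and Han's quote; the variable names are ours, and the calculation simply confirms that a roughly 5,000-image sample is indeed well under 0.0001 percent of LAION-5B, which is why the 190 matches likely represent an undercount.

```python
# Figures from Han's report and her comment to Ars
total_images = 5_850_000_000  # images/captions in LAION-5B
sample_size = 5_000           # Han's approximate random sample
found = 190                   # photos of Australian children identified

fraction_reviewed = sample_size / total_images * 100  # in percent
print(f"{fraction_reviewed:.6f}%")  # about 0.000085%, under 0.0001%
```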
LAION is working with HRW to remove links to all the images flagged, but cleaning up the dataset does not seem to be a fast process. Han told Ars that based on her most recent exchange with the German nonprofit, LAION had not yet removed links to photos of Brazilian kids that she reported a month ago.
LAION declined Ars’ request for comment.
In June, LAION’s spokesperson, Nathan Tyler, told Ars that, “as a nonprofit, volunteer organization,” LAION is committed to doing its part to help with the “larger and very concerning issue” of misuse of children’s data online. But removing links from the LAION-5B dataset does not remove the images online, Tyler noted, where they can still be referenced and used in other AI datasets, particularly those relying on Common Crawl. And Han pointed out that removing the links from the dataset doesn’t change AI models that have already trained on them.
“Current AI models cannot forget data they were trained on, even if the data was later removed from the training data set,” Han’s report said.
Kids whose images are used to train AI models are exposed to a variety of harms, Han reported, including a risk that image generators could more convincingly create harmful or explicit deepfakes. In Australia last month, “about 50 girls from Melbourne reported that photos from their social media profiles were taken and manipulated using AI to create sexually explicit deepfakes of them, which were then circulated online,” Han reported.
For First Nations children—”including those identified in captions as being from the Anangu, Arrernte, Pitjantjatjara, Pintupi, Tiwi, and Warlpiri peoples”—the inclusion of links to photos threatens unique harms. Because culturally, First Nations peoples “restrict the reproduction of photos of deceased people during periods of mourning,” Han said the AI training could perpetuate harms by making it harder to control when images are reproduced.
Once an AI model trains on the images, there are other obvious privacy risks, including a concern that AI models are “notorious for leaking private information,” Han said. Guardrails added to image generators do not always prevent these leaks, with some tools “repeatedly broken,” Han reported.
LAION recommends that, if troubled by the privacy risks, parents remove images of kids online as the most effective way to prevent abuse. But Han told Ars that’s “not just unrealistic, but frankly, outrageous.”
“The answer is not to call for children and parents to remove wonderful photos of kids online,” Han said. “The call should be [for] some sort of legal protections for these photos, so that kids don’t have to always wonder if their selfie is going to be abused.”