Generative AI

Adobe to automatically move subscribers to pricier, AI-focused tier in June

Subscribers to Adobe’s multi-app subscription plan, Creative Cloud All Apps, will be charged more starting on June 17 to accommodate new generative AI features.

Adobe’s announcement, spotted by MakeUseOf, says the change will affect North American subscribers to the Creative Cloud All Apps plan, which Adobe is renaming Creative Cloud Pro. Starting on June 17, Adobe will automatically renew Creative Cloud All Apps subscribers into the Creative Cloud Pro subscription, which will be $70 per month for individuals who commit to an annual plan, up from $60 for Creative Cloud All Apps. Annual plans for students and teachers are moving from $35/month to $40/month, and annual teams pricing will go from $90/month to $100/month. Monthly (non-annual) subscriptions are also increasing, from $90 to $105.

Further, in an apparent attempt to push generative AI users to more expensive subscriptions, as of June 17, Adobe will give single-app subscribers just 25 generative AI credits instead of the current 500.

Current subscribers can opt to move down to a new multi-app plan called Creative Cloud Standard, which is $55/month for annual subscribers and $82.49/month for monthly subscribers. However, this tier limits access to mobile and web app features, and subscribers can’t use premium generative AI features.

Creative Cloud Standard won’t be available to new subscribers, meaning the only option for new customers who need access to many Adobe apps will be the new AI-heavy Creative Cloud Pro plan.

Adobe’s announcement explained the higher prices by saying that the subscription tier “includes all the core applications and new AI capabilities that power the way people create today, and its price reflects that innovation, as well as our ongoing commitment to deliver the future of creative tools.”

Like today’s Creative Cloud All Apps plan, Creative Cloud Pro will include Photoshop, Illustrator, Premiere Pro, Lightroom, and access to Adobe’s web and mobile apps. AI features include unlimited usage of image and vector features in Adobe apps, including Generative Fill in Photoshop, Generative Remove in Lightroom, Generative Shape Fill in Illustrator, and 4K video generation with Generative Extend in Premiere Pro.

Largest deepfake porn site shuts down forever

The shuttering of Mr. Deepfakes won’t solve the problem of deepfakes, though. In 2022, the number of deepfakes skyrocketed as AI technology made synthetic NCII (non-consensual intimate imagery) appear more realistic than ever, prompting an FBI warning in 2023 to alert the public that the fake content was being increasingly used in sextortion schemes. But the immediate solutions society used to stop the spread had little impact. For example, in response to pressure to make the fake NCII harder to find, Google started downranking explicit deepfakes in search results but refused to demote platforms like Mr. Deepfakes unless Google received an unspecified “high volume of removals for fake explicit imagery.”

According to researchers, Mr. Deepfakes—a real person who remains anonymous but reportedly is a 36-year-old hospital worker in Toronto—created the engine driving this spike. His DeepFaceLab quickly became “the leading deepfake software, estimated to be the software behind 95 percent of all deepfake videos and has been replicated over 8,000 times on GitHub,” researchers found. For casual users, his platform hosted videos that could be purchased, usually priced above $50 if deemed realistic, while more motivated users relied on forums to make requests or enhance their own deepfake skills to become creators.

Mr. Deepfakes’ illegal trade began on Reddit but migrated to its own platform after a ban in 2018. There, thousands of deepfake creators shared technical knowledge, with the Mr. Deepfakes site forums eventually becoming “the only viable source of technical support for creating sexual deepfakes,” researchers noted last year.

Having migrated once before, the community seems likely to find a new platform on which to continue generating the illicit content, possibly under a new name, since Mr. Deepfakes seemingly wants out of the spotlight. Back in 2023, researchers estimated that the platform had more than 250,000 members, many of whom may quickly seek out or even try to build a replacement.

Further increasing the likelihood that Mr. Deepfakes’ reign of terror isn’t over, the DeepFaceLab GitHub repository—which was archived in November and can no longer be edited—remains available for anyone to copy and use.

404 Media reported that many Mr. Deepfakes members have already connected on Telegram, where synthetic NCII is also reportedly frequently traded. Hany Farid, a professor at UC Berkeley who is a leading expert on digitally manipulated images, told 404 Media that “while this takedown is a good start, there are many more just like this one, so let’s not stop here.”

First Amendment doesn’t just protect human speech, chatbot maker argues


Do LLMs generate “pure speech”?

Feds could censor chatbots if their “speech” isn’t protected, Character.AI says.

Pushing to dismiss a lawsuit alleging that its chatbots caused a teen’s suicide, Character Technologies is arguing that chatbot outputs should be considered “pure speech” deserving of the highest degree of protection under the First Amendment.

In their motion to dismiss, the developers of Character.AI (C.AI) argued that it doesn’t matter who the speaker is—whether it’s a video game character spouting scripted dialogue, a foreign propagandist circulating misinformation, or a chatbot churning out AI-generated responses to prompting—courts protect listeners’ rights to access that speech. Accusing the deceased teen’s mother, Megan Garcia, of attempting to “insert this Court into the conversations of millions of C.AI users” and supposedly endeavoring to “shut down” C.AI, the chatbot maker argued that the First Amendment bars all of her claims.

“The Court need not wrestle with the novel questions of who should be deemed the speaker of the allegedly harmful content here and whether that speaker has First Amendment rights,” Character Technologies argued, “because the First Amendment protects the public’s ‘right to receive information and ideas.'”

Warning that “imposing tort liability for one user’s alleged response to expressive content would be to ‘declare what the rest of the country can and cannot read, watch, and hear,'” the company urged the court to consider the supposed “chilling effect” that would have “both on C.AI and the entire nascent generative AI industry.”

“‘Pure speech,’ such as the chat conversations at issue here, ‘is entitled to comprehensive protection under the First Amendment,'” Character Technologies argued in another court filing.

However, Garcia’s lawyers pointed out that even a video game character’s dialogue is written by a human, arguing that all of Character Technologies’ examples of protected “pure speech” are human speech. Although the First Amendment also protects non-human corporations’ speech, corporations are formed by humans, they noted. And unlike corporations, chatbots have no intention behind their outputs, her legal team argued, instead simply using a probabilistic approach to generate text. So they argue that the First Amendment does not apply.

Character Technologies argued in response that demonstrating C.AI’s expressive intent is not required, but if it were, “conversations with Characters feature such intent” because chatbots are designed to “be expressive and engaging,” and users help design and prompt those characters.

“Users layer their own expressive intent into each conversation by choosing which Characters to talk to and what messages to send and can also edit Characters’ messages and direct Characters to generate different responses,” the chatbot maker argued.

In her response opposing the motion to dismiss, Garcia urged the court to decline what her legal team characterized as Character Technologies’ invitation to “radically expand First Amendment protections from expressions of human volition to an unpredictable, non-determinative system where humans can’t even examine many of the mathematical functions creating outputs, let alone control them.”

To support Garcia’s case, they cited a 40-year-old Eleventh Circuit ruling that a talking cat called “Blackie” could not be “considered a person” and was deemed a “non-human entity” despite possessing an “exceptional speech-like ability.”

Garcia’s lawyers hope the judge will rule that “AI output is not speech at all,” or if it is speech, that it “falls within an exception to the First Amendment”—perhaps deemed offensive to minors who the chatbot maker knew were using the service or possibly resulting in a novel finding that manipulative speech isn’t protected. If either argument is accepted, the chatbot maker’s attempt to invoke “listeners’ rights cannot save it,” they suggested.

However, Character Technologies disputes that any recognized exception to the First Amendment’s protections is applicable in the case, noting that Garcia’s team is not arguing that her son’s chats with bots were “obscene” or incited violence. Rather, the chatbot maker argued, Garcia is asking the court to “be the first to hold that ‘manipulative expression’ is unprotected by the First Amendment because a ‘disparity in power and information between speakers and listeners… frustrat[es] listeners’ rights.'”

Now, a US court is being asked to clarify if chatbot outputs are protected speech. At a hearing Monday, a US district judge in Florida, Anne Conway, did not rule from the bench, Garcia’s legal team told Ars. Asking few questions of either side, the judge is expected to issue an opinion on the motion to dismiss within the next few weeks, or possibly months.

For Garcia and her family, who appeared at the hearing, the idea that AI “has more rights than humans” felt dehumanizing, Garcia’s legal team said.

“Pandering” to Trump administration to dodge guardrails

According to Character Technologies, the court potentially agreeing with Garcia that “AI-generated speech is categorically unprotected” would have “far-reaching consequences.”

At perhaps the furthest extreme, they’ve warned Conway that without a First Amendment barrier, “the government could pass a law prohibiting AI from ‘offering prohibited accounts of history’ or ‘making negative statements about the nation’s leaders,’ as China has considered doing.” And the First Amendment specifically prohibits the government from controlling the flow of ideas in society, they noted, angling to make chatbot output protections seem crucial in today’s political climate.

Meetali Jain, Garcia’s attorney and founder of the Tech Justice Law Project, told Ars that this kind of legal challenge is new in the generative AI space, where copyright battles have dominated courtroom debates.

“This is the first time that I’ve seen not just the issue of the First Amendment being applied to gen AI but also the First Amendment being applied in this way,” Jain said.

In their court filing, Jain’s team noted that Character Technologies is not arguing that the First Amendment shielded the rights of Garcia’s son, Sewell Setzer, to receive allegedly harmful speech. Instead, their argument is “effectively juxtaposing the listeners’ rights of their millions of users against this one user who was aggrieved. So it’s kind of like the hypothetical users versus the real user who’s in court.”

Jain told Ars that Garcia’s team tried to convince the judge that Character Technologies’ argument (that it doesn’t matter who the speaker is, even when the speaker isn’t human) is reckless, since it seems to be “implying” that “AI is a sentient being and has its own rights.”

Additionally, Jain suggested that Character Technologies’ argument that outputs must be shielded to avoid government censorship seems to be “pandering” to the Trump administration’s fears that China may try to influence American politics through social media algorithms like TikTok’s or powerful open source AI models like DeepSeek.

“That suggests that there can be no sort of imposition of guardrails on AI, lest we either lose on the national security front or because of these vague hypothetical under-theorized First Amendment concerns,” Jain told Ars.

At a press briefing Tuesday, Jain confirmed that the judge clearly understood that “our position was that the First Amendment protects speech, not words.”

“LLMs do not think and feel as humans do,” Jain said, citing University of Colorado law school researchers who supported their complaint. “Rather, they generate text through statistical methods based on patterns found in their training data. And so our position was that there is a distinction to make between words and speech, and that it’s really only the latter that is deserving of First Amendment protection.”

Jain alleged that Character Technologies is angling to create a legal environment where all chatbot outputs are protected against liability claims so that C.AI can operate “without any sort of constraints or guardrails.”

It’s notable, she suggested, that the chatbot maker updated its safety features following the death of Garcia’s son, Sewell Setzer. A C.AI blog mourned the “tragic loss of one of our users” and noted updates, including changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”

Although Character Technologies argues that it’s common to update safety practices over time, Garcia’s team alleged these updates show that C.AI could have made a safer product and chose not to.

Expert warns against giving AI products rights

Character Technologies has also argued that C.AI is not a “product” as Florida law defines it. That has striking industry implications, according to Camille Carlton, a policy director for the Center for Humane Technology who is serving as a technical expert on the case.

At the press briefing, Carlton suggested that “by invoking these First Amendment protections over speech without really specifying whose speech is being protected, Character.AI’s defense has really laid the groundwork for a world in which LLM outputs are protected speech and for a world in which AI products could have other protected rights in the same way that humans do.”

Since chatbot outputs seemingly don’t have Section 230 protections—Jain noted it was somewhat surprising that Character Technologies did not raise this defense—the chatbot maker may be attempting to secure the First Amendment as a shield instead, Carlton suggested.

“It’s a move that they’re incentivized to take because it would reduce their own accountability and their own responsibility,” Carlton said.

Jain expects that whatever Conway decides, the losing side will appeal. However, if Conway denies the motion, then discovery can begin, perhaps allowing Garcia the clearest view yet into the allegedly harmful chats she believes manipulated her son into feeling completely disconnected from the real world.

If courts grant AI products across the board such rights, Carlton warned, troubled parents like Garcia may have no recourse for potentially dangerous outputs.

“This issue could fundamentally reshape how the law approaches AI free speech and corporate accountability,” Carlton said. “And I think the bottom line from our perspective—and from what we’re seeing in terms of the trends in Character.AI and the broader trends from these AI labs—is that we need to double down on the fact that these are products. They’re not people.”

Character Technologies declined Ars’ request to comment.

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Midjourney introduces first new image generation model in over a year

AI image generator Midjourney released its first new model in quite some time today; dubbed V7, it’s a ground-up rework that is available in alpha to users now.

There are two areas of improvement in V7: the first is better images, and the second is new tools and workflows.

Starting with the image improvements, V7 promises much higher coherence and consistency for hands, fingers, body parts, and “objects of all kinds.” It also offers much more detailed and realistic textures and materials, like skin wrinkles or the subtleties of a ceramic pot.

Those details are often among the most obvious telltale signs that an image has been AI-generated. To be clear, Midjourney isn’t claiming to have made advancements that make AI images unrecognizable to a trained eye; it’s just saying that some of the messiness we’re accustomed to has been cleaned up to a significant degree.

V7 can reproduce materials and lighting situations that V6.1 usually couldn’t. Credit: Xeophon

On the features side, the star of the show is the new “Draft Mode.” On its various communication channels with users (a blog, Discord, X, and so on), Midjourney says that “Draft mode is half the cost and renders images at 10 times the speed.”

However, the images are of lower quality than what you get in the other modes, so this is not intended to be the way you produce final images. Rather, it’s meant to be a way to iterate and explore to find the desired result before switching modes to make something ready for public consumption.

V7 comes with two modes: turbo and relax. Turbo generates final images quickly but is twice as expensive in terms of credit use, while relax mode takes its time but is half as expensive. There is currently no standard mode for V7, strangely; Midjourney says that’s coming later, as it needs some more time to be refined.

Google’s new AI generates hypotheses for researchers

Over the past few years, Google has embarked on a quest to jam generative AI into every product and initiative possible. Google has robots summarizing search results, interacting with your apps, and analyzing the data on your phone. And sometimes, the output of generative AI systems can be surprisingly good despite lacking any real knowledge. But can they do science?

Google Research is now angling to turn AI into a scientist—well, a “co-scientist.” The company has a new multi-agent AI system based on Gemini 2.0 aimed at biomedical researchers that can supposedly point the way toward new hypotheses and areas of biomedical research. However, Google’s AI co-scientist boils down to a fancy chatbot. 

A flesh-and-blood scientist using Google’s co-scientist would input their research goals, ideas, and references to past research, allowing the robot to generate possible avenues of research. The AI co-scientist contains multiple interconnected models that churn through the input data and access Internet resources to refine the output. Inside the tool, the different agents challenge each other to create a “self-improving loop,” which is similar to the new raft of reasoning AI models like Gemini Flash Thinking and OpenAI o3.
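Google hasn’t published the co-scientist’s internals, but the generate-and-critique loop described above can be sketched generically. Everything in the snippet below (the function names, the toy scoring stand-ins, the number of rounds) is a hypothetical illustration, not Google’s implementation:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Hypothesis:
    text: str
    score: float = 0.0

def refine_hypotheses(
    generate: Callable[[str], List[str]],  # agent that proposes candidate hypotheses
    critique: Callable[[str], float],      # agent that scores a hypothesis
    research_goal: str,
    rounds: int = 3,
) -> Hypothesis:
    """Keep the best-scored hypothesis across several generate-and-critique rounds."""
    best = Hypothesis(text="", score=float("-inf"))
    prompt = research_goal
    for _ in range(rounds):
        candidates = [Hypothesis(t, critique(t)) for t in generate(prompt)]
        top = max(candidates, key=lambda h: h.score, default=best)
        if top.score > best.score:
            best = top
        # Feed the strongest hypothesis back in so the next round can refine it.
        prompt = f"{research_goal}\nRefine this hypothesis: {best.text}"
    return best

# Toy stand-ins for the generator and critic agents:
toy_generate = lambda p: [f"{p} (variant {i})" for i in range(3)]
toy_critique = lambda h: float(len(h))  # longer == "better" in this toy example
print(refine_hypotheses(toy_generate, toy_critique, "Repurpose drug X for condition Y").text)
```

In Google’s system, the generator and critic roles are reportedly played by Gemini 2.0-based agents; the toy length check here simply stands in for that model-based review.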

This is still a generative AI system like Gemini, so it doesn’t truly have any new ideas or knowledge. However, it can extrapolate from existing data to potentially make decent suggestions. At the end of the process, Google’s AI co-scientist spits out research proposals and hypotheses. The human scientist can even talk with the robot about the proposals in a chatbot interface. 

The structure of Google’s AI co-scientist.

You can think of the AI co-scientist as a highly technical form of brainstorming. The same way you can bounce party-planning ideas off a consumer AI model, scientists will be able to conceptualize new scientific research with an AI tuned specifically for that purpose. 

Testing AI science

Today’s popular AI systems have a well-known problem with accuracy. Generative AI always has something to say, even if the model doesn’t have the right training data or model weights to be helpful, and fact-checking with more AI models can’t work miracles. Leveraging its reasoning roots, the AI co-scientist conducts an internal evaluation to improve outputs, and Google says the self-evaluation ratings correlate to greater scientific accuracy. 

The internal metrics are one thing, but what do real scientists think? Google had human biomedical researchers evaluate the robot’s proposals, and they reportedly rated the AI co-scientist higher than other, less specialized agentic AI systems. The experts also agreed the AI co-scientist’s outputs showed greater potential for impact and novelty compared to standard AI models. 

This doesn’t mean the AI’s suggestions are all good. However, Google partnered with several universities to test some of the AI research proposals in the laboratory. For example, the AI suggested repurposing certain drugs for treating acute myeloid leukemia, and laboratory testing suggested it was a viable idea. Research at Stanford University also showed that the AI co-scientist’s ideas about treatment for liver fibrosis were worthy of further study. 

This is compelling work, certainly, but calling this system a “co-scientist” is perhaps a bit grandiose. Despite the insistence from AI leaders that we’re on the verge of creating living, thinking machines, AI isn’t anywhere close to being able to do science on its own. That doesn’t mean the AI co-scientist won’t be useful, though. Google’s new AI could help humans interpret and contextualize expansive data sets and bodies of research, even if it can’t understand or offer true insights.

Google says it wants more researchers working with this AI system in the hope it can assist with real research. Interested researchers and organizations can apply to be part of the Trusted Tester program, which provides access to the co-scientist UI as well as an API that can be integrated with existing tools.

AI making up cases can get lawyers fired, scandalized law firm warns

Morgan & Morgan—which bills itself as “America’s largest injury law firm” that fights “for the people”—learned the hard way this month that even one lawyer blindly citing AI-hallucinated case law can risk sullying the reputation of an entire nationwide firm.

In a letter shared in a court filing, Morgan & Morgan’s chief transformation officer, Yath Ithayakumar, warned the firm’s more than 1,000 attorneys that citing fake AI-generated cases in court filings could be cause for disciplinary action, including “termination.”

“This is a serious issue,” Ithayakumar wrote. “The integrity of your legal work and reputation depend on it.”

Morgan & Morgan’s AI troubles were sparked in a lawsuit claiming that Walmart was involved in designing a supposedly defective hoverboard toy that allegedly caused a family’s house fire. Despite being an experienced litigator, Rudwin Ayala, the firm’s lead attorney on the case, cited eight cases in a court filing that Walmart’s lawyers could not find anywhere except on ChatGPT.

These “cited cases seemingly do not exist anywhere other than in the world of Artificial Intelligence,” Walmart’s lawyers said, urging the court to consider sanctions.

So far, the court has not ruled on possible sanctions. But Ayala was immediately dropped from the case and was replaced by his direct supervisor, T. Michael Morgan, Esq. Expressing “great embarrassment” over Ayala’s fake citations that wasted the court’s time, Morgan struck a deal with Walmart’s attorneys to pay all fees and expenses associated with replying to the errant court filing, which Morgan told the court should serve as a “cautionary tale” for both his firm and “all firms.”

Reuters found that lawyers improperly citing AI-hallucinated cases have scrambled litigation in at least seven cases in the past two years. Some lawyers have been sanctioned, including an early case last June fining lawyers $5,000 for citing chatbot “gibberish” in filings. And in at least one case in Texas, Reuters reported, a lawyer was fined $2,000 and required to attend a course on responsible use of generative AI in legal applications. But in another high-profile incident, Michael Cohen, Donald Trump’s former lawyer, avoided sanctions after Cohen accidentally gave his own attorney three fake case citations to help his defense in his criminal tax and campaign finance litigation.

Reddit mods are fighting to keep AI slop off subreddits. They could use help.


Mods ask Reddit for tools as generative AI gets more popular and inconspicuous.

Redditors in a treehouse with a NO AI ALLOWED sign. Credit: Aurich Lawson (based on a still from Getty Images)

Like it or not, generative AI is carving out its place in the world. And some Reddit users are definitely in the “don’t like it” category. While some subreddits openly welcome AI-generated images, videos, and text, others have responded to the growing trend by banning most or all posts made with the technology.

To better understand the reasoning and obstacles associated with these bans, Ars Technica spoke with moderators of subreddits that totally or partially ban generative AI. Almost all these volunteers described moderating against generative AI as a time-consuming challenge they expect to get more difficult as time goes on. And most are hoping that Reddit will release a tool to help their efforts.

It’s hard to know how much AI-generated content is actually on Reddit, and getting an estimate would be a large undertaking. Image library Freepik has analyzed the use of AI-generated content on social media but leaves Reddit out of its research because “it would take loads of time to manually comb through thousands of threads within the platform,” spokesperson Bella Valentini told me. For its part, Reddit doesn’t publicly disclose how many Reddit posts involve generative AI use.

To be clear, we’re not suggesting that Reddit has a large problem with generative AI use. By now, many subreddits seem to have agreed on their approach to AI-generated posts, and generative AI has not superseded the real, human voices that have made Reddit popular.

Still, mods largely agree that generative AI will likely get more popular on Reddit over the next few years, making generative AI modding increasingly important to both moderators and general users. Generative AI’s rising popularity has also had implications for Reddit the company, which in 2024 started licensing Reddit posts to train the large language models (LLMs) powering generative AI.

(Note: All the moderators I spoke with for this story requested that I use their Reddit usernames instead of their real names due to privacy concerns.)

No generative AI allowed

When it comes to anti-generative AI rules, numerous subreddits have zero-tolerance policies, while others permit posts that use generative AI if it’s combined with human elements or is executed very well. These rules task mods with identifying posts using generative AI and determining if they fit the criteria to be permitted on the subreddit.

Many subreddits have rules against posts made with generative AI because their mod teams or members consider such posts “low effort” or believe AI is counterintuitive to the subreddit’s mission of providing real human expertise and creations.

“At a basic level, generative AI removes the human element from the Internet; if we allowed it, then it would undermine the very point of r/AskHistorians, which is engagement with experts,” the mods of r/AskHistorians told me in a collective statement.

The subreddit’s goal is to provide historical information, and its mods think generative AI could make information shared on the subreddit less accurate. “[Generative AI] is likely to hallucinate facts, generate non-existent references, or otherwise provide misleading content,” the mods said. “Someone getting answers from an LLM can’t respond to follow-ups because they aren’t an expert. We have built a reputation as a reliable source of historical information, and the use of [generative AI], especially without oversight, puts that at risk.”

Similarly, Halaku, a mod of r/wheeloftime, told me that the subreddit’s mods banned generative AI because “we focus on genuine discussion.” Halaku believes AI content can’t facilitate “organic, genuine discussion” and “can drown out actual artwork being done by actual artists.”

The r/lego subreddit banned AI-generated art because it caused confusion in online fan communities and retail stores selling Lego products, r/lego mod Mescad said. “People would see AI-generated art that looked like Lego on [I]nstagram or [F]acebook and then go into the store to ask to buy it,” they explained. “We decided that our community’s dedication to authentic Lego products doesn’t include AI-generated art.”

Not all of Reddit is against generative AI, of course. Subreddits dedicated to the technology exist, and some general subreddits permit the use of generative AI in some or all forms.

“When it comes to bans, I would rather focus on hate speech, Nazi salutes, and things that actually harm the subreddits,” said 3rdusernameiveused, who moderates r/consoom and r/TeamBuilder25, which don’t ban generative AI. “AI art does not do that… If I was going to ban [something] for ‘moral’ reasons, it probably won’t be AI art.”

“Overwhelmingly low-effort slop”

Some generative AI bans are reflective of concerns that people are not being properly compensated for the content they create, which is then fed into LLM training.

Mod Mathgeek007 told me that r/DeadlockTheGame bans generative AI because its members consider it “a form of uncredited theft,” adding:

You aren’t allowed to sell/advertise the work of others, and AI in a sense is using patterns derived from the work of others to create mockeries. I’d personally have less of an issue with it if the artists involved were credited and compensated—and there are some niche AI tools that do this.

Other moderators simply think generative AI reduces the quality of a subreddit’s content.

“It often just doesn’t look good… the art can often look subpar,” Mathgeek007 said.

Similarly, r/videos bans most AI-generated content because, according to its announcement, the videos are “annoying” and “just bad video” 99 percent of the time. In an online interview, r/videos mod Abrownn told me:

It’s overwhelmingly low-effort slop thrown together simply for views/ad revenue. The creators rarely care enough to put real effort into post-generation [or] editing of the content [and] rarely have coherent narratives [in] the videos, etc. It seems like they just throw the generated content into a video, export it, and call it a day.

An r/fakemon mod told me, “I can’t think of anything more low-effort in terms of art creation than just typing words and having it generated for you.”

Some moderators say generative AI helps people spam unwanted content on a subreddit, including posts that are irrelevant to the subreddit and posts that attack users.

“[Generative AI] content is almost entirely posted for purely self promotional/monetary reasons, and we as mods on Reddit are constantly dealing with abusive users just spamming their content without regard for the rules,” Abrownn said.

A moderator of the r/wallpaper subreddit, which permits generative AI, disagrees. The mod told me that generative AI “provides new routes for novel content” in the subreddit and questioned concerns about generative AI stealing from human artists or offering lower-quality work, saying those problems aren’t unique to generative AI:

Even in our community, we observe human-generated content that is subjectively low quality (poor camera/[P]hotoshopping skills, low-resolution source material, intentional “shitposting”). It can be argued that AI-generated content amplifies this behavior, but our experience (which we haven’t quantified) is that the rate of such behavior (whether human-generated or AI-generated content) has not changed much within our own community.

But we’re not a very active community—[about] 13 posts per day … so it very well could be a “frog in boiling water” situation.

Generative AI “wastes our time”

Many mods are confident in their ability to effectively identify posts that use generative AI. A bigger problem is how much time it takes to identify these posts and remove them.

The r/AskHistorians mods, for example, noted that all bans on the subreddit (including bans unrelated to AI) have “an appeals process,” and “making these assessments and reviewing AI appeals means we’re spending a considerable amount of time on something we didn’t have to worry about a few years ago.”

They added:

Frankly, the biggest challenge with [generative AI] usage is that it wastes our time. The time spent evaluating responses for AI use, responding to AI evangelists who try to flood our subreddit with inaccurate slop and then argue with us in modmail [direct messages to a subreddit’s mod team], and discussing edge cases could better be spent on other subreddit projects, like our podcast, newsletter, and AMAs, … providing feedback to users, or moderating input from users who intend to positively contribute to the community.

Several other mods I spoke with agree. Mathgeek007, for example, named “fighting AI bros” as a common obstacle. And for r/wheeloftime moderator Halaku, the biggest challenge in moderating against generative AI is “a generational one.”

“Some of the current generation don’t have a problem with it being AI because content is content, and [they think] we’re being elitist by arguing otherwise, and they want to argue about it,” they said.

A couple of mods noted that it’s less time-consuming to moderate subreddits that ban generative AI than it is to moderate those that allow posts using generative AI, depending on the context.

“On subreddits where we allowed AI, I often take a bit longer time to actually go into each post where I feel like… it’s been AI-generated to actually look at it and make a decision,” explained N3DSdude, a mod of several subreddits with rules against generative AI, including r/DeadlockTheGame.

MyarinTime, a moderator for r/lewdgames, which allows generative AI images, highlighted the challenges of identifying human-prompted generative AI content versus AI-generated content prompted by a bot:

When the AI bomb started, most of those bots started using AI content to work around our filters. Most of those bots started showing some random AI render, so it looks like you’re actually talking about a game when you’re not. There’s no way to know when those posts are legit games unless [you check] them one by one. I honestly believe it would be easier if we kick any post with [AI-]generated image… instead of checking if a button was pressed by a human or not.

Mods expect things to get worse

Most mods told me it’s pretty easy for them to detect posts made with generative AI, pointing to the distinct tone and favored phrases of AI-generated text. A few said that AI-generated video is harder to spot but still detectable. But as generative AI gets more advanced, moderators are expecting their work to get harder.

In a joint statement, r/dune mods Blue_Three and Herbalhippie said, “AI used to have a problem making hands—i.e., too many fingers, etc.—but as time goes on, this is less and less of an issue.”

R/videos’ Abrownn also wonders how easy it will be to detect AI-generated Reddit content “as AI tools advance and content becomes more lifelike.”

Mathgeek007 added:

AI is becoming tougher to spot and is being propagated at a larger rate. When AI style becomes normalized, it becomes tougher to fight. I expect generative AI to get significantly worse—until it becomes indistinguishable from ordinary art.

Moderators currently use various methods to fight generative AI, but they’re not perfect. r/AskHistorians mods, for example, use “AI detectors, which are unreliable, problematic, and sometimes require paid subscriptions, as well as our own ability to detect AI through experience and expertise,” while N3DSdude pointed to tools like Quid and GPTZero.
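None of the mods described a precise workflow, but the “favored phrases” heuristic they point to can be sketched as a simple scoring pass. The phrase list and threshold below are illustrative guesses, not how GPTZero, Quid, or any mod team actually works:

```python
# The phrase list and threshold below are illustrative guesses, not the logic
# of any real detector such as GPTZero or Quid.
AI_TELL_PHRASES = [
    "delve into",
    "in today's fast-paced world",
    "it's important to note",
    "as an ai language model",
    "rich tapestry",
]

def phrase_score(text: str) -> int:
    lowered = text.lower()
    return sum(lowered.count(phrase) for phrase in AI_TELL_PHRASES)

def looks_ai_generated(text: str, threshold: int = 2) -> bool:
    # Crude heuristic: several tell-tale phrases means flag the post for human review.
    return phrase_score(text) >= threshold

print(looks_ai_generated("Let's delve into the rich tapestry of this topic."))
```

Heuristics this simple are easy to evade, which is part of why mods describe existing detectors as unreliable.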

To manage current and future work around blocking generative AI, most of the mods I spoke with said they’d like Reddit to release a proprietary tool to help them.

“I’ve yet to see a reliable tool that can detect AI-generated video content,” Abrownn said. “Even if we did have such a tool, we’d be putting hundreds of hours of content through the tool daily, which would get rather expensive rather quickly. And we’re unpaid volunteer moderators, so we will be outgunned shortly when it comes to detecting this type of content at scale. We can only hope that Reddit will offer us a tool at some point in the near future that can help deal with this issue.”

A Reddit spokesperson told me that the company is evaluating what such a tool could look like. But Reddit doesn’t have a rule banning generative AI overall, and the spokesperson said the company doesn’t want to release a tool that would hinder expression or creativity.

For now, Reddit seems content to rely on moderators to remove AI-generated content when appropriate. Reddit’s spokesperson added:

Our moderation approach helps ensure that content on Reddit is curated by real humans. Moderators are quick to remove content that doesn’t follow community rules, including harmful or irrelevant AI-generated content—we don’t see this changing in the near future.

Making a generative AI Reddit tool wouldn’t be easy

Reddit is handling the evolving concerns around generative AI as it has handled other content issues, including by leveraging AI and machine learning tools. Reddit’s spokesperson said that this includes testing tools that can identify AI-generated media, such as images of politicians.

But making a proprietary tool that allows moderators to detect AI-generated posts won’t be easy, if it happens at all. The current tools for detecting generative AI are limited in their capabilities, and as generative AI advances, Reddit would need to provide tools that are more advanced than the AI-detecting tools that are currently available.

That would require a good deal of technical resources and would also likely present notable economic challenges for the social media platform, which only became profitable last year. And as noted by r/videos moderator Abrownn, tools for detecting AI-generated video still have a long way to go, making a Reddit-specific system especially challenging to create.

But even with a hypothetical Reddit tool, moderators would still have their work cut out for them. And because Reddit’s popularity is largely due to its content from real humans, that work is important.

Since Reddit’s inception, that has meant relying on moderators, which Reddit has said it intends to keep doing. As r/dune mods Blue_Three and herbalhippie put it, it’s in Reddit’s “best interest that much/most content remains organic in nature.” After all, Reddit’s profitability has a lot to do with how much AI companies are willing to pay to access Reddit data. That value would likely decline if Reddit posts became largely AI-generated themselves.

But providing the technology to ensure that generative AI isn’t abused on Reddit would be a large challenge. For now, volunteer laborers will continue to bear the brunt of generative AI moderation.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

Microsoft sues service for creating illicit content with its AI platform

Microsoft and others forbid using their generative AI systems to create various content. Content that is off limits includes material that features or promotes sexual exploitation or abuse, is erotic or pornographic, or attacks, denigrates, or excludes people based on race, ethnicity, national origin, gender, gender identity, sexual orientation, religion, age, disability status, or similar traits. Microsoft also doesn’t allow the creation of content containing threats, intimidation, promotion of physical harm, or other abusive behavior.

Besides expressly banning such usage of its platform, Microsoft has also developed guardrails that inspect both prompts inputted by users and the resulting output for signs the content requested violates any of these terms. These code-based restrictions have been repeatedly bypassed in recent years through hacks, some benign and performed by researchers and others by malicious threat actors.
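Microsoft hasn’t detailed how these guardrails are built, but the basic pattern of inspecting both the prompt and the output can be sketched generically. The blocklist, function names, and model stub below are illustrative assumptions, not Microsoft’s actual safety system:

```python
import re
from typing import Callable

# Placeholder policy terms; a real system would use trained classifiers, not regexes.
BLOCKED_PATTERNS = [r"\bsexual exploitation\b", r"\bintimidat(e|ion)\b"]

def violates_policy(text: str) -> bool:
    return any(re.search(p, text, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def guarded_generate(model: Callable[[str], str], prompt: str) -> str:
    if violates_policy(prompt):      # inspect the incoming prompt
        return "Request blocked by content policy."
    output = model(prompt)
    if violates_policy(output):      # inspect the generated output
        return "Response withheld by content policy."
    return output

# Toy model stand-in:
echo_model = lambda p: f"Generated text about: {p}"
print(guarded_generate(echo_model, "a landscape painting"))
```

Real deployments layer safety mitigations at the model, platform, and application levels rather than relying on a simple keyword list like this one.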

Microsoft didn’t outline precisely how the defendants’ software was allegedly designed to bypass the guardrails the company had created.

Masada wrote:

Microsoft’s AI services deploy strong safety measures, including built-in safety mitigations at the AI model, platform, and application levels. As alleged in our court filings unsealed today, Microsoft has observed a foreign-based threat-actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites. In doing so, they sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services. Cybercriminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content. Upon discovery, Microsoft revoked cybercriminal access, put in place countermeasures, and enhanced its safeguards to further block such malicious activity in the future.

The lawsuit alleges the defendants’ service violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act and constitutes wire fraud, access device fraud, common law trespass, and tortious interference. The complaint seeks an injunction enjoining the defendants from engaging in “any activity herein.”

Why I’m disappointed with the TVs at CES 2025


Won’t someone please think of the viewer?

Op-ed: TVs miss opportunity for real improvement by prioritizing corporate needs.

The TV industry is hitting users over the head with AI and other questionable gimmicks. Credit: Getty

If you asked someone what they wanted from TVs released in 2025, I doubt they’d say “more software and AI.” Yet, if you look at what TV companies have planned for this year, which is being primarily promoted at the CES technology trade show in Las Vegas this week, software and AI are where much of the focus is.

The trend reveals the implications of TV brands increasingly viewing themselves as software rather than hardware companies, with their products being customer data rather than TV sets. This points to an alarming future for smart TVs, where even premium models sought after for top-end image quality and hardware capabilities are stuffed with unwanted gimmicks.

LG’s remote regression

LG has long made some of the best—and most expensive—TVs available. Its OLED lineup, in particular, has appealed to people who use their TVs to watch Blu-rays, enjoy HDR, and the like. However, some features that LG is introducing to high-end TVs this year seem to better serve LG’s business interests than those users’ needs.

Take the new remote. Formerly known as the Magic Remote, LG is calling the 2025 edition the AI Remote. That is already likely to dissuade people who are skeptical about AI marketing in products (research suggests there are many such people). But the more immediately frustrating part is that the new remote doesn’t have a dedicated button for switching input modes, as previous remotes from LG and countless other remotes do.

LG’s AI Remote. Credit: Tom’s Guide/YouTube

To use the AI Remote to change the TV’s input—a common task for people using their sets to play video games, watch Blu-rays or DVDs, connect their PC, et cetera—you have to long-press the Home Hub button. Single-pressing that button brings up a dashboard of webOS (the operating system for LG TVs) apps. That functionality isn’t immediately apparent to someone picking up the remote for the first time and detracts from the remote’s convenience.

By overlooking other obviously helpful controls (play/pause, fast forward/rewind, and numbers) while including buttons dedicated to things like LG’s free ad-supported streaming TV (FAST) channels and Amazon Alexa, LG missed an opportunity to update its remote in a way centered on how people frequently use TVs. But user convenience doesn’t seem to have driven this change; instead, LG appears more focused on getting people to use webOS apps. LG can monetize app usage by, for example, getting a cut of streaming subscription sign-ups, selling ads on webOS, and selling and leveraging user data.

Moving from hardware provider to software platform

LG, like many other TV OEMs, has been growing its ads and data business. Deals with data analytics firms like Nielsen give it more incentive to acquire customer data. Declining TV margins and rock-bottom prices from budget brands (like Vizio and Roku, which sometimes lose money on TV hardware sales and make up for the losses through ad sales and data collection) are also pushing LG’s software focus. In the case of the AI Remote, software prioritization comes at the cost of an oft-used hardware capability.

Further demonstrating its motives, in September 2023, LG announced intentions to “become a media and entertainment platform company” by offering “services” and a “collection of curated content in products, including LG OLED and LG QNED TVs.” At the time, the South Korean firm said it would invest 1 trillion KRW (about $737.7 million) into its webOS business through 2028.

Low TV margins, improved TV durability, market saturation, and broader economic challenges are all serious challenges for an electronics company like LG and have pushed LG to explore alternative ways to make money off of TVs. However, after paying four figures for TV sets, LG customers shouldn’t be further burdened to help LG accrue revenue.

Google TVs gear up for subscription-based features

Numerous TV manufacturers, including Sony, TCL, and Philips, rely on Google software to power their TV sets. Many TVs announced at CES 2025 will come with what Google calls Gemini Enhanced Google Assistant. The idea that people using Google TVs have been asking for this is somewhat undercut by the fact that Google Assistant interactions with TVs have thus far been “somewhat limited,” per a Lowpass report.

Nevertheless, these TVs are adding far-field microphones so that they can hear commands directed at the voice assistant. For the first time, the voice assistant will include Google’s generative AI chatbot, Gemini, this year—another feature that TV users don’t typically ask for. Despite the lack of demand and the privacy concerns associated with microphones that can pick up audio from far away even when the TV is off, companies are still loading 2025 TVs with far-field mics to support Gemini. Notably, these TVs will likely allow the mics to be disabled, as you can with other TVs that use far-field mics. But I still wonder what features or hardware could have been implemented instead.

Google is also working toward having people pay a subscription fee to use Gemini on their TVs, PCWorld reported.

“For us, our biggest goal is to create enough value that yes, you would be willing to pay for [Gemini],” Google TV VP and GM Shalini Govil-Pai told the publication.

The executive pointed to future capabilities for the Gemini-driven Google Assistant on TVs, including asking it to “suggest a movie like Jurassic Park but suitable for young children” or to show “Bollywood movies that are similar to Mission: Impossible.”

She also pointed to future features like showing weather, top news stories, and upcoming calendar events when someone is near the TV, showing AI-generated news briefings, and the ability to respond to questions like “explain the solar system to a third-grader” with text, audio, and YouTube videos.

But when people have desktops, laptops, tablets, and phones in their homes already, how helpful are these features truly? Govil-Pai admitted to PCWorld that “people are not used to” using their TVs this way “so it will take some time for them to adapt to it.” With this in mind, it seems odd for TV companies to implement new, more powerful microphones to support features that Google acknowledges aren’t in demand. I’m not saying that tech companies shouldn’t get ahead of the curve and offer groundbreaking features that users hadn’t considered might benefit them. But already planning to monetize those capabilities—with a subscription, no less—suggests a prioritization of corporate needs.

Samsung is hungry for AI

People who want to use their TV for cooking inspiration often turn to cooking shows or online cooking videos. However, Samsung wants people to use its TV software to identify dishes they want to try making.

During CES, Samsung announced Samsung Food for TVs. The feature leverages Samsung TVs’ AI processors to identify food displayed on the screen and recommend relevant recipes. Samsung introduced the capability in 2023 as an iOS and Android app after buying the app Whisk in 2019. As noted by TechCrunch, though, other AI tools for providing recipes based on food images are flawed.

So why bother with such a feature? You can get a taste of Samsung’s motivation from its CES-announced deal with Instacart that lets people order off Instacart from Samsung smart fridges that support the capability. Samsung Food on TVs can show users the progress of food orders placed via the Samsung Food mobile app on their TVs. Samsung Food can also create a shopping list for recipe ingredients based on what it knows (using cameras and AI) is in your (supporting) Samsung fridge. The feature also requires a Samsung account, which allows the company to gather more information on users.

Other software-centric features loaded into Samsung TVs this year include a dedicated AI button on the new TVs’ remotes, the ability to use gestures to control the TV but only if you’re wearing a Samsung Galaxy Watch, and AI Karaoke, which lets people sing karaoke using their TVs by stripping vocals from music playing and using their phone as a mic.

Like LG, Samsung has shown growing interest in ads and data collection. In May, for example, it expanded its automatic content recognition tech to track ad exposure on streaming services viewed on its TVs. It also has an ads analytics partnership with Experian.

Large language models on TVs

TVs are mainstream technology in most US homes. Generative AI chatbots, on the other hand, are emerging technology that many people have yet to try. Despite these disparities, LG and Samsung are incorporating Microsoft’s Copilot chatbot into 2025 TVs.

LG claims that Copilot will help its TVs “understand conversational context and uncover subtle user intentions,” adding: “Access to Microsoft Copilot further streamlines the process, allowing users to efficiently find and organize complex information using contextual cues. For an even smoother and more engaging experience, the AI chatbot proactively identifies potential user challenges and offers timely, effective solutions.”

Similarly, Samsung, which is also adding Copilot to some of its smart monitors, said in its announcement that Copilot will help with “personalized content recommendations.” Samsung has also said that Copilot will help its TVs understand strings of commands, like increasing the volume and changing the channel, CNET noted. Samsung said it intends to work with additional AI partners, namely Google, but it’s unclear why it needs multiple AI partners, especially when it hasn’t yet seen how people use large language models on their TVs.

TV-as-a-platform

To be clear, this isn’t a condemnation of new, unexpected TV features, nor is it a censure of new TV apps or the use of AI in TVs.

AI marketing hype is real and misleading regarding the demand, benefits, and possibilities of AI in consumer gadgets. However, there are some cases when innovative software, including AI, can improve things that TV users not only care about but actually want or need. For example, some TVs use AI to try to optimize sound, color, and/or brightness, sometimes based on current environmental conditions, or to upscale content. This week, Samsung announced AI Live Translate for TVs. The feature is supposed to be able to translate foreign-language closed captions in real time, providing a way for people to watch more international content. It’s a feature I didn’t ask for but can see being useful and changing how I use my TV.

But a lot of this week’s TV announcements underscore an alarming TV-as-a-platform trend where TV sets are sold as a way to infiltrate people’s homes so that apps, AI, and ads can be pushed onto viewers. Even high-end TVs are moving in this direction and amplifying features with questionable usefulness, effectiveness, and privacy considerations. Again, I can’t help but wonder what better innovations could have come out this year if more R&D was directed toward hardware and other improvements that are more immediately rewarding for users than karaoke with AI.

The TV industry is facing economic challenges, and, understandably, TV brands are seeking creative solutions for making money. But for consumers, that means paying for features that you’re likely to ignore. Ultimately, many people just want a TV with amazing image and sound quality. Finding that without having to sift through a bunch of fluff is getting harder.


Anthropic gives court authority to intervene if chatbot spits out song lyrics

Anthropic did not immediately respond to Ars’ request for comment on how guardrails currently work to prevent the alleged jailbreaks, but publishers appear satisfied with the current guardrails, given that they accepted the deal.

Whether AI training on lyrics is infringing remains unsettled

Now, the matter of whether Anthropic has strong enough guardrails to block allegedly harmful outputs is settled, Lee wrote, allowing the court to focus on arguments regarding “publishers’ request in their Motion for Preliminary Injunction that Anthropic refrain from using unauthorized copies of Publishers’ lyrics to train future AI models.”

Anthropic said in its motion opposing the preliminary injunction that relief should be denied.

“Whether generative AI companies can permissibly use copyrighted content to train LLMs without licenses,” Anthropic’s court filing said, “is currently being litigated in roughly two dozen copyright infringement cases around the country, none of which has sought to resolve the issue in the truncated posture of a preliminary injunction motion. It speaks volumes that no other plaintiff—including the parent company record label of one of the Plaintiffs in this case—has sought preliminary injunctive relief from this conduct.”

In a statement, Anthropic’s spokesperson told Ars that “Claude isn’t designed to be used for copyright infringement, and we have numerous processes in place designed to prevent such infringement.”

“Our decision to enter into this stipulation is consistent with those priorities,” Anthropic said. “We continue to look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use.”

This suit will likely take months to fully resolve, as the question of whether AI training is a fair use of copyrighted works is complex and remains hotly disputed in court. For Anthropic, the stakes could be high, with a loss potentially triggering more than $75 million in fines, as well as an order possibly forcing Anthropic to reveal and destroy all the copyrighted works in its training data.

OpenAI defends for-profit shift as critical to sustain humanitarian mission

OpenAI has finally shared details about its plans to shake up its core business by shifting to a for-profit corporate structure.

On Thursday, OpenAI posted on its blog, confirming that in 2025, the existing for-profit arm will be transformed into a Delaware-based public benefit corporation (PBC). As a PBC, OpenAI would be required to balance its shareholders’ and stakeholders’ interests with the public benefit. To achieve that, OpenAI would offer “ordinary shares of stock” while using some profits to further its mission—”ensuring artificial general intelligence (AGI) benefits all of humanity”—to serve a social good.

To compensate for losing control over the for-profit, the nonprofit would have some shares in the PBC, but it’s currently unclear how many will be allotted. Independent financial advisors will help OpenAI reach a “fair valuation,” the blog said, while promising the new structure would “multiply” the donations that previously supported the nonprofit.

“Our plan would result in one of the best resourced nonprofits in history,” OpenAI said. (During its latest funding round, OpenAI was valued at $157 billion.)

OpenAI claimed the nonprofit’s mission would be more sustainable under the proposed changes, as the costs of AI innovation only continue to compound. The new structure would set the PBC up to control OpenAI’s operations and business while the nonprofit would “hire a leadership team and staff to pursue charitable initiatives in sectors such as health care, education, and science,” OpenAI said.

Some of OpenAI’s rivals, such as Anthropic and Elon Musk’s xAI, use a similar corporate structure, OpenAI noted.

Critics had previously pushed back on this plan, arguing that humanity may be better served if the nonprofit continues controlling the for-profit arm of OpenAI. But OpenAI argued that the old way made it hard for the Board “to directly consider the interests of those who would finance the mission and does not enable the non-profit to easily do more than control the for-profit.”

Photobucket opted inactive users into privacy nightmare, lawsuit says

Photobucket was sued Wednesday after a recent privacy policy update revealed plans to sell users’ photos—including biometric identifiers like face and iris scans—to companies training generative AI models.

The proposed class action seeks to stop Photobucket from selling users’ data without first obtaining written consent, alleging that Photobucket either intentionally or negligently failed to comply with strict privacy laws in states like Illinois, New York, and California by claiming it can’t reliably determine users’ geolocation.

Two separate classes could be protected by the litigation. The first includes anyone who ever uploaded a photo between 2003—when Photobucket was founded—and May 1, 2024. Another potentially even larger class includes any non-users depicted in photographs uploaded to Photobucket, whose biometric data has also allegedly been sold without consent.

Photobucket risks huge fines if a jury agrees with Photobucket users that the photo-storing site unjustly enriched itself by breaching its user contracts and illegally seizing biometric data without consent. As many as 100 million users could be awarded untold punitive damages, as well as up to $5,000 per “willful or reckless violation” of various statutes.

If a substantial portion of Photobucket’s entire 13 billion-plus photo collection is found to have been sold in violation of these laws, the fines could add up quickly. In October, Photobucket estimated that “about half of its 13 billion images are public and eligible for AI licensing,” Business Insider reported.

Users suing include a mother of a minor whose biometric data was collected and a professional photographer in Illinois who should have been protected by one of the country’s strongest biometric privacy laws.

So far, Photobucket has confirmed that at least one “alarmed” Illinois user’s data may have already been sold to train AI. The lawsuit alleged that most users eligible to join the class action likely similarly only learned of the “conduct long after the date that Photobucket began selling, licensing, and/or otherwise disclosing Class Members’ biometric data to third parties.”
