Author name: Kelly Newman


On Altman’s Interview With Theo Von

Sam Altman talked recently to Theo Von.

Theo is genuinely engaging and curious throughout. This made me want to consider listening to his podcast more. I’d love to hang. He seems like a great dude.

The problem is that his curiosity has been redirected away from the places it would matter most – the Altman strategy of acting as if the biggest concerns, risks and problems flat out don’t exist successfully tricks Theo into not noticing them at all, and there are plenty of other things for him to focus on, so he does exactly that.

Meanwhile, Altman gets away with more of this ‘gentle singularity’ lie without using that term, letting it graduate to a background assumption. Dwarkesh would never.

Quotes are all from Altman.

Sam Altman: But also [kids born a few years ago] will never know a world where products and services aren’t way smarter than them and super capable, they can just do whatever you need.

Thank you, sir. Now actually take that to heart and consider the implications. It goes way beyond ‘maybe college isn’t a great plan.’

Sam Altman: The kids will be fine. I’m worried about the parents.

Why do you think the kids will be fine? Because they’re used to it? So it’s fine?

This is just a new tool that exists in the tool chain.

A new tool that is smarter than you are and super capable? Your words, sir.

No one knows what happens next.

True that. Can you please take your own statements seriously?

How long until you can make an AI CEO for OpenAI? Probably not that long.

No, I think it’s awesome, I’m for sure going to figure out something else to do.

Again, please, I am begging you, take your own statements seriously.

There will be some jobs that totally go away. But mostly I think we will rely on the fact that people’s desire for more stuff for better experiences for you know a higher social status or whatever seems basically limitless, human creativity seems basically limitless and human desire to like be useful to each other and to connect.

And AI will be better at doing all of that. Yet Altman goes through all the past falsified predictions as if they apply here. He keeps going on and on as if the world he’s talking about is a bunch of humans with access to cool tools, except by his own construction those tools can function as OpenAI’s CEO and are smarter than people. It is all so absurd.

What people really want is the agency to co-create the future together.

Highly plausible this is important to people. I don’t see any plan for giving it to them? The solution here is redistribution of a large percentage of world compute, but even if you pull that off under ideal circumstances no, that does not do it.

I haven’t heard any [software engineer] say their job lacks meaning [due to AI]. And I’m hopeful at least for a long time, you know, 100 years, who knows? But I’m hopeful that’s what it’ll feel like with AI is even if we’re asking it to solve huge problems for us. Even if we tell it to go develop a cure for cancer there will still be things to do in that process that feel valuable to a human.

Well, sure, not at this capability level. Where is this hope coming from that it would continue for 100 years? Why does one predict the other? What will be the steps that humans will meaningfully do?

We are going to find a way in our own telling of the story to feel like the main characters.

I think the actual plan is for the AI to lie to us? And for us to lie to ourselves? We’ll set it up so we have this idea that we matter, that we are important, and that will be fine? I disagree that this would be fine.

Altman discusses the parallel to discovering that Earth is not the center of the solar system, and the solar system is not the center of the galaxy, and so on, little blue dot. Well sure, but that wasn’t all that load bearing, we’re still the center of our own universes, and if there’s no other life out there we’re the only place that matters. This is very different.

Theo asks what Altman’s fears are about AI. Altman responds with a case where he couldn’t do something and GPT-5 could do it. But then he went on with his day. His second answer is impact on user mental health with heavy usage, which is a real concern and I’m glad he’s scared about that.

And then… that’s it. That’s what scares you, Altman? There’s nothing else you want to share with the rest of us? Nothing about loss of control issues, nothing about existential risks, and so on? I sure as hell hope that he is lying. I do think he is?

When asked about a legal framework for AI, Altman asks for AI privilege, sees this as urgent, and there is absolutely nothing else he thinks is worth mentioning that requires the law to adjust.

The last few months have felt very fast.

Theo then introduces Yoshua Bengio into the conversation, bringing up deception and sycophancy and neurolese.

We think it’s going to be great. There’s clearly real risks. It kind of feels like you should be able to say something more than that. But in truth, I think all we know right now is that we have discovered, invented, whatever you want to call it, something extraordinary that is going to reshape the course of human history. Dear God, man. But if you don’t know, we don’t know.

Well, of course. I mean, I think no one can predict the future. Like human society is very complex. This is an amazing new technology. Maybe a less dramatic example than the atomic bomb is when they discovered the transistor a few years later.

Yes, we can all agree we don’t know. We get a lot of good attitude, the missing mood is present, but it doesn’t cash out in the missing concerns. ‘There’s clearly real risks,’ but in context that seems to apply to things like jobs and meaning and distribution.

There’s no time in human history at the beginning of the century when the people ever knew what the end of the century was going to be like. Yeah. So maybe it’s I do think it goes faster and faster each century.

The first half of this seems false for quite a lot of times and places? Sure, you don’t know how the fortunes of war might go, but for most of human history ‘100 years from now looks a lot like today’ was a very safe bet. ‘Nothing ever happens’ (other than cycling wars and famines and plagues and so on) did very well. But yes, in 1800 or 1900 or 2000 you would have remarkably little idea.

It certainly feels like [there is a race between companies.]

Theo equates this race to Formula 1 and asks what the race is for. AGI? ASI? Altman says benchmarks are saturated and it’s all about what you get out of the models, but we are headed for some milestone.

Maybe it’s a system that is capable of doing its own AI research. Maybe it’s a system that is smarter than all of humans put together… some finish line we are going to cross… maybe you call that superintelligence. I don’t have a finish line in mind.

Yeah, those do seem like important things that represent effective ‘finish lines.’

I assume that what will happen, like with every other kind of technology, is we’ll realize there’s this one thing that the tool’s way better than us at. Now, we get to go solve some other problems.

NO NO NO NO NO! That is not what happens! The whole idea is this thing becomes better at solving all the problems, or at least a rapidly growing portion of all problems. He mentions this possibility shortly thereafter but says he doesn’t think ‘the simplistic thing works.’ The ‘simplistic thing’ will be us, the humans.

You say whatever you want. It happens, and you figure out amazing new things to build for the next generation and the next.

Please take this seriously, consider the implications of what you are saying and solve for the equilibrium or what happens right away, come on man. The world doesn’t sit around acting normal while you get to implement some cool idea for an app.

Theo asks, would regular humans vote to keep AI or stop AI? Altman says users would say go ahead and non-users would say stop. Theo predicts most people would say stop it. My understanding is Theo is right for the West, but not for the East.

Altman asks Theo what he is afraid of with AI, Theo seems worried about They Took Our Jobs and loss of economic survival and also meaning, that we will be left to play zero-sum games of extraction. With Theo staying in Altman’s frame, Altman can pivot back to humans liking to be creative and help each other and so on and pour on the hopium that we’ll all get to be creatives.

Altman says, you get less enjoyment from a ghost robotic kitchen setup, something is missing, you’d rather get the food from the dude who has been making it. To which I’d reply that most of this is that the authentic dude right now makes a better product, but that ten years from now the robot will make a better product than the authentic dude. And yeah, there will still be some value you get from patronizing the dude, but mostly what you want is the food and thus will the market speak, and then we’ve got Waymos with GLP-1 dart guns and burrito cannons for unknown reasons when what you actually get is a highly cheap and efficient delicious food supply chain that I plan on enjoying very much thank you.

We realized actually this is not helping me be my best, you know, like doing the equivalent of getting the like burrito cannon into my mouth on my phone at night, like that’s not making me long-term happy, right? And that’s not helping me like really accomplish my true goals in life. And I think if AI does that, people will reject it.

I mean, I think a thing that efficiently gives you burritos does help you with your goals, and people will love it. If it’s violently shooting burritos into your face unprompted at random times, then no. But yeah, it’s not going to work like that.

However, if ChatGPT really helps you to figure out what your true goals in life are and then accomplish those, you know, it says, “Hey, you’ve said you want to be a better father or a better, you know, you want to be in better shape or you, you know, want to like grow your business.”

I refer Altman to the parable of the whispering earring, but also this idea that the AI will remain a tool that helps individual humans accomplish their normal goals in normal ways only smarter is a fairy tale. Altman is providing hopium via the implicit overall static structure of the world, then assuming your personal AI is aligned to your goals and well being, and then making additional generous assumptions, and then saying that the result might turn out well.

On the moratorium on all AI regulations that was stripped from the BBB:

There has to be some sort of regulation at some point. I think it’d be a mistake to let each state do this kind of crazy patchwork of stuff. I think like one countrywide approach would be much easier for us to be able to innovate and still have some guardrails, but there have to be guardrails.

The proposal was, for all practical purposes, to have no guardrails. Lawmakers will say ‘it would be better to have one federal regulation than fifty state regulations’ and then ban the fifty state regulations but have zero federal regulation.

The concerns [politicians come to us with] are like, what is this going to do to our kids? Are they going to stop learning? Is this going to spread fake information? Is this going to influence elections? But we’ve never had ‘you can’t say bad things about the president.’

That’s good to hear versus the alternative, better those real concerns than an attempt to put a finger on the scale, although of course these are not the important concerns.

We could [make it favor one candidate over another]. We totally could. I mean, we don’t, but we totally could. Yeah… a lot of people do test it and we need to be held to a very high standard here… we can tell.

As Altman points out, it would be easy to tell if they made the model biased. And I think doing it ‘cleanly’ is not so simple, as Musk has found out. Try to put your finger on the scale and you get a lot of side effects and it is all likely deeply embarrassing.

Maybe we build a big Dyson sphere on the solar system.

I’m noting that because I’m tired of people treating ‘maybe we build a Dyson sphere’ as a statement worthy of mockery and dismissal of a person’s perspective. Please note that Altman thinks this is very possibly the future.

You have to be both [excited and scared]. I don’t think anyone could honestly look at the trajectory humanity is on and not feel both excited and scared.

Being chased by a goose, asking scared of what. But yes.

I think people get blinded by ambition. I think people get blinded by competition. I think people get caught up like very well-meaning people can get caught up in very negative incentives. Negative for society as a whole. By the way, I include us in this.

I think people come in with good intentions. They clearly sometimes do bad stuff.

I think Palantir and Peter Thiel do a lot of great stuff… We’re very close friends…. His brain just works differently… I’m grateful he exists because he thinks the things no one else does.

I think we really need to prioritize the right to privacy.

I’m skipping over a lot of interactions that cover other topics.

Altman is a great guest, engaging, fun to talk to, shares a lot of interesting thoughts and real insights, except it is all in the service of painting a picture that excludes the biggest concerns. I don’t think the deflections I care about most (as in, flat out ignoring them hoping they will go away) are the top item on his agenda in such an interview, or in general, but such deflections are central to the overall strategy.

The problem is that those concerns are part of reality.

As in, something that, when you stop looking at it, doesn’t go away.

If you are interviewing Altman in the future, you want to come in with Theo’s curiosity and friendly attitude. You want to start by letting Altman describe all the things AI will be able to do. That part is great.

Except also do your homework, so you are ready when Altman gives answers that don’t make sense, answers that don’t take into account what Altman himself says AI will be able to do. Notice the negative space, the things not being mentioned, and point them out. Not as a gotcha or an accusation, but to not let him get away with ignoring it.

At minimum, you have to point out that the discussion is making one hell of a set of assumptions, ask Altman if he agrees that those assumptions are being made, and check how confident he is that those assumptions are true, and why, even if that isn’t going to be your focus. Get the crucial part on the record. If you ask in a friendly way I don’t think there is a reasonable way to dodge answering.




At $250 million, top AI salaries dwarf those of the Manhattan Project and the Space Race


A 24-year-old AI researcher will earn 327x what Oppenheimer made while developing the atomic bomb.

Silicon Valley’s AI talent war just reached a compensation milestone that makes even the most legendary scientific achievements of the past look financially modest. When Meta recently offered AI researcher Matt Deitke $250 million over four years (an average of $62.5 million per year)—with potentially $100 million in the first year alone—it shattered every historical precedent for scientific and technical compensation we can find on record. That includes salaries during the development of major scientific milestones of the 20th century.

The New York Times reported that Deitke had cofounded a startup called Vercept and previously led the development of Molmo, a multimodal AI system, at the Allen Institute for Artificial Intelligence. His expertise in systems that juggle images, sounds, and text—exactly the kind of technology Meta wants to build—made him a prime target for recruitment. But he’s not alone: Meta CEO Mark Zuckerberg reportedly also offered an unnamed AI engineer $1 billion in compensation to be paid out over several years. What’s going on?

These astronomical sums reflect what tech companies believe is at stake: a race to create artificial general intelligence (AGI) or superintelligence—machines capable of performing intellectual tasks at or beyond the human level. Meta, Google, OpenAI, and others are betting that whoever achieves this breakthrough first could dominate markets worth trillions. Whether this vision is realistic or merely Silicon Valley hype, it’s driving compensation to unprecedented levels.

To put these salaries in a historical perspective: J. Robert Oppenheimer, who led the Manhattan Project that ended World War II, earned approximately $10,000 per year in 1943. Adjusted for inflation using the US Government’s CPI Inflation Calculator, that’s about $190,865 in today’s dollars—roughly what a senior software engineer makes today. The 24-year-old Deitke, who recently dropped out of a PhD program, will earn approximately 327 times what Oppenheimer made while developing the atomic bomb.
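The 327x figure follows directly from the numbers above. A minimal sketch of the arithmetic, using only the article’s own figures (the inflation-adjusted value is the article’s CPI-calculator result, not recomputed here):

```python
# Salary comparison, using the article's own figures.
oppenheimer_1943 = 10_000      # Oppenheimer's 1943 annual salary
oppenheimer_2025 = 190_865     # the article's CPI-adjusted equivalent

deitke_total = 250_000_000     # Meta's four-year offer to Deitke
deitke_per_year = deitke_total / 4  # $62.5 million per year

multiple = deitke_per_year / oppenheimer_2025
print(f"${deitke_per_year:,.0f} per year is about {multiple:.0f}x "
      f"Oppenheimer's inflation-adjusted salary")
# about 327x
```

The same pattern (nominal salary times a CPI multiplier, then a ratio) underlies every historical comparison in this piece.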

Many top athletes can’t compete with these numbers. The New York Times noted that Steph Curry’s most recent four-year contract with the Golden State Warriors was $35 million less than Deitke’s Meta deal (although soccer superstar Cristiano Ronaldo will make $275 million this year as the highest-paid professional athlete in the world).  The comparison prompted observers to call this an “NBA-style” talent market—except the AI researchers are making more than NBA stars.

Racing toward “superintelligence”

Mark Zuckerberg recently told investors that Meta plans to continue throwing money at AI talent “because we have conviction that superintelligence is going to improve every aspect of what we do.” In a recent open letter, he described superintelligent AI as technology that would “begin an exciting new era of individual empowerment,” despite declining to define what superintelligence actually is.

This vision explains why companies treat AI researchers like irreplaceable assets rather than well-compensated professionals. If these companies are correct, the first to achieve artificial general intelligence or superintelligence won’t just have a better product—they’ll have technology that could invent endless new products or automate away millions of knowledge-worker jobs and transform the global economy. The company that controls that kind of technology could become the richest company in history by far.

So perhaps it’s not surprising that even the highest salaries of employees from the early tech era pale in comparison to today’s AI researcher salaries. Thomas Watson Sr., IBM’s legendary CEO, received $517,221 in 1941—the third-highest salary in America at the time (about $11.8 million in 2025 dollars). The modern AI researcher’s package represents more than five times Watson’s peak compensation, despite Watson building one of the 20th century’s most dominant technology companies.

The contrast becomes even more stark when considering the collaborative nature of past scientific achievements. During Bell Labs’ golden age of innovation—when researchers developed the transistor, information theory, and other foundational technologies—the lab’s director made about 12 times what the lowest-paid worker earned.  Meanwhile, Claude Shannon, who created information theory at Bell Labs in 1948, worked on a standard professional salary while creating the mathematical foundation for all modern communication.

The “Traitorous Eight” who left William Shockley to found Fairchild Semiconductor—the company that essentially birthed Silicon Valley—split ownership of just 800 shares out of 1,325 total when they started. Their seed funding of $1.38 million (about $16.1 million today) for the entire company is a fraction of what a single AI researcher now commands.

Even Space Race salaries were far cheaper

The Apollo program offers another striking comparison. Neil Armstrong, the first human to walk on the moon, earned about $27,000 annually—roughly $244,639 in today’s money. His crewmates Buzz Aldrin and Michael Collins made even less, earning the equivalent of $168,737 and $155,373, respectively, in today’s dollars. Current NASA astronauts earn between $104,898 and $161,141 per year. Meta’s AI researcher will make more in three days than Armstrong made in a year for taking “one giant leap for mankind.”

The engineers who designed the rockets and mission control systems for the Apollo program also earned modest salaries by modern standards. A 1970 NASA technical report provides a window into these earnings by analyzing salary data for the entire engineering profession. The report, which used data from the Engineering Manpower Commission, noted that these industry-wide salary curves corresponded directly to the government’s General Schedule (GS) pay scale on which NASA’s own employees were paid.

According to a chart in the 1970 report, a newly graduated engineer in 1966 started with an annual salary of between $8,500 and $10,000 (about $84,622 to $99,555 today). A typical engineer with a decade of experience earned around $17,000 annually ($169,244 today). Even the most elite, top-performing engineers with 20 years of experience peaked at a salary of around $278,000 per year in today’s dollars—a sum that a top AI researcher like Deitke can now earn in just a few days.

Why the AI talent market is different


This isn’t the first time technical talent has commanded premium prices. In 2012, after three University of Toronto academics published AI research, they auctioned themselves to Google for $44 million (about $62.6 million in today’s dollars). By 2014, a Microsoft executive was comparing AI researcher salaries to NFL quarterback contracts. But today’s numbers dwarf even those precedents.

Several factors explain this unprecedented compensation explosion. We’re in a new realm of industrial wealth concentration unseen since the Gilded Age of the late 19th century. Unlike previous scientific endeavors, today’s AI race features multiple companies with trillion-dollar valuations competing for an extremely limited talent pool. Only a small number of researchers have the specific expertise needed to work on the most capable AI systems, particularly in areas like multimodal AI, which Deitke specializes in. And AI hype is currently off the charts as “the next big thing” in technology.

The economics also differ fundamentally from past projects. The Manhattan Project cost $1.9 billion total (about $34.4 billion adjusted for inflation), while Meta alone plans to spend tens of billions annually on AI infrastructure. For a company approaching a $2 trillion market cap, the potential payoff from achieving AGI first dwarfs Deitke’s compensation package.

One executive put it bluntly to The New York Times: “If I’m Zuck and I’m spending $80 billion in one year on capital expenditures alone, is it worth kicking in another $5 billion or more to acquire a truly world-class team to bring the company to the next level? The answer is obviously yes.”

Young researchers maintain private chat groups on Slack and Discord to share offer details and negotiation strategies. Some hire unofficial agents. Companies not only offer massive cash and stock packages but also computing resources—the NYT reported that some potential hires were told they would be allotted 30,000 GPUs, the specialized chips that power AI development.

Also, tech companies believe they’re engaged in an arms race where the winner could reshape civilization. Unlike the Manhattan Project or Apollo program, which had specific, limited goals, the race for artificial general intelligence ostensibly has no ceiling. A machine that can match human intelligence could theoretically improve itself, creating what researchers call an “intelligence explosion” that could potentially offer cascading discoveries—if it actually comes to pass.

Whether these companies are building humanity’s ultimate labor replacement technology or merely chasing hype remains an open question, but we’ve certainly traveled a long way from the $8 per diem that Neil Armstrong received for his moon mission—about $70.51 in today’s dollars—before deductions for the “accommodations” NASA provided on the spacecraft. After Deitke accepted Meta’s offer, Vercept co-founder Kiana Ehsani joked on social media, “We look forward to joining Matt on his private island next year.”


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



Ukraine rescues soldier via drone delivery of complete e-bike

Details from a frontline war zone are almost impossible to verify, but the brigade has shared plenty of footage, including shots of the drone lifting the bike and a soldier riding it back to safety along a treeline. (Both sides are now making widespread use of e-bikes and motorcycles for quick infantry assaults after three years of drone warfare have wiped out many of the traditional armored vehicles.)


The drone command center that ran the operation.

In their telling, a soldier with the callsign “Tankist” was holding a frontline position that came under attack, and a number of his comrades were killed. Tankist found himself cut off from safety and had to hold the position alone for several days.

To retrieve him, brigade staff devised a plan to deliver an e-bike via heavy bomber drone. The first drone was shot down, while the second failed under the weight. But the third attempt was successful, and Tankist was finally able to zip back toward Ukrainian lines. (He apparently hit a landmine on the way and survived that, too, finishing the trip on a second delivered e-bike.)

Amazon, of course, has had “drone delivery” in view for years and is currently testing delivery drones at locations around the US, including Pontiac, Michigan; Phoenix, Arizona; and Waco, Texas.

But these drones will only deliver packages weighing under 5 lbs—an e-bike weighs considerably more.



Delta denies using AI to come up with inflated, personalized prices

Delta scandal highlights value of transparency

According to Delta, the company has “zero tolerance for discriminatory or predatory pricing” and only feeds its AI system aggregated data “to enhance our existing fare pricing processes.”

Rather than basing fare prices on customers’ personal information, Carter clarified that “all customers have access to the same fares and offers based on objective criteria provided by the customer such as origin and destination, advance purchase, length of stay, refundability, and travel experience selected.”

The AI use can result in higher or lower prices, but not personalized fares for different customers, Carter said. Instead, Delta plans to use AI pricing to “enhance market competitiveness and drive sales, benefiting both our customers and our business.”

Factors weighed by the AI system, Carter explained, include “customer demand for seats and purchasing data at an aggregated level, competitive offers and schedules, route performance, and cost of providing the service inclusive of jet fuel.” That could potentially mean a rival’s promotion or schedule change could trigger the AI system to lower prices to stay competitive, or it might increase prices based on rising fuel costs to help increase revenue or meet business goals.

“Given the tens of millions of fares and hundreds of thousands of routes for sale at any given time, the use of new technology like AI promises to streamline the process by which we analyze existing data and the speed and scale at which we can respond to changing market dynamics,” Carter wrote.

He explained the AI system helps Delta aggregate purchasing data for specific routes and flights, adapt to new market conditions, and factor in “thousands of variables simultaneously.” AI could also eventually be used to assist with crew scheduling, improve flight availability, or help reservation specialists answer complex questions or resolve disputes.

But “to reiterate, prices are not targeted to individual consumers,” Carter emphasized.

Delta further pointed out that the company does not require customers to log in to search for tickets, which means customers can search for flights without sharing any personal information.

For AI companies paying attention to the backlash, Delta’s scandal may hold a lesson about the value of transparency. Critics noted Delta was among the first to admit it was using AI to influence pricing, but the vague explanation on its earnings call stoked confusion over how the system worked, as Delta seemed to drag its feet amid calls from groups like Consumer Watchdog for more transparency.



Google releases Gemini 2.5 Deep Think for AI Ultra subscribers

Google is unleashing its most powerful Gemini model today, but you probably won’t be able to try it. After revealing Gemini 2.5 Deep Think at the I/O conference back in May, Google is making this AI available in the Gemini app. Deep Think is designed for the most complex queries, which means it uses more compute resources than other models. So it should come as no surprise that only those subscribing to Google’s $250 AI Ultra plan will be able to access it.

Deep Think is based on the same foundation as Gemini 2.5 Pro, but it increases the “thinking time” with greater parallel analysis. According to Google, Deep Think explores multiple approaches to a problem, even revisiting and remixing the various hypotheses it generates. This process helps it create a higher-quality output.

Deep Think benchmarks. Credit: Google

Like some other heavyweight Gemini tools, Deep Think takes several minutes to come up with an answer. This apparently makes the AI more adept at design aesthetics, scientific reasoning, and coding. Google has exposed Deep Think to the usual battery of benchmarks, showing that it surpasses the standard Gemini 2.5 Pro and competing models like OpenAI o3 and Grok 4. Deep Think shows a particularly large gain in Humanity’s Last Exam, a collection of 2,500 complex, multi-modal questions that cover more than 100 subjects. Other models top out at 20 or 25 percent, but Gemini 2.5 Deep Think managed a score of 34.8 percent.



Backpage survivors will receive $200M to cover medical, health bills

Survivors, or their representatives, must submit claims by February 2, 2026. To receive compensation, claims must include at least one document showing they “suffered monetary and/or behavioral health losses,” the claims form specified.

Documents can include emails, texts, screenshots, or advertisements. Claims may be further strengthened by sharing receipts from doctors’ visits, as well as medical or psychological exam results, summaries, or plans.

Documented medical expenses can include anything paid out of pocket, including dental expenses, tattoo removals, or even future medical costs referenced in doctor’s referrals, an FAQ noted. Similarly, counseling or therapy costs can be covered, as can treatment for substance use, alternative behavioral treatments, and future behavioral health plans recommended by a professional.

The FAQ also clarified that lost wages can be claimed, including any documentation of working overtime. Survivors only need to show approximate dates and times of abuse, since the DOJ said that it “appreciates that you may not remember exact number of hours you were trafficked during the relevant timeframe.” However, no future economic losses can be claimed, the FAQ said, and survivors will not be compensated for pain and suffering, despite the DOJ understanding that “your experience was painful and traumatic.”

Consulting the DOJ’s FAQ can help survivors assess the remission process. It noted that any “information regarding aliases, email addresses used, phone numbers, and trafficker names” can “be used to verify your eligibility.” Survivors are also asked to share any prior compensation already received from Backpage or through other lawsuits. To get answers to any additional questions, they can call the administrator in charge of dispensing claims, Epiq Global, at 1-888-859-9206 toll-free or at 1-971-316-5053 for international calls, the DOJ noted.

If you are in immediate danger or need resources because of a trafficking situation, please call 911 or the National Human Trafficking Hotline, toll-free at 1-888-373-7888.

Backpage survivors will receive $200M to cover medical, health bills Read More »

the-king-is-watching-condenses-kingdom-building-strategy-to-a-single-screen

The King Is Watching condenses kingdom-building strategy to a single screen

The kinds of randomized options you’ll have to choose between waves.

Those inter-wave upgrades are also randomized in each run, adding some roguelike unpredictability that means no two play sessions develop quite the same way. You have to be flexible, adapting to the blueprints and units you’re given, while being willing to abandon plans that are no longer feasible.

Between waves, you’ll often get the opportunity to buy emergency resource drops, useful upgrades that last through the whole run, or one-time spells that can strengthen your units or hinder the opposition. Figuring out the best potential upgrade paths requires a lot of trial and error, and you’ll need a little luck in drawing some of the more powerful upgrade options. While experience and skill can make things more manageable, some runs end up a lot more winnable than others.

Playing through successful waves also earns you tokens you can spend between runs on permanent upgrades (including crucial expansions of that 4×4 grid) and new selectable kings with their own unique gaze shapes and special powers. But even as these upgrades make it easier to succeed in successive runs, the game cranks up the “threat level” in turn to raise the enemy strength level (and the stakes) accordingly.

That kind of self-balancing means The King Is Watching always manages to feel engaging without coming off as totally unfair. And individual runs are zippy enough to not wear out their welcome; you can make it through a full run of two or three bosses in about 30 minutes or so, especially if you use the “fast forward” option to speed up the routine resource production between enemy waves.

Best of all, those 30 minutes are so dense with important decisions and split-second management of your kingly gaze that you never have time to feel bored. The King Is Watching perfectly rides the fine line between engrossing and overwhelming, making it perfect for quick, lunch-break-sized brain breaks that combine positional reasoning, reflexes, and strategic planning.

The King Is Watching condenses kingdom-building strategy to a single screen Read More »

substack’s-“nazi-problem”-won’t-go-away-after-push-notification-apology

Substack’s “Nazi problem” won’t go away after push notification apology


Substack may be legitimizing neo-Nazis as “thought leaders,” researcher warns.

After Substack shocked an unknown number of users by sending a push notification on Monday urging them to check out a Nazi blog featuring a swastika icon, the company quickly apologized for the “error,” tech columnist Taylor Lorenz reported.

“We discovered an error that caused some people to receive push notifications they should never have received,” Substack’s statement said. “In some cases, these notifications were extremely offensive or disturbing. This was a serious error, and we apologize for the distress it caused. We have taken the relevant system offline, diagnosed the issue, and are making changes to ensure it doesn’t happen again.”

Substack has long faced backlash for allowing users to share their “extreme views” on the platform, previously claiming that “censorship (including through demonetizing publications)” doesn’t make “the problem go away—in fact, it makes it worse,” Lorenz noted. But critics who have slammed Substack’s rationale revived their concerns this week, with some accusing Substack of promoting extreme content through features like its push alerts and “rising” lists, which flag popular newsletters and currently also include Nazi blogs.

Joshua Fisher-Birch, a terrorism analyst at the nonprofit Counter Extremism Project, has for years closely monitored Substack’s increasingly significant role in helping far-right movements spread propaganda online. He’s calling for more transparency and changes on the platform following the latest scandal.

In January, Fisher-Birch warned that neo-Nazi groups saw Donald Trump’s election “as a mix of positives and negatives but overall as an opportunity to enlarge their movement.” Since then, he’s documented at least one Telegram channel—which currently has over 12,500 subscribers and is affiliated with the white supremacist Active Club movement—launching an effort to expand its audience by creating accounts on Substack, TikTok, and X.

Of those accounts created in February, only the Substack account is still online, which Fisher-Birch suggested likely sends a message to Nazi groups that their Substack content is “less likely to be removed than other platforms.” At least one Terrorgram-adjacent white supremacist account that Fisher-Birch found in March 2024 confirmed that Substack was viewed as a backup to Telegram because posting content there was that much more reliable.

But perhaps even more appealing than Substack’s lack of content moderation, Fisher-Birch noted that these groups see Substack as “a legitimizing tool for sharing content” specifically because the Substack brand—which is widely used by independent journalists, top influencers, cherished content creators, and niche experts—can help them “convey the image of a thought leader.”

“Groups that want to recruit members or build a neo-fascist counter-culture see Substack as a way to get their message out,” Fisher-Birch told Ars.

That’s why Substack users deserve more than an apology for the push notification in light of the expanding white nationalist movements on its platform, Fisher-Birch said.

“Substack should explain how this was allowed to happen and what they will do to prevent it in the future,” Fisher-Birch said.

Ars asked Substack to provide more information on the number of users who got the push notification and on its general practices promoting “extreme” content through push alerts—attempting to find out if there was an intended audience for the “error” push notification. But Substack did not immediately respond to Ars’ request for comment.

Backlash over Substack’s defense of Nazi content

Back in 2023, Substack faced backlash from over 200 users after The Atlantic‘s Jonathan Katz exposed 16 newsletters featuring Nazi imagery in a piece confronting Substack’s “Nazi problem.” At the time, Lorenz noted that Substack co-founder Hamish McKenzie confirmed that the ethos of the platform was that “we don’t like Nazis either” and “we wish no-one held those views,” but since censorship (or even demonetization) won’t stop people from holding those views, Substack thought it would be a worse option to ban the content and hide those extreme views while movements grew in the shadows.

However, Fisher-Birch told Ars that Substack’s tolerance of Nazi content has essentially turned the platform into a “bullhorn” for right-wing extremists at a time when the FBI has warned that online hate speech is growing and increasingly fueling real-world hate crimes, the prevention of which is viewed as a national threat priority of the highest level.

Fisher-Birch recommended that Substack take the opportunity of its latest scandal to revisit its content guidelines “and forbid content that promotes hatred or discrimination based on race, ethnicity, national origin, religion, sex, gender identity, sexual orientation, age, disability, or medical condition.”

“If Substack changed its content guidelines and prohibited individuals and groups that promote white supremacism and neo-Nazism from using its platform, the extreme right would move to other online spaces,” Fisher-Birch said. “These right wing extremists would not be able to use the bullhorn of Substack. These ideas would still exist, and the people promoting them would still be around, but they wouldn’t be able to use Substack’s platform to do it.”

Fisher-Birch’s Counter Extremism Project has found that the best way for platforms to counter growing online Nazi movements is to provide “clear terms of service or community guidelines that prohibit individuals or groups that promote hatred or discrimination” and take “action when content is reported.” Platforms should also stay mindful of “changing trends in the online extremist landscape,” Fisher-Birch said.

Instead, Fisher-Birch noted, Substack appears to have failed to follow its own “limited community guidelines” and never removed a white supremacist blog promoting killing one’s enemies and violence against Jewish people, which CEP reported to the platform back in March 2024.

With Substack likely to remain tolerant of such content, CEP will continue monitoring how extremist groups use Substack to expand their movements, Fisher-Birch confirmed.

Favorite alternative platforms for Substack ex-pats

This week, some Substack users renewed calls to boycott the platform after the push notification. One popular writer who long ago abandoned Substack, A.R. Moxon, joined Fisher-Birch in pushing back on Substack’s defense of hosting Nazi content.

“This was ultimately my biggest problem with Substack: their notion that the answer to Nazi ideas is to amplify them so you can defeat them with better ideas presupposes that Nazi ideas have not yet been defeated on the merits, and that Nazis will ever recognize such a defeat,” Moxon posted on Bluesky.

Moxon has switched his independent blog, The Reframe, to Ghost, an open source Substack alternative that woos users by migrating their accounts for them and ditching Substack’s fees, which take a 10 percent cut of each Substacker’s transactions. That means users can easily switch platforms and make more money on Ghost, if they can attract as broad an audience as they had on Substack.

However, some users feel that Substack’s design, which can help more users discover their content, is the key reason they can’t switch, and Ghost acknowledges this.

“Getting traffic to an independent website can be challenging, of course,” Ghost’s website said. “But the rewards are that you physically own the content and you’re benefitting your own brand and business.”

But Gillian Brockell, a former Washington Post staff writer, attested on Bluesky that her subscriber rate is up since switching to Ghost. Perhaps the engagement boost Substack hypes isn’t real for everyone, but Brockell raised another theory: “Maybe because I’m less ashamed to share it? Maybe because more and more people refuse to subscribe to Substack? I dunno, but I’m happier.”

Another former Substack user, comics writer Greg Pak, posted on Bluesky that Buttondown served his newsletter needs. That platform charges lower fees than Substack and counters claims that Substack’s “network effects” work by pointing to “evidence” that Substack “readers tend to be less engaged and pay you less.”

Fisher-Birch suggested that Substack’s biggest rivals—which include Ghost and Buttondown, as well as Patreon, Medium, BeeHiiv, and even old-school platforms like Tumblr—could benefit if the backlash over the push notification forces more popular content creators to ditch Substack.

“Many people do not want to use a platform that does not remove content promoting neo-Nazism, and several creators have moved to other platforms,” Fisher-Birch said.

Imani Gandy, a journalist and lawyer behind a popular online account called “Angry Black Lady,” suggested on Bluesky that “Substack is not sustainable from a business perspective—and that’s before you get to the fact that they are now pushing Nazi content onto people’s phones. You either move now or move in shame later. Those are the two options really.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Substack’s “Nazi problem” won’t go away after push notification apology Read More »

“it’s-shocking”:-massive-raw-milk-outbreak-from-2023-finally-reported

“It’s shocking”: Massive raw milk outbreak from 2023 finally reported


The outbreak occurred in 2023–2024, but little information had been shared about it.

On October 20, 2023, health officials in the County of San Diego, California, put out a press release warning of a Salmonella outbreak linked to raw (unpasteurized) milk. Such an outbreak is not particularly surprising; the reason the vast majority of milk is pasteurized (heated briefly to kill germs) is that milk can easily pick up nasty pathogens in the farmyard that can cause severe illnesses, particularly in children. It’s the reason public health officials have long and strongly warned against consuming raw milk.

At the time of the press release, officials in San Diego County had identified nine residents who had been sickened in the outbreak. Of those nine, three were children, and all three children had been hospitalized.

On October 25, the county put out a second press release, reporting that the local case count had risen to 12, and the suspected culprit—raw milk and raw cream from Raw Farm LLC—had been recalled. The same day, Orange County’s health department put out its own press release, reporting seven cases among its residents, including one in a 1-year-old infant.

Both counties noted that the California Department of Public Health (CDPH), which had posted the recall notice, was working on the outbreak, too. But it doesn’t appear that CDPH ever followed up with its own press release about the outbreak. The CDPH did write social media posts related to the outbreak: One on October 26, 2023, announced the recall; a second on November 30, 2023, noted “a recent outbreak” of Salmonella cases from raw milk but linked to general information about the risks of raw milk; and a third on December 7, 2023, linked to general information again with no mention of the outbreak.

But that seems to be the extent of the information at the time. For anyone paying attention, it might have seemed like the end of the story. But according to the final outbreak investigation report—produced by CDPH and local health officials—the outbreak actually ran from September 2023 to March 2024, spanned five states, and sickened at least 171 people. That report was released last week, on July 24, 2025.

Shocking outbreak

The report was published in the Morbidity and Mortality Weekly Report, a journal run by the Centers for Disease Control and Prevention. The report describes the outbreak as “one of the largest foodborne outbreaks linked to raw milk in recent US history.” It also said that the state and local health departments had issued “extensive public messaging regarding this outbreak.”

According to the final data, 120 of the 171 people (70 percent) were children and teens, including 67 (39 percent) who were under the age of 5. At least 22 people were hospitalized; nearly all of them (82 percent) were children and teens. Fortunately, there were no deaths.
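The reported shares are consistent with the raw counts; a quick sanity check (rounding to whole percentages, as the report does):

```python
def pct(part, whole):
    """Share of `whole`, rounded to the nearest whole percent."""
    return round(100 * part / whole)

total = 171
print(pct(120, total))  # children and teens -> 70
print(pct(67, total))   # under age 5 -> 39
print(159 + 12)         # confirmed plus probable cases -> 171
```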

“I was just candidly shocked that there was an outbreak of 170 plus people because it had not been reported—at all,” Bill Marler, a personal injury lawyer specializing in food poisoning outbreaks, told Ars Technica in an interview. With the large number of cases, the high percentage of kids, and cases in multiple states, “it’s shocking that they never publicized it,” he said. “I mean, what’s the point?”

Ars Technica reached out to CDPH seeking answers about why there wasn’t more messaging and information about the outbreak during and soon after the investigation. At the time this story was published, several business days had passed and the department had told Ars in a follow-up email that it was still working on a response. Shortly after publication, CDPH provided a written statement, but it did not answer any specific questions, including why CDPH did not release its own press release about the state-wide outbreak or make case counts public during the investigation.

“CDPH takes its charge to protect public health seriously and works closely with all partners when a foodborne illness outbreak is identified,” the statement reads. It then referenced only the social media posts and the press releases from San Diego County and Orange County mentioned previously in this story as examples of its public messaging.

“This is pissing me off”

Marler, who represents around two dozen of the 171 people sickened in the outbreak, was one of the first people to get the full picture of the outbreak from California officials. In July of 2024, he obtained an interim report of the investigation from state health officials. At that point, they had documented at least 165 of the cases. And in December 2024, he got access to a preliminary report of the full investigation dated October 15, 2024, which identified the final 171 cases and appears to contain much of the data published in the MMWR, which has had its publication rate slowed amid the second Trump administration.

Getting that information from California officials was not easy, Marler told Ars. “There was one point in time where they wouldn’t give it to me. And I sent them a copy of a subpoena and I said, ‘you know, I’ve been working with public health for 32 years. I’m a big supporter of public health. I believe in your mission, but,’ I said, ‘this is pissing me off.'”

At that point, Marler knew that it was a multi-county outbreak and the CDPH and the state’s Department of Food and Agriculture were involved. He knew there was data. But it took threatening a subpoena to get it. “I’m like ‘OK, you don’t give it to me. I’m going to freaking drop a subpoena on you, and the court’s going to force you to give it.’ And they’re like, ‘OK, we’ll give it to you.'”

The October 15 state report he finally got a hold of provides a breakdown of the California cases. It reports that San Diego had a total of 25 cases (not just the 12 initially reported in the press releases), and Orange County had 19 (not just the seven). Most of the remaining cases were spread widely across California, spanning 35 local health departments. Only four of the 171 cases were outside of California—one each in New Mexico, Pennsylvania, Texas, and Washington. It’s unclear how people in these states were exposed, given that it’s against federal law to sell raw milk for human consumption across state lines. But two of the four people sickened outside of California specifically reported that they consumed dairy from Raw Farm without going to California.

Of the 171 cases, 159 were confirmed cases, which were defined as being confirmed using whole genome sequencing that linked the Salmonella strain causing a person’s infection to the outbreak strain also found in raw milk samples and a raw milk cheese sample from Raw Farm. The remaining 12 probable cases were people who had laboratory-confirmed Salmonella infections and also reported consuming Raw Farm products within seven days prior to falling ill.

“We own it”

In an interview with Ars Technica, the owner and founder of Raw Farm, Mark McAfee, disputed much of the information in the MMWR study and the October 2024 state report. He claimed that there were not 171 cases—only 19 people got sick, he said, presumably referring to the 19 cases collectively reported in the San Diego and Orange County press releases in October 2023.

“We own it. It’s ours. We’ve got these 19 people,” he told Ars.

But he said he did not believe the genomic data was accurate, arguing that the other 140 cases confirmed with genetic sequencing were not truly connected to his farm’s products. He also doubted that the outbreak spanned many months and stretched into early 2024. McAfee says that a single cow, purchased close to the start of the outbreak, had been the source of the Salmonella; once that animal was removed from the herd by the end of October 2023, subsequent testing was negative. He also flatly rejected the finding, reported in both the MMWR and the state report, that testing identified the Salmonella outbreak strain in the farm’s raw cheese.

Overall, McAfee downplayed the outbreak and claimed that raw milk has significant health benefits, such as being a cure for asthma—a common myth among raw milk advocates that has been debunked. He rejects the substantial number of scientific studies that have refuted the variety of unproven health claims made by raw-milk advocates. (You can read a thorough run-down of raw milk myths and the data refuting them in this post by the Food and Drug Administration.) McAfee claims that he and his company are “pioneers” and that public health experts who warn of the demonstrable health risks are simply stuck in the past.

Outbreak record

McAfee is a relatively high-profile raw milk advocate in California. For example, health secretary and anti-vaccine advocate Robert F. Kennedy Jr. is reportedly a customer. Amid an outbreak of H5N1 on his farm last year, McAfee sent Ars press material claiming that he “has been asked by the RFK transition team to apply for the position of ‘FDA advisor on Raw Milk Policy and Standards Development.’” But McAfee’s opinion of Kennedy has soured since then. In an interview with Ars last week, he said Kennedy “doesn’t have the guts” to loosen federal regulations on raw milk.

On his blog, Marler has a running tally of at least 11 outbreaks linked to the farm’s products.

In this outbreak, illnesses were caused by Salmonella Typhimurium, which generally causes diarrhea, fever, vomiting, and abdominal pain. In some severe cases, the infection can spread outside the gastrointestinal tract and into the blood, brain, bones, and joints, according to the CDC.

Marler noted that, for kids, infections can be severe. “Some of these kids who got sick were hospitalized for extended periods of time,” he said of some of the cases he is representing in litigation. And those hospitalizations can lead to hundreds of thousands of dollars in medical expenses, he said. “It’s not just tummy aches.”

This post has been updated to include the response from CDPH.

Photo of Beth Mole

Beth is Ars Technica’s Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.

“It’s shocking”: Massive raw milk outbreak from 2023 finally reported Read More »

trump-claims-europe-won’t-make-big-tech-pay-isps;-eu-says-it-still-might

Trump claims Europe won’t make Big Tech pay ISPs; EU says it still might

We asked the White House and European Commission for more details today and will update this article with any new information.

If the White House fact sheet’s reference to network usage fees has at least some truth to it, it may refer only to a tentative agreement between Trump and von der Leyen. The overall trade deal, which includes a 15 percent cap on tariffs for most EU exports into the US, is not final, as the European Commission pointed out in its announcement.

“The political agreement of 27 July 2025 is not legally binding,” a European Commission announcement said. “Beyond taking the immediate actions committed, the EU and the US will further negotiate, in line with their relevant internal procedures, to fully implement the political agreement.”

Big Tech hopeful that usage fees are dead

The European Union sought public input on network fees in 2023, drawing opposition from US tech companies and the Biden administration. While European ISPs pushed for new fees from online companies that accounted for over 5 percent of average peak traffic, the Biden administration said the plan “could reinforce the dominant market position of the largest operators… give operators a new bottleneck over customers, raise costs for end users,” and undermine net neutrality.

As tech industry analyst Dean Bubley wrote today, the White House statement on network usage fees is vague, and “the devil is in the detail here.” One thing to watch out for, he said, is whether Europe prohibits back-door methods of charging network usage fees, such as having the government regulate disputes over IP interconnection.

Bubley speculated that the EC might have “received a boatload of negative feedback” about network usage fees in a recent public consultation on the Digital Networks Act and that the trade deal provides “a nice, Trump-shaped excuse to boot out the whole idea, which in any case had huge internal flaws and contradictions—and specifically worked against the EU’s own objectives in having a robust AI industry, which I’d wager is seen in Brussels as much more important.”

Trump claims Europe won’t make Big Tech pay ISPs; EU says it still might Read More »

2025-polestar-3-drives-sporty,-looks-sharp,-can-be-a-little-annoying

2025 Polestar 3 drives sporty, looks sharp, can be a little annoying

Earlier this month, Ars took a look at Volvo’s latest electric vehicle. The EX90 proved to be a rather thoughtful Swedish take on the luxury SUV, albeit one that remains a rare sight on the road. But the EX90 is not the only recipe one can cook with the underlying ingredients. The ingredients in this case are from a platform called SPA2, and to extend the metaphor a bit, the kitchen is the Volvo factory in Ridgeville, South Carolina, which in addition to making a variety of midsize and larger Volvo cars for the US and European markets also produces the Polestar 3.

What’s fascinating is how different the end products are. That’s intentional: Polestar and Volvo wisely seek different customers rather than cannibalizing each other’s sales. As a new brand, Polestar carries few preconceptions, beyond the usual arguments that will rage in the comment section over just how much of the car is Swedish versus Chinese, and perhaps the occasional student of history who remembers the touring car racing team that went on to develop some bright blue special edition Volvo road cars, one of which for a while held a production car lap record around the Nürburgring Nordschleife.

That historical link is important. Polestar might now mentally slot into the space that Saab used to occupy in the last century as a refuge for customers with eclectic tastes, thanks to its clean exterior designs and techwear-inspired interiors. Once past the necessity of basic transportation, aesthetics are as good a reason as any when it comes to picking a particular car. Just thinking of Polestar as a brand that exemplifies modern Scandinavian design would be to sell it short, though. The driving dynamics are just too good.

Although it shares a platform with the big Volvo, the Polestar 3 is strictly a two-row SUV. Jonathan Gitlin

High praise

In fact, if there’s another brand out there that might be starting to pay attention to the way Polestars drive, it should be Porsche. Bold words indeed. Often, dual-motor EVs have one motor rated as more powerful than the other, or perhaps even of different designs. But the long-range dual motor Polestar 3 (MSRP: $73,400) is fitted with a pair of identical 241 hp (180 kW), 310 lb-ft (420 Nm) permanent magnet motors. The drive units are not entirely identical, however—at the rear, clutches on either side allow for true torque vectoring during cornering, as well as disconnecting the rear axle entirely for a more efficient mode.
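The clutch-per-side layout is what makes true vectoring possible: software can bias drive torque toward the outside rear wheel in a corner, or open both clutches to drop the axle entirely. A toy sketch of the idea follows; the linear gain and the clamp are invented for illustration and are not Polestar’s actual control logic:

```python
def rear_torque_split(total_torque_nm, steering_angle_deg, gain=0.01):
    """Toy torque-vectoring sketch (not Polestar's control strategy).

    Bias rear torque toward the outside wheel: a positive steering
    angle (right turn) sends more torque to the left (outside) wheel.
    """
    bias = max(-0.5, min(0.5, gain * steering_angle_deg))
    left = total_torque_nm * (0.5 + bias)
    right = total_torque_nm * (0.5 - bias)
    return left, right

print(rear_torque_split(420, 0))   # straight ahead: equal split (210.0, 210.0)
print(rear_torque_split(420, 30))  # right turn: more torque to the left wheel
```

A bias of ±0.5 corresponds to one clutch fully open; disconnecting both clutches for the efficient coasting mode the article mentions is outside this sketch.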

2025 Polestar 3 drives sporty, looks sharp, can be a little annoying Read More »

meta-pirated-and-seeded-porn-for-years-to-train-ai,-lawsuit-says

Meta pirated and seeded porn for years to train AI, lawsuit says

Evidence may prove Meta seeded more content

Seeking evidence to back its own copyright infringement claims, Strike 3 Holdings searched “its archive of recorded infringement captured by its VXN Scan and Cross Reference tools” and found 47 “IP addresses identified as owned by Facebook infringing its copyright protected Works.”

The data allegedly demonstrates a “continued unauthorized distribution” over “several years.” And Meta allegedly did not stop its seeding after Strike 3 Holdings confronted the tech giant with this evidence—despite the IP data supposedly being verified through an industry-leading provider called MaxMind.

Strike 3 Holdings shared a screenshot of MaxMind’s findings. Credit: via Strike 3 Holdings’ complaint

Meta also allegedly attempted to “conceal its BitTorrent activities” through “six Virtual Private Clouds” that formed a “stealth network” of “hidden IP addresses,” the lawsuit alleged, which seemingly implicated a “major third-party data center provider” as a partner in Meta’s piracy.

An analysis of these IP addresses allegedly found “data patterns that matched infringement patterns seen on Meta’s corporate IP Addresses” and included “evidence of other activity on the BitTorrent network including ebooks, movies, television shows, music, and software.” The seemingly non-human patterns documented on both sets of IP addresses suggest the data was for AI training and not for personal use, Strike 3 Holdings alleged.

Perhaps most shockingly, considering that a Meta employee joked “torrenting from a corporate laptop doesn’t feel right,” Strike 3 Holdings further alleged that it found “at least one residential IP address of a Meta employee” infringing its copyrighted works. That suggests Meta may have directed an employee to torrent pirated data outside the office to obscure the data trail.

The adult site operator did not identify the employee or the major data center discussed in its complaint, noting in a subsequent filing that it recognized the risks to Meta’s business and its employees’ privacy of sharing sensitive information.

In total, the company alleged that evidence shows “well over 100,000 unauthorized distribution transactions” linked to Meta’s corporate IPs. Strike 3 Holdings is hoping the evidence will lead a jury to find Meta liable for direct copyright infringement—or for secondary and vicarious copyright infringement if the jury finds that Meta successfully distanced itself by using the third-party data center or an employee’s home IP address.

“Meta has the right and ability to supervise and/or control its own corporate IP addresses, as well as the IP addresses hosted in off-infra data centers, and the acts of its employees and agents infringing Plaintiffs’ Works through their residential IPs by using Meta’s AI script to obtain content through BitTorrent,” the complaint said.

Meta pirated and seeded porn for years to train AI, lawsuit says Read More »