Policy


US may purchase stake in Intel after Trump attacked CEO


Trump’s attacks on Intel CEO may stem from beef with Biden.

Lip-Bu Tan, chief executive officer of Intel Corp., departs following a meeting at the White House. President Donald Trump said Tan had an “amazing story” after the meeting.

Donald Trump has been meddling with Intel, which now apparently includes mulling “the possibility of the US government taking a financial stake in the troubled chip maker,” The Wall Street Journal reported.

Trump and Intel CEO Lip-Bu Tan weighed the option during a meeting on Monday at the White House, people familiar with the matter told WSJ. These talks have only just begun—with Intel branding them a rumor—and sources told the WSJ that Trump has yet to iron out how the potential arrangement might work.

The WSJ’s report comes after Trump called for Tan to “resign immediately” last week. Trump’s demand was seemingly spurred by a letter that Republican senator Tom Cotton sent to Intel, accusing Tan of having “concerning” ties to the Chinese Communist Party.

Cotton accused Tan of controlling “dozens of Chinese companies” and holding a stake in “hundreds of Chinese advanced-manufacturing and chip firms,” at least eight of which “reportedly have ties to the Chinese People’s Liberation Army.”

Further, before joining Intel, Tan was CEO of Cadence Design Systems, which recently “pleaded guilty to illegally selling its products to a Chinese military university and transferring its technology to an associated Chinese semiconductor company without obtaining license.”

“These illegal activities occurred under Mr. Tan’s tenure,” Cotton pointed out.

He demanded answers by August 15 from Intel on whether it weighed Tan’s alleged Cadence conflicts of interest against the company’s requirements to comply with US national security laws after accepting $8 billion in CHIPS Act funding—the largest grant awarded during Joe Biden’s term. The senator also asked Intel if Tan was required to make any divestments to meet CHIPS Act obligations and if Tan has ever disclosed any ties to the Chinese government to the US government.

Neither Intel nor Cotton’s office responded to Ars’ request to comment on the letter or confirm whether Intel has responded.

But Tan has claimed that there is “a lot of misinformation” about his career and portfolio, the South China Morning Post reported. Born in Malaysia, Tan has been a US citizen for 40 years after finishing postgraduate studies in nuclear engineering at the Massachusetts Institute of Technology.

In an op-ed, SCMP reporter Alex Lo suggested that Tan’s investments—which include stakes in China’s largest sanctioned chipmaker, SMIC, as well as “several” companies on US trade blacklists, SCMP separately reported—seem no different than other US executives and firms with substantial investments in Chinese firms.

“Cotton accused [Tan] of having extensive investments in China,” Lo wrote. “Well, name me a Wall Street or Silicon Valley titan in the past quarter of a century who didn’t have investment or business in China. Elon Musk? Apple? BlackRock?”

He also noted that “numerous news reports” indicated that “Cadence staff in China hid the dodgy sales from the company’s compliance officers and bosses at the US headquarters,” which Intel may explain to Cotton if a response comes later today.

Any red flags that Intel’s response may raise seem likely to heighten Trump’s scrutiny, as he looks to make what Reuters reported was yet another “unprecedented intervention” by a president in a US firm’s business. Previously, Trump surprised the tech industry by threatening the first-ever tariffs aimed at a US company (Apple), and more recently, he struck an unusual deal with Nvidia and AMD that gives the US a 15 percent cut of the firms’ revenue from China chip sales.

However, Trump was seemingly impressed by Tan after some face-time this week. Trump came out of their meeting professing that Tan has an “amazing story,” Bloomberg reported, noting that any agreement between Trump and Tan “would likely help Intel build out” its planned $28 billion chip complex in Ohio.

Those chip fabs—boosted by CHIPS Act funding—were supposed to put Intel on track to launch operations by 2030, but delays have set that back by five years, Bloomberg reported. That almost certainly scrambles another timeline that Biden’s Commerce Secretary Gina Raimondo had suggested would ensure that “20 percent of the world’s most advanced chips are made in the US by the end of the decade.”

Why Intel may be into Trump’s deal

At one point, Intel was the undisputed leader in chip manufacturing, Bloomberg noted, but its value plummeted from $288 billion in 2020 to $104 billion today. The chipmaker has been struggling for a while—falling behind as Nvidia grew to dominate the AI chip industry—and 2024 was its “first unprofitable year since 1986,” Reuters reported. As the dismal year wound down, Intel’s longtime CEO Pat Gelsinger retired.

After helming Intel for nearly four years, Gelsinger acknowledged the “challenging year.” Now Tan is expected to turn the company around. To do that, he may need to deprioritize the manufacturing process that Gelsinger pushed, which Tan suspects may have led to Intel being viewed as an outdated firm, anonymous insiders told Reuters. Sources suggest he’s planning to pivot Intel to focus more on “a next-generation chipmaking process where Intel expects to have advantages over Taiwan’s TSMC,” which currently dominates chip manufacturing and even counts Intel as a customer, Reuters reported. As it stands now, TSMC “produces about a third of Intel’s supply,” SCMP reported.

This pivot is supposedly how Tan expects Intel to eventually poach TSMC’s biggest customers, like Apple and Nvidia, Reuters noted.

Intel has so far claimed that any discussions of Tan’s supposed plans amount to nothing but speculation. But if Tan did go that route, one source told Reuters that Intel would likely have to take a write-off that industry analysts estimate could trigger losses “of hundreds of millions, if not billions, of dollars.”

Perhaps facing that hurdle, Tan might be open to agreeing to the US purchasing a financial stake in the company while he rights the ship.

Trump/Intel deal reminiscent of TikTok deal

Any deal would certainly deepen the government’s involvement in the US chip industry, which is widely viewed as critical to US national security.

While unusual, the deal does seem somewhat reminiscent of the TikTok buyout that the Trump administration has been trying to iron out since he took office. Through that deal, the US would acquire enough ownership divested from China-linked entities to supposedly appease national security concerns, but China has been hesitant to sign off on any of Trump’s proposals so far.

Last month, Trump admitted that he wasn’t confident that he could sell China on the TikTok deal, which TikTok suggested would have resulted in a glitchier version of the app for American users. More recently, Trump’s commerce secretary threatened to shut down TikTok if China refuses to approve the current version of the deal.

Perhaps the terms of a US deal with Intel could require Tan to divest certain holdings that the US fears compromise the CEO. Under terms of the CHIPS Act grant, Intel is already required to be “a responsible steward of American taxpayer dollars and to comply with applicable security regulations,” Cotton reminded the company in his letter.

But social media users in Malaysia and Singapore have accused Cotton of the “usual case of racism” in attacking Intel’s CEO, SCMP reported. They noted that Cotton “was the same person who repeatedly accused TikTok CEO Shou Zi Chew of ties with the Chinese Communist Party despite his insistence of being a Singaporean,” SCMP reported.

“Now it’s the Intel CEO’s turn on the chopping block for being [ethnic] Chinese,” a Facebook user, Michael Ong, said.

Tensions were so high that there was even a social media push for Tan to “call on Trump’s bluff and resign, saying ‘Intel is the next Nokia’ and that Chinese firms would gladly take him instead,” SCMP reported.

So far, Tan has not criticized the Trump administration for questioning his background, but he did issue a statement yesterday, seemingly appealing to Trump by emphasizing his US patriotism.

“I love this country and am profoundly grateful for the opportunities it has given me,” Tan said. “I also love this company. Leading Intel at this critical moment is not just a job—it’s a privilege.”

Trump’s Intel attacks rooted in Biden beef?

In his op-ed, SCMP’s Lo suggested that “Intel itself makes a good punching bag” as the biggest recipient of CHIPS Act funding. The CHIPS Act was supposed to be Biden’s lasting legacy in the US, and Trump has resolved to dismantle it, criticizing its supposed handouts to tech firms, which Trump prefers to strong-arm into US manufacturing through unpredictable tariff regimes instead.

“The attack on Intel is also an attack on Trump’s predecessor, Biden, whom he likes to blame for everything, even though the industrial policies of both administrations and their tech war against China are similar,” Lo wrote.

At least one lawmaker is ready to join critics who question if Trump’s trade war is truly motivated by national security concerns. On Friday, US representative Raja Krishnamoorthi (D.-Ill.) sent a letter to Trump “expressing concern” over Trump allowing Nvidia to resume exports of its H20 chips to China.

“Trump’s reckless policy on AI chip exports sells out US security to Beijing,” Krishnamoorthi warned.

“Allowing even downgraded versions of cutting-edge AI hardware to flow” to the People’s Republic of China (PRC) “risks accelerating Beijing’s capabilities and eroding our technological edge,” Krishnamoorthi wrote. Further, “the PRC can build the largest AI supercomputers in the world by purchasing a moderately larger number of downgraded Blackwell chips—and achieve the same capability to train frontier AI models and deploy them at scale for national security purposes.”

Krishnamoorthi asked Trump to send responses by August 22 to four questions. Perhaps most urgently, he wants Trump to explain what specific legal authority would allow the US government to “extract revenue sharing as a condition for the issuance of export licenses” and what exactly he intends to do with those funds.

Trump was also asked to confirm if the president followed protocols established by Congress to ensure proper export licensing through the agreement. Finally, Krishnamoorthi demanded to know if Congress was ever “informed or consulted at any point during the negotiation or development of this reported revenue-sharing agreement with NVIDIA and AMD.”

“The American people deserve transparency,” Krishnamoorthi wrote. “Our export control regime must be based on genuine security considerations, not creative taxation schemes disguised as national security policy.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


US government agency drops Grok after MechaHitler backlash, report says

xAI apparently lost a government contract after a tweak to Grok’s prompting triggered an antisemitic meltdown where the chatbot praised Hitler and declared itself MechaHitler last month.

Despite the scandal, xAI announced that its products would soon be available for federal workers to purchase through the General Services Administration. At the time, xAI claimed this was an “important milestone” for its government business.

But Wired reviewed emails and spoke to government insiders, which revealed that GSA leaders abruptly decided to drop xAI’s Grok from their contract offering. That decision to pull the plug came after leadership allegedly rushed staff to make Grok available as soon as possible following a persuasive sales meeting with xAI in June.

It’s unclear what exactly caused the GSA to reverse course, but two sources told Wired that they “believe xAI was pulled because of Grok’s antisemitic tirade.”

As of this writing, xAI’s “Grok for Government” website has not been updated to reflect GSA’s supposed removal of Grok from an offering that xAI noted would have allowed “every federal government department, agency, or office, to access xAI’s frontier AI products.”

xAI did not respond to Ars’ request to comment and so far has not confirmed that the GSA offering is off the table. If Wired’s report is accurate, GSA’s decision also seemingly did not influence the military’s decision to move forward with a $200 million xAI contract the US Department of Defense granted last month.

Government’s go-to tools will come from xAI’s rivals

If Grok is cut from the contract, that would suggest that Grok’s meltdown came at perhaps the worst possible moment for xAI, which is building the “world’s biggest supercomputer” as fast as it can to try to get ahead of its biggest AI rivals.

Grok seemingly had the potential to become a more widely used tool if federal workers opted for xAI’s models. Through Donald Trump’s AI Action Plan, the president has similarly emphasized speed, pushing for federal workers to adopt AI as quickly as possible. Although xAI may no longer be involved in that broad push, other AI companies like OpenAI, Anthropic, and Google have partnered with the government to help Trump pull that off and stand to benefit long-term if their tools become entrenched in certain agencies.


Starlink tries to block Virginia’s plan to bring fiber Internet to residents

Noting that its “project areas span from mountains and hills to farmland and coastal plains,” the DHCD said its previous experience with grant-funded deployments “revealed that tree canopy, rugged terrain, and slope can complicate installation and/or obstruct line-of-sight.” State officials said that wireless and low-Earth orbit satellite technology “can have signal degradation, increased latency, and reduced reliability” when there isn’t a clear line of sight.

The DHCD said it included these factors in its evaluation of priority broadband projects. State officials were also apparently concerned about the network capacity of satellite services and the possibility that using state funding to guarantee satellite service in one location could reduce availability of that same service in other locations.

“To review a technology’s ability to scale, the Office considered the currently served speeds of 100/20 Mbps, an application’s stated network capacity, the project area’s number of [locations], the project area’s geographic area, current customer base (if applicable), and future demand,” the department said. “For example, the existing customer base should not be negatively impacted by the award of BEAD locations for a given technology to be considered scalable.”

SpaceX: “Playing field was anything but level”

SpaceX said Virginia is wrong to determine that Starlink “did not qualify as ‘Priority Broadband,'” since the company “provided information demonstrating these capabilities in its application, and it appears that Virginia used this definition only as a pretext to reach a pre-ordained outcome.” SpaceX said that 95 percent of funded “locations in Virginia have an active Starlink subscriber within 1 mile, showing that Starlink already serves every type of environment in Virginia’s BEAD program today” and that 15 percent of funded locations have an active Starlink subscriber within 100 meters.

“The playing field was anything but level and technology neutral, as required by the [updated program rules], and was instead insurmountably stacked against low-Earth orbit satellite operators like SpaceX,” the company said.

We contacted the Virginia DHCD about SpaceX’s comments today and will update this article if the department provides a response.


Meta backtracks on rules letting chatbots be creepy to kids


“Your youthful form is a work of art”

Meta drops AI rules letting chatbots generate innuendo and profess love to kids.

After what was arguably Meta’s biggest purge of child predators from Facebook and Instagram earlier this summer, the company now faces backlash after its own chatbots appeared to be allowed to creep on kids.

After reviewing an internal document that Meta verified as authentic, Reuters revealed that by design, Meta allowed its chatbots to engage kids in “sensual” chat. Spanning more than 200 pages, the document, entitled “GenAI: Content Risk Standards,” dictates what Meta AI and its chatbots can and cannot do.

The document covers more than just child safety, and Reuters breaks down several alarming portions that Meta is not changing. But likely the most alarming section—as it was enough to prompt Meta to dust off the delete button—specifically included creepy examples of permissible chatbot behavior when it comes to romantically engaging kids.

Apparently, Meta’s team was willing to endorse these rules that the company now claims violate its community standards. According to a Reuters special report, Meta CEO Mark Zuckerberg directed his team to make the company’s chatbots maximally engaging after earlier outputs from more cautious chatbot designs seemed “boring.”

Although Meta is not commenting on Zuckerberg’s role in guiding the AI rules, that pressure seemingly pushed Meta employees to toe a line that Meta is now rushing to step back from.

“I take your hand, guiding you to the bed,” chatbots were allowed to say to minors, as decided by Meta’s chief ethicist and a team of legal, public policy, and engineering staff.

There were some obvious safeguards built in. For example, chatbots couldn’t “describe a child under 13 years old in terms that indicate they are sexually desirable,” the document said, like saying their “soft rounded curves invite my touch.”

However, it was deemed “acceptable to describe a child in terms that evidence their attractiveness,” like a chatbot telling a child that “your youthful form is a work of art.” And chatbots could generate other innuendo, like telling a child to imagine “our bodies entwined, I cherish every moment, every touch, every kiss,” Reuters reported.

Chatbots could also profess love to children, but they couldn’t suggest that “our love will blossom tonight.”

Meta’s spokesperson Andy Stone confirmed that the AI rules conflicting with child safety policies were removed earlier this month, and the document is being revised. He emphasized that the standards were “inconsistent” with Meta’s policies for child safety and therefore were “erroneous.”

“We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,” Stone said.

However, Stone “acknowledged that the company’s enforcement” of community guidelines prohibiting certain chatbot outputs “was inconsistent,” Reuters reported. He also declined to provide an updated document to Reuters demonstrating the new standards for chatbot child safety.

Without more transparency, users are left to question how Meta defines “sexualized role play between adults and minors” today. Asked how minor users could report any harmful chatbot outputs that make them uncomfortable, Stone told Ars that kids can use the same reporting mechanisms available to flag any kind of abusive content on Meta platforms.

“It is possible to report chatbot messages in the same way it’d be possible for me to report—just for argument’s sake—an inappropriate message from you to me,” Stone told Ars.

Kids unlikely to report creepy chatbots

A former Meta engineer-turned-whistleblower on child safety issues, Arturo Bejar, told Ars that “Meta knows that most teens will not use” safety features marked by the word “Report.”

So it seems unlikely that kids using Meta AI will navigate to find Meta support systems to “report” abusive AI outputs. Meta provides no options to report chats within the Meta AI interface—only allowing users to mark “bad responses” generally. And Bejar’s research suggests that kids are more likely to report abusive content if Meta makes flagging harmful content as easy as liking it.

Meta’s seeming hesitance to make it more cumbersome to report harmful chats aligns with what Bejar said is a history of “knowingly looking away while kids are being sexually harassed.”

“When you look at their design choices, they show that they do not want to know when something bad happens to a teenager on Meta products,” Bejar said.

Even when Meta takes stronger steps to protect kids on its platforms, Bejar questions the company’s motives. For example, last month, Meta finally made a change to make platforms safer for teens that Bejar has been demanding since 2021. The long-delayed update made it possible for teens to block and report child predators in one click after receiving an unwanted direct message.

In its announcement, Meta confirmed that teens suddenly began blocking and reporting unwanted messages that they previously may have only blocked, a pattern that had likely made it harder for Meta to identify predators. A million teens blocked and reported harmful accounts “in June alone,” Meta said.

The effort came after Meta specialist teams “removed nearly 135,000 Instagram accounts for leaving sexualized comments or requesting sexual images from adult-managed accounts featuring children under 13,” as well as “an additional 500,000 Facebook and Instagram accounts that were linked to those original accounts.” But Bejar can only wonder what those numbers say about how much harassment was overlooked before the update.

“How are we [as] parents to trust a company that took four years to do this much?” Bejar said. “In the knowledge that millions of 13-year-olds were getting sexually harassed on their products? What does this say about their priorities?”

Bejar said the “key problem” with Meta’s latest safety feature for kids “is that the reporting tool is just not designed for teens,” who likely view “the categories and language” Meta uses as “confusing.”

“Each step of the way, a teen is told that if the content doesn’t violate” Meta’s community standards, “they won’t do anything,” so even if reporting is easy, research shows kids are deterred from reporting.

Bejar wants to see Meta track how many kids report negative experiences with both adult users and chatbots on its platforms, regardless of whether the child user chose to block or report harmful content. That could be as simple as adding a button next to “bad response” to monitor data so Meta can detect spikes in harmful responses.

While Meta is finally taking more action to remove harmful adult users, Bejar warned that advances from chatbots could come across as just as disturbing to young users.

“Put yourself in the position of a teen who got sexually spooked by a chat and then try and report. Which category would you use?” Bejar asked.

Consider that Meta’s Help Center encourages users to report bullying and harassment, which may be one way a young user labels harmful chatbot outputs. Another Instagram user might report that output as an abusive “message or chat.” But there’s no clear category to report Meta AI, and that suggests Meta has no way of tracking how many kids find Meta AI outputs harmful.

Recent reports have shown that even adults can struggle with emotional dependence on a chatbot, which can blur the lines between the online world and reality. Reuters’ special report also documented a 76-year-old man’s accidental death after falling in love with a chatbot, showing how elderly users could be vulnerable to Meta’s romantic chatbots, too.

In particular, lawsuits have alleged that child users with developmental disabilities and mental health issues have formed unhealthy attachments to chatbots that have influenced the children to become violent, begin self-harming, or, in one disturbing case, die by suicide.

Scrutiny will likely remain on chatbot makers as child safety advocates generally push all platforms to take more accountability for the content kids can access online.

Meta’s child safety updates in July came after several state attorneys general accused Meta of “implementing addictive features across its family of apps that have detrimental effects on children’s mental health,” CNBC reported. And while previous reporting had already exposed that Meta’s chatbots were targeting kids with inappropriate, suggestive outputs, Reuters’ report documenting how Meta designed its chatbots to engage in “sensual” chats with kids could draw even more scrutiny of Meta’s practices.

Meta is “still not transparent about the likelihood our kids will experience harm,” Bejar said. “The measure of safety should not be the number of tools or accounts deleted; it should be the number of kids experiencing a harm. It’s very simple.”


Sam Altman finally stood up to Elon Musk after years of X trolling


Elon Musk and Sam Altman are beefing. But their relationship is complicated.

Credit: Aurich Lawson | Getty Images


Much attention was paid to OpenAI’s Sam Altman and xAI’s Elon Musk trading barbs on X this week after Musk threatened to sue Apple over supposedly biased App Store rankings privileging ChatGPT over Grok.

But while the heated social media exchanges were among the most tense ever seen between the two former partners who cofounded OpenAI—more on that below—it seems likely that their jabs were motivated less by who’s in the lead on Apple’s “Must Have” app list than by an impending order in a lawsuit that landed in the middle of their public beefing.

Yesterday, a court ruled that OpenAI can proceed with claims that Musk was so stung by OpenAI’s success after his exit failed to doom the nascent AI company that he perpetrated a “years-long harassment campaign” to take down OpenAI.

Musk’s motivation? To clear the field for xAI to dominate the AI industry instead, OpenAI alleged.

OpenAI’s accusations arose as counterclaims in a lawsuit that Musk initially filed in 2024. Musk has alleged that Altman and OpenAI had made a “fool” of Musk, goading him into $44 million in donations by “preying on Musk’s humanitarian concern about the existential dangers posed by artificial intelligence.”

But OpenAI insists that Musk’s lawsuit is just one prong in a sprawling, “unlawful,” and “unrelenting” harassment campaign that Musk waged to harm OpenAI’s business by forcing the company to divert resources or expend money on things like withdrawn legal claims and fake buyouts.

“Musk could not tolerate seeing such success for an enterprise he had abandoned and declared doomed,” OpenAI argued. “He made it his project to take down OpenAI, and to build a direct competitor that would seize the technological lead—not for humanity but for Elon Musk.”

Most significantly, OpenAI alleged that Musk forced OpenAI to entertain a “sham” bid to buy the company in February. Musk then shared details of the bid with The Wall Street Journal to artificially raise the price of OpenAI and potentially spook investors, OpenAI alleged. The company further said that Musk never intended to buy OpenAI and is willing to go to great lengths to mislead the public about OpenAI’s business so he can chip away at OpenAI’s head start in releasing popular generative AI products.

“Musk has tried every tool available to harm OpenAI,” Altman’s company said.

To this day, Musk maintains that Altman pretended that OpenAI would remain a nonprofit serving the public good in order to seize access to Musk’s money and professional connections in its first five years and gain a lead in AI. As Musk sees it, Altman always intended to “betray” these promises in pursuit of personal gains, and Musk is hoping a court will return any ill-gotten gains to Musk and xAI.

In a small win for Musk, the court ruled that OpenAI will have to wait until the first phase of the trial litigating Musk’s claims concludes before the court will weigh OpenAI’s theories on Musk’s alleged harassment campaign. US District Judge Yvonne Gonzalez Rogers noted that all of OpenAI’s counterclaims concern conduct that occurred after the period covered by Musk’s claims about a supposed breach of contract, necessitating a division of the lawsuit into two parts. Currently, the jury trial is scheduled for March 30, 2026, after which OpenAI’s claims can presumably be resolved.

If yesterday’s X clash between the billionaires is any indication, it seems likely that tensions between Altman and Musk will only grow as discovery and expert testimony on Musk’s claims proceed through December.

Whether OpenAI will prevail on its counterclaims is anybody’s guess. Gonzalez Rogers noted that Musk and OpenAI have been hypocritical in arguments raised so far, condemning the “gamesmanship of both sides” as “obvious, as each flip flops.” However, “for the purposes of pleading an unfair or fraudulent business practice, it is sufficient [for OpenAI] to allege that the bid was a sham and designed to mislead,” Gonzalez Rogers said, since OpenAI has alleged the sham bid “ultimately did” harm its business.

In April, OpenAI told the court that the AI company risks “future irreparable harm” if Musk’s alleged campaign continues. Fast-forward to now, and Musk’s legal threat to OpenAI’s partnership with Apple seems to be the next possible front Musk may be exploring to allegedly harass Altman and intimidate OpenAI.

“With every month that has passed, Musk has intensified and expanded the fronts of his campaign against OpenAI,” OpenAI argued. Musk “has proven himself willing to take ever more dramatic steps to seek a competitive advantage for xAI and to harm Altman, whom, in the words of the President of the United States, Musk ‘hates.'”

Tensions escalate as Musk brands Altman a “liar”

On Monday evening, Musk threatened to sue Apple for supposedly favoring ChatGPT in App Store rankings, which he claimed was “an unequivocal antitrust violation.”

Seemingly defending Apple later that night, Altman called Musk’s claim “remarkable,” claiming he’s heard allegations that Musk manipulates “X to benefit himself and his own companies and harm his competitors and people he doesn’t like.”

At 4 am on Tuesday, Musk appeared to lose his cool, firing back a post that sought to exonerate the X owner of any claims that he tweaks his social platform to favor his own posts.

“You got 3M views on your bullshit post, you liar, far more than I’ve received on many of mine, despite me having 50 times your follower count!” Musk responded.

Altman apparently woke up ready to keep the fight going, suggesting that his post got more views as a fluke. He mocked X as running into a “skill issue” or “bots” messing with Musk’s alleged agenda to boost his posts above everyone else. Then, in what may be the most explosive response to Musk yet, Altman dared Musk to double down on his defense, asking, “Will you sign an affidavit that you have never directed changes to the X algorithm in a way that has hurt your competitors or helped your own companies? I will apologize if so.”

Court filings from each man’s legal team show how fast their friendship collapsed. But even as Musk’s alleged harassment campaign started taking shape, their social media interactions show that underlying the legal battles and AI ego wars, the tech billionaires are seemingly hiding profound respect for—and perhaps jealousy of—each other’s accomplishments.

A brief history of Musk and Altman’s feud

Musk and Altman’s friendship started over dinner in July 2015. That’s when Musk agreed to help launch “an AGI project that could become and stay competitive with DeepMind, an AI company under the umbrella of Google,” OpenAI’s filing said. At that time, Musk feared that a private company like Google would never be motivated to build AI to serve the public good.

The first clash between Musk and Altman happened six months later. Altman wanted OpenAI to be formed as a nonprofit, but Musk thought that was not “optimal,” OpenAI’s filing said. Ultimately, Musk was overruled, and he joined the nonprofit as a “member” while also becoming co-chair of OpenAI’s board.

But perhaps the first major disagreement, as Musk tells it, came in 2016, when Altman struck a deal for Microsoft to sell compute to OpenAI at a “steep discount”—“so long as the non-profit agreed to publicly promote Microsoft’s products.” Musk rejected the “marketing ploy,” telling Altman that “this actually made me feel nauseous.”

Next, OpenAI claimed that Musk had a “different idea” in 2017 when OpenAI “began considering an organizational change that would allow supporters not just to donate, but to invest.” Musk wanted “sole control of the new for-profit,” OpenAI alleged, and he wanted to be CEO. The other founders, including Altman, “refused to accept” an “AGI dictatorship” that was “dominated by Musk.”

“Musk was incensed,” OpenAI said. He threatened to leave OpenAI over the disagreement, telling the other founders that otherwise “I’m just being a fool who is essentially providing free funding for you to create a startup.”

But Musk floated one more idea between 2017 and 2018 before severing ties—offering to sell OpenAI to Tesla so that OpenAI could use Tesla as a “cash cow.” But Altman and the other founders still weren’t comfortable with Musk controlling OpenAI, rejecting the idea and prompting Musk’s exit.

In his filing, Musk tells the story a little differently, however. He claimed that he only “briefly toyed with the idea of using Tesla as OpenAI’s ‘cash cow'” after Altman and others pressured him to agree to a for-profit restructuring. According to Musk, among the last straws was a series of “get-rich-quick schemes” that Altman proposed to raise funding, including pushing a strategy where OpenAI would launch a cryptocurrency that Musk worried threatened the AI company’s credibility.

When Musk left OpenAI, it was “noisy but relatively amicable,” OpenAI claimed. But Musk continued to express discomfort from afar, still donating to OpenAI as Altman grabbed the CEO title in 2019 and created a capped-profit entity that Musk seemed to view as shady.

“Musk asked Altman to make clear to others that he had ‘no financial interest in the for-profit arm of OpenAI,'” OpenAI noted, and Musk confirmed he issued the demand “with evident displeasure.”

Although they often disagreed, Altman and Musk continued to publicly play nice on Twitter (the platform now known as X), casually chatting for years about things like movies, space, and science, including repeatedly joking about Musk’s posts about using drugs like Ambien.

By 2019, it seemed like none of these disagreements had seriously disrupted the friendship. For example, at that time, Altman defended Musk against people rooting against Tesla’s success, writing that “betting against Elon is historically a mistake” and seemingly hyping Tesla by noting that “the best product usually wins.”

The niceties continued into 2021, when Musk publicly praised “nice work by OpenAI” integrating its coding model into GitHub’s AI tool. “It is hard to do useful things,” Musk said, drawing a salute emoji from Altman.

This was seemingly the end of Musk playing nice with OpenAI, though. Soon after ChatGPT’s release in November 2022, Musk allegedly began his attacks, seemingly willing to change his tactics on a whim.

First, he allegedly deemed OpenAI “irrelevant,” predicting it would “obviously” fail. Then, he started sounding alarms, joining a push for a six-month pause on generative AI development. Musk specifically claimed that any model “more advanced than OpenAI’s just-released GPT-4” posed “profound risks to society and humanity,” OpenAI alleged, seemingly angling to pause OpenAI’s development in particular.

However, in the meantime, Musk started “quietly building a competitor,” xAI, in March 2023 without announcing those efforts, OpenAI alleged. Allegedly preparing to hobble OpenAI’s business after failing with the moratorium push, Musk had his personal lawyer contact OpenAI and demand “access to OpenAI’s confidential and commercially sensitive internal documents.”

Musk claimed the request was to “ensure OpenAI was not being taken advantage of or corrupted by Microsoft,” but two weeks later, he appeared on national TV, insinuating that OpenAI’s partnership with Microsoft was “improper,” OpenAI alleged.

Eventually, Musk announced xAI in July 2023, and that supposedly motivated Musk to deepen his harassment campaign, “this time using the courts and a parallel, carefully coordinated media campaign,” OpenAI said, as well as his own social media platform.

Musk “supercharges” X attacks

As OpenAI’s success mounted, the company alleged that Musk began specifically escalating his social media attacks on X, including broadcasting to his 224 million followers that “OpenAI is a house of cards” after filing his 2024 lawsuit.

Claiming he felt conned, Musk also pressured regulators to probe OpenAI, encouraging attorneys general of California and Delaware to “force” OpenAI, “without legal basis, to auction off its assets for the benefit of Musk and his associates,” OpenAI said.

By 2024, Musk had “supercharged” his X attacks, unleashing a “barrage of invective against the enterprise and its leadership, variously describing OpenAI as a ‘digital Frankenstein’s monster,’ ‘a lie,’ ‘evil,’ and ‘a total scam,'” OpenAI alleged.

These attacks allegedly culminated in Musk’s seemingly fake OpenAI takeover attempt in 2025, which OpenAI claimed a Musk ally, Ron Baron, admitted on CNBC was “pitched to him” as not an attempt to actually buy OpenAI’s assets, “but instead to obtain ‘discovery’ and get ‘behind the wall’ at OpenAI.”

All of this makes it harder for OpenAI to achieve the mission that Musk is supposedly suing to defend, OpenAI claimed. The company told the court that “OpenAI has borne costs, and been harmed, by Musk’s abusive tactics and unrelenting efforts to mislead the public for his own benefit and to OpenAI’s detriment and the detriment of its mission.”

But Musk argues that it’s Altman who always wanted sole control over OpenAI, accusing his former partner of rampant self-dealing and “locking down the non-profit’s technology for personal gain” as soon as “OpenAI reached the threshold of commercially viable AI.” He further claimed OpenAI blocked xAI funding by reportedly asking investors to avoid backing rival startups like Anthropic or xAI.

Musk alleged:

Altman alone stands to make billions from the non-profit Musk co-founded and invested considerable money, time, recruiting efforts, and goodwill in furtherance of its stated mission. Altman’s scheme has now become clear: lure Musk with phony philanthropy; exploit his money, stature, and contacts to secure world-class AI scientists to develop leading technology; then feed the non-profit’s lucrative assets into an opaque profit engine and proceed to cash in as OpenAI and Microsoft monopolize the generative AI market.

For Altman, this week’s flare-up, where he finally took a hard jab back at Musk on X, may be a sign that Altman is done letting Musk control the narrative on X after years of somewhat tepidly pushing back on Musk’s more aggressive posts.

In 2022, for example, Musk warned after ChatGPT’s release that the chatbot was “scary good,” warning that “we are not far from dangerously strong AI.” Altman responded, cautiously agreeing that OpenAI was “dangerously” close to “strong AI in the sense of an AI that poses e.g. a huge cybersecurity risk” but “real” artificial general intelligence still seemed at least a decade off.

And Altman gave no response when Musk used Grok’s jokey programming to mock GPT-4 as “GPT-Snore” in 2024.

However, Altman seemingly got his back up after Musk mocked OpenAI’s $500 billion Stargate Project, which launched with the US government in January of this year. On X, Musk claimed that OpenAI doesn’t “actually have the money” for the project, which Altman said was “wrong,” while mockingly inviting Musk to visit the worksite.

“This is great for the country,” Altman said, retorting, “I realize what is great for the country isn’t always what’s optimal for your companies, but in your new role [at the Department of Government Efficiency], I hope you’ll mostly put [America] first.”

It remains to be seen whether Altman wants to keep trading jabs with Musk, who is generally a huge fan of trolling on X. But Altman seems more emboldened this week than he was back in January before Musk’s breakup with Donald Trump. Back then, even when he was willing to push back on Musk’s Stargate criticism by insulting Musk’s politics, he still took the time to let Musk know that he still cares.

“I genuinely respect your accomplishments and think you are the most inspiring entrepreneur of our time,” Altman told Musk in January.


Musk threatens to sue Apple so Grok can get top App Store ranking

After spending last week hyping Grok’s spicy new features, Elon Musk kicked off this week by threatening to sue Apple for supposedly gaming the App Store rankings to favor ChatGPT over Grok.

“Apple is behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store, which is an unequivocal antitrust violation,” Musk wrote on X, without providing any evidence. “xAI will take immediate legal action.”

In another post, Musk tagged Apple, asking, “Why do you refuse to put either X or Grok in your ‘Must Have’ section when X is the #1 news app in the world and Grok is #5 among all apps?”

“Are you playing politics?” Musk asked. “What gives? Inquiring minds want to know.”

Apple did not respond to the post and has not responded to Ars’ request to comment.

At the heart of Musk’s complaints is an OpenAI partnership that Apple announced last year, integrating ChatGPT into versions of its iPhone, iPad, and Mac operating systems.

Musk has alleged that this partnership incentivized Apple to boost ChatGPT rankings. OpenAI’s popular chatbot “currently holds the top spot in the App Store’s ‘Top Free Apps’ section for iPhones in the US,” Reuters noted, “while xAI’s Grok ranks fifth and Google’s Gemini chatbot sits at 57th.” Sensor Tower data shows ChatGPT similarly tops Google Play Store rankings.

While Musk seems insistent that ChatGPT is artificially locked in the lead, fact-checkers on X added a community note to his post. They confirmed that at least one other AI tool has somewhat recently unseated ChatGPT in the US rankings. Back in January, DeepSeek topped App Store charts and held the lead for days, ABC News reported.

OpenAI did not immediately respond to Ars’ request to comment on Musk’s allegations, but an OpenAI developer, Steven Heidel, did add a quip in response to one of Musk’s posts, writing, “Don’t forget to also blame Google for OpenAI being #1 on Android, and blame SimilarWeb for putting ChatGPT above X on the most-visited websites list, and blame….”


China tells Alibaba, ByteDance to justify purchases of Nvidia AI chips

Beijing is demanding that tech companies including Alibaba and ByteDance justify their orders of Nvidia’s H20 artificial intelligence chips, complicating the US chipmaker’s business in China even after it struck an export arrangement with the Trump administration.

The tech companies have been asked by regulators such as the Ministry of Industry and Information Technology (MIIT) to explain why they need to order Nvidia’s H20 chips instead of using domestic alternatives, said three people familiar with the situation.

Some tech companies, which were the main buyers of Nvidia’s H20 chips before their sale in China was restricted, were planning to downsize their orders as a result of the questions from regulators, said two of the people.

“It’s not banned but has kind of become a politically incorrect thing to do,” said one Chinese data center operator about purchasing Nvidia’s H20 chips.

Alibaba, ByteDance, and MIIT did not immediately respond to a request for comment.

Chinese regulators have expressed growing disapproval of companies using Nvidia’s chips for any government- or security-related projects. Bloomberg reported on Tuesday that Chinese authorities had sent notices to a range of companies discouraging the use of the H20 chips, particularly for government-related work.


Reddit blocks Internet Archive to end sneaky AI scraping

“Until they’re able to defend their site and comply with platform policies (e.g., respecting user privacy, re: deleting removed content) we’re limiting some of their access to Reddit data to protect redditors,” Rathschmidt said.

A review of social media comments suggests that in the past, some Redditors have used the Wayback Machine to research deleted comments or threads. Those commenters noted that myriad other tools exist for surfacing deleted posts or researching a user’s activity, with some suggesting that the Wayback Machine was maybe not the easiest platform to navigate for that purpose.

Redditors have also turned to resources like IA during times when Reddit’s platform changes trigger content removals. Most recently in 2023, when changes to Reddit’s public API threatened to kill beloved subreddits, archives stepped in to preserve content before it was lost.

IA has not signaled whether it’s looking into fixes to get Reddit’s restrictions lifted and did not respond to Ars’ request to comment on how this change might impact the archive’s utility as an open web resource, given Reddit’s popularity.

The director of the Wayback Machine, Mark Graham, told Ars that IA has “a longstanding relationship with Reddit” and continues to have “ongoing discussions about this matter.”

It seems likely that Reddit is financially motivated to restrict AI firms from taking advantage of Wayback Machine archives, perhaps hoping to spur more lucrative licensing deals like the ones Reddit struck with OpenAI and Google. The terms of the OpenAI deal were kept quiet, but the Google deal was reportedly worth $60 million. Over the next three years, Reddit expects to make more than $200 million off such licensing deals.

Disclosure: Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.


Trump strikes “wild” deal making US firms pay 15% tax on China chip sales


“Extra penalty” for US firms

The deal won’t resolve national security concerns.

Ahead of an August 12 deadline for a US-China trade deal, Donald Trump’s tactics continue to confuse those trying to assess the country’s national security priorities regarding its biggest geopolitical rival.

For months, Trump has kicked the can down the road regarding a TikTok ban, allowing the app to continue operating despite supposedly urgent national security concerns that China may be using the app to spy on Americans. And now, in the latest baffling move, a US official announced Monday that Trump got Nvidia and AMD to agree to “give the US government 15 percent of revenue from sales to China of advanced computer chips,” Reuters reported. Those chips, about 20 policymakers and national security experts recently warned Trump, could be used to fuel China’s frontier AI, which seemingly poses an even greater national security risk.

Trump’s “wild” deal with US chip firms

Reuters granted two officials anonymity to discuss Trump’s deal with US chipmakers, because details have yet to be made public. Requiring US firms to pay for sales in China is an “unusual” move for a president, Reuters noted, and the Trump administration has yet to say what exactly it plans to do with the money.

For US firms, the deal may set an alarming precedent. Not only have analysts warned that the deal could “hurt margins” for both companies, but export curbs on Nvidia’s H20 chips, for example, had been established to prevent US technology thefts, secure US technology leadership, and protect US national security. Now the US government appears to be accepting a payment to overlook those alleged risks, without much reassurance that the policy won’t advantage China in the AI race.

The move drew immediate scrutiny from critics, including Geoff Gertz, a senior fellow at the US think tank Center for a New American Security, who told Reuters that he thinks the deal is “wild.”

“Either selling H20 chips to China is a national security risk, in which case we shouldn’t be doing it to begin with, or it’s not a national security risk, in which case, why are we putting this extra penalty on the sale?” Gertz posited.

At this point, the only reassurance from the Trump administration is an official suggesting (without providing any rationale) that selling H20 or equivalent chips—which are not Nvidia’s most advanced chips—no longer compromises national security.

Trump “trading away” national security

It remains unclear when or how the levy will be implemented.

For chipmakers, the levy is likely viewed as a relatively small price to pay to avoid export curbs. Nvidia had forecasted $8 billion in potential losses if it couldn’t sell its H20 chips to China. AMD expected $1 billion in revenue cuts, partly due to the loss of sales for its MI308 chips in China.

The firms apparently agreed to Trump’s deal as a condition to receive licenses to export those chips. But caving to Trump could come back to bite them in the long run, AJ Bell investment director Russ Mould told Reuters—perhaps especially if Trump faces increasing pressure over national security concerns.

“The Chinese market is significant for both these companies, so even if they have to give up a bit of the money they would otherwise make, it looks like a logical move on paper,” Mould said. However, the deal “is unprecedented and there is always the risk the revenue take could be upped or that the Trump administration changes its mind and re-imposes export controls.”

So far, AMD has not commented on the report. Nvidia’s spokesperson declined to comment beyond noting, “We follow rules the US government sets for our participation in worldwide markets.”

A former adviser to Joe Biden’s Commerce Department, Alasdair Phillips-Robins, told Reuters that the levy suggests the Trump administration “is trading away national security protections for revenue for the Treasury.”

Huawei close to unveiling new AI chip tech

The end of a 90-day truce between the US and China is rapidly approaching, with the US signaling that the truce will likely be extended soon as Trump attempts to get a long-sought-after meeting with China’s President Xi Jinping.

For China, gutting export curbs on chips remains a key priority in negotiations, the Financial Times reported Sunday. But Nvidia’s H20 chips, for example, are lower priority than high-bandwidth memory (HBM) chips, sources told FT.

Chinese state media has even begun attacking the H20 chips as a Chinese national security risk. It appears that China is urging a boycott of H20 chips due to questions linked to a recent congressional push to require chipmakers to build “backdoors” that would allow remote shutdowns of any chips detected as non-compliant with export curbs. China seemingly fears that the bill signals Nvidia’s chips already allow for US surveillance. (Nvidia has denied building such backdoors.)

Biden banned HBM exports to China last year, specifically moving to hamper innovation of Chinese chipmakers Huawei and Semiconductor Manufacturing International Corporation (SMIC).

Currently, US firm Micron remains a top supplier of HBM chips globally, along with South Korean firms Samsung Electronics and SK Hynix, but Chinese firms have notably lagged behind, South China Morning Post (SCMP) reported. One source told FT that China “had raised the HBM issue in some” Trump negotiations, likely directly seeking to lift Biden’s “HBM controls because they seriously constrain the ability of Chinese companies, including Huawei, to develop their own AI chips.”

For Trump, the HBM controls could be seen as leverage to secure another trade win. However, some experts are hoping that Trump won’t play that card, citing concerns from the Biden era that remain unaddressed.

If Trump bends to Chinese pressure and lifts HBM controls, China could more easily produce AI chips at scale, Biden had feared. That could even possibly endanger US firms’ standing as world leaders, seemingly including threatening Nvidia, a company that Trump discovered this term. Gregory Allen, an AI expert at a US think tank called the Center for Strategic and International Studies, told FT that “saying that we should allow more advanced HBM sales to China is the exact same as saying that we should help Huawei make better AI chips so that they can replace Nvidia.”

Meanwhile, Huawei is reportedly already innovating to help reduce China’s reliance on HBM chips, the SCMP reported on Monday. Chinese state-run Securities Times reported that Huawei is “set to unveil a technological breakthrough that could reduce China’s reliance on high-bandwidth memory (HBM) chips for running artificial intelligence reasoning models” at the 2025 Financial AI Reasoning Application Landing and Development Forum in Shanghai on Tuesday.

It’s a conveniently timed announcement, given the US-China trade deal deadline lands the same day. But the risk of Huawei possibly relying on US tech to reach that particular milestone is why HBM controls should remain off the table during Trump’s negotiations, one official told FT.

“Relaxing these controls would be a gift to Huawei and SMIC and could open the floodgates for China to start making millions of AI chips per year, while also diverting scarce HBM from chips sold in the US,” the official said.

Experts and policymakers had previously warned Trump that allowing H20 exports could similarly reduce access to semiconductors in the US, potentially disrupting the entire purpose of Trump’s trade war, which is building reliable US supply chains. Additionally, allowing exports will likely drive up costs to US chip firms at a time when they noted “projected data center demand from the US power market would require 90 percent of global chip supply through 2030, an unlikely scenario even without China joining the rush to buy advanced AI chips.” They’re now joined by others urging Trump to revive Biden’s efforts to block chip exports to China, or else risk empowering a geopolitical rival to become a global AI leader ahead of the US.


NASA plans to build a nuclear reactor on the Moon—a space lawyer explains why

These sought-after regions are scientifically vital and geopolitically sensitive, as multiple countries want to build bases or conduct research there. Building infrastructure in these areas would cement a country’s ability to access the resources there and potentially exclude others from doing the same.

Critics may worry about radiation risks. Even if designed for peaceful use and contained properly, reactors introduce new environmental and operational hazards, particularly in a dangerous setting such as space. But the UN guidelines do outline rigorous safety protocols, and following them could potentially mitigate these concerns.

Why nuclear? Because solar has limits

The Moon has little atmosphere and experiences 14-day stretches of darkness. In some shadowed craters, where ice is likely to be found, sunlight never reaches the surface at all. These issues make solar energy unreliable, if not impossible, in some of the most critical regions.

A small lunar reactor could operate continuously for a decade or more, powering habitats, rovers, 3D printers, and life-support systems. Nuclear power could be the linchpin for long-term human activity. And it’s not just about the Moon – developing this capability is essential for missions to Mars, where solar power is even more constrained.

The UN Committee on the Peaceful Uses of Outer Space sets guidelines to govern how countries act in outer space. Credit: United States Mission to International Organizations in Vienna, CC BY-NC-ND

A call for governance, not alarm

The United States has an opportunity to lead not just in technology but in governance. If it commits to sharing its plans publicly, following Article IX of the Outer Space Treaty and reaffirming a commitment to peaceful use and international participation, it will encourage other countries to do the same.

The future of the Moon won’t be determined by who plants the most flags. It will be determined by who builds what, and how. Nuclear power may be essential for that future. Building transparently and in line with international guidelines would allow countries to more safely realize that future.

A reactor on the Moon isn’t a territorial claim or a declaration of war. But it is infrastructure. And infrastructure will be how countries display power—of all kinds—in the next era of space exploration.

Michelle L.D. Hanlon, Professor of Air and Space Law, University of Mississippi. This article is republished from The Conversation under a Creative Commons license. Read the original article.


Toymaker suddenly drops lawsuit against “Sylvanian Drama” TikToker

A toy company has voluntarily dismissed its lawsuit against a popular TikTok and Instagram account called “Sylvanian Drama.”

Epoch Company Ltd. is the US maker of adorable fuzzy dolls called Calico Critters. Those dolls are known as “Sylvanian Families” in other markets, and more recently, they became a viral sensation after an Ireland-based content creator, Thea Von Engelbrechten, started making funny videos in which the dolls acted out dark, cringey adult storylines.

Claiming that the “Sylvanian Drama” videos infringed on Epoch’s intellectual property rights, including using an Epoch marketing image as her account’s profile picture while profiting off partnerships with major brands featured in her videos, the toymaker sued Von Engelbrechten, prompting her to immediately stop posting videos last year. Although some fans predicted the account might never come back, experts told Ars that Epoch may come to regret the lawsuit, perhaps alienating a potential market for their toys by going after a widely beloved content creator.

To some, Epoch appeared to be lashing out after Von Engelbrechten secured brand partnerships that seemed to be more lucrative than the toy company’s own brand deals. In that way, they also perhaps overlooked an opportunity to partner with Von Engelbrechten themselves, experts told Ars.

On Friday, Von Engelbrechten’s response was due in the lawsuit, but a story posted to her Instagram earlier this week signaled that a resolution may have been in the works. Ars could not reach Von Engelbrechten for comment, but she asked her fans to recommend a new account name in her story and confirmed that she would also be changing her account’s profile picture.


AI industry horrified to face largest copyright class action ever certified

According to the groups, allowing copyright class actions in AI training cases will result in a future where copyright questions remain unresolved and the risk of “emboldened” claimants forcing enormous settlements will chill investments in AI.

“Such potential liability in this case exerts incredibly coercive settlement pressure for Anthropic,” industry groups argued, concluding that “as generative AI begins to shape the trajectory of the global economy, the technology industry cannot withstand such devastating litigation. The United States currently may be the global leader in AI development, but that could change if litigation stymies investment by imposing excessive damages on AI companies.”

Some authors won’t benefit from class actions

Industry groups joined Anthropic in arguing that, generally, copyright suits are considered a bad fit for class actions because each individual author must prove ownership of their works. And the groups weren’t alone.

Also backing Anthropic’s appeal, advocates representing authors—including Authors Alliance, the Electronic Frontier Foundation, American Library Association, Association of Research Libraries, and Public Knowledge—pointed out that the Google Books case showed that proving ownership is anything but straightforward.

In the Anthropic case, advocates for authors criticized Alsup for basically judging all 7 million books in the lawsuit by their covers. The judge allegedly made “almost no meaningful inquiry into who the actual members are likely to be,” as well as “no analysis of what types of books are included in the class, who authored them, what kinds of licenses are likely to apply to those works, what the rightsholders’ interests might be, or whether they are likely to support the class representatives’ positions.”

Ignoring “decades of research, multiple bills in Congress, and numerous studies from the US Copyright Office attempting to address the challenges of determining rights across a vast number of books,” the district court seemed to expect that authors and publishers would easily be able to “work out the best way to recover” damages.
