Policy

ByteDance unlikely to sell TikTok, as former Trump official plots purchase

Former US Treasury Secretary Steven Mnuchin is reportedly assembling an investor group to buy TikTok as the US comes closer to enacting legislation forcing the company to either divest from Chinese ownership or face a nationwide ban.

“I think the legislation should pass, and I think it should be sold,” Mnuchin told CNBC Thursday. “It’s a great business, and I’m going to put together a group to buy TikTok.”

Mnuchin currently leads Liberty Strategic Capital, which describes itself as “a Washington DC-based private equity firm focused on investing in dynamic global technology companies.”

According to CNBC, there is already “common ground between Liberty and ByteDance”: SoftBank, which invested in ByteDance in 2018, partnered with Liberty in 2021, contributing what the Financial Times reported was an undisclosed amount to Mnuchin’s $2.5 billion private equity fund.

TikTok has made no indication that it would consider a sale should the legislation be enacted. Instead, TikTok CEO Shou Zi Chew is continuing to rally TikTok users to oppose the legislation. In a TikTok post viewed by 3.8 million users, the CEO described yesterday’s vote passing the law in the US House of Representatives as “disappointing.”

“This legislation, if signed into law, WILL lead to a ban of TikTok in the United States,” Chew said, seeming to suggest that TikTok’s CEO is not considering a sale to be an option.

But Mnuchin expects that TikTok may be forced to divest, as the US remains an increasingly significant market for the company. If so, he plans to be ready to snatch up the popular app, which TikTok says has 170 million monthly active users in the US.

“This should be owned by US businesses,” Mnuchin told CNBC. “There’s no way that the Chinese would ever let a US company own something like this in China.”

Chinese foreign ministry spokesperson Wang Wenbin has said that a TikTok ban in the US would hurt the US, while little evidence backs up the supposed national security threat that lawmakers claim is urgent to address, the BBC reported. Wang has accused the US of “bullying behavior that cannot win in fair competition.” This behavior, Wang said, “disrupts companies’ normal business activity, damages the confidence of international investors in the investment environment, and damages the normal international economic and trade order.”

Liberty and Mnuchin were not immediately available to comment on whether investors have shown any serious interest so far.

However, according to the Los Angeles Times, Mnuchin has already approached a “bunch of people” to consider investing. Mnuchin told CNBC that TikTok’s technology would be the driving force behind wooing various investors.

“It would be a combination of investors, so there would be no one investor that controls this,” Mnuchin told CNBC. “The issue is all about the technology. This needs to be controlled by US businesses.”

Mnuchin’s group would likely face competition to buy TikTok. ByteDance—which PitchBook data indicates was valued at $223.5 billion in 2023—should also expect an offer from former Activision Blizzard CEO Bobby Kotick, The Wall Street Journal reported.

It’s unclear how valuable TikTok is to ByteDance, CNBC reported, and Mnuchin has not specified what potential valuation his group would anticipate. But if TikTok’s algorithm—which was developed in China—is part of the sale, the price would likely be higher than if ByteDance refused to sell the tech fueling the social media app’s rapid rise to popularity.

In 2020, ByteDance weighed various ownership options while facing a potential US ban under the Trump administration, The New York Times reported. Mnuchin served as Secretary of the Treasury at that time. Although ByteDance ended up partnering with Oracle to protect American TikTok users’ data instead, people briefed on ByteDance’s discussions then confirmed that ByteDance was considering carving out TikTok, potentially allowing the company to “receive new investments from existing ByteDance investors.”

The Information provided a breakdown of the most likely investors to be considered by ByteDance back in 2020. Under that plan, though, ByteDance intended to retain a minority holding rather than completely divesting ownership, the Times reported.

Meta sues “brazenly disloyal” former exec over stolen confidential docs

A recently unsealed court filing has revealed that Meta has sued a former senior employee for “brazenly disloyal and dishonest conduct” while leaving Meta for an AI data startup called Omniva that The Information has described as “mysterious.”

According to Meta, its former vice president of infrastructure, Dipinder Singh Khurana (also known as T.S.), allegedly used his access to “confidential, non-public, and highly sensitive” information to steal more than 100 internal documents in a rushed scheme to poach Meta employees and borrow Meta’s business plans to speed up Omniva’s negotiations with key Meta suppliers.

Meta believes that Omniva—which Data Center Dynamics (DCD) reported recently “pivoted from crypto to AI cloud”—is “seeking to provide AI cloud computing services at scale, including by designing and constructing data centers.” But it was held back by a “lack of data center expertise at the top,” DCD reported.

The Information reported that Omniva began hiring Meta employees to fill the gaps in this expertise, including wooing Khurana away from Meta.

Last year, Khurana notified Meta that he was leaving on May 15, and that’s when Meta says it first observed what it alleges was Khurana’s “utter disregard for his contractual and legal obligations to Meta—including his confidentiality obligations to Meta set forth in the Confidential Information and Invention Assignment Agreement that Khurana signed when joining Meta.”

A Meta investigation found that during Khurana’s last two weeks at the company, he allegedly uploaded confidential Meta documents—including “information about Meta’s ‘Top Talent,’ performance information for hundreds of Meta employees, and detailed employee compensation information”—from Meta’s network to a Dropbox folder labeled with his new employer’s name.

“Khurana also uploaded several of Meta’s proprietary, highly sensitive, confidential, and non-public contracts with business partners who supply Meta with crucial components for its data centers,” Meta alleged. “And other documents followed.”

In addition to pulling documents, Khurana also allegedly sent “urgent” requests to subordinates for confidential information on a key supplier, including Meta’s pricing agreement “for certain computing hardware.”

“Unaware of Khurana’s plans, the employee provided Khurana with, among other things, Meta’s pricing-form agreement with that supplier for the computing hardware and the supplier’s Meta-specific preliminary pricing for a particular chip,” Meta alleged.

Some of these documents were “expressly marked confidential,” Meta alleged. Those include a three-year business plan and PowerPoints regarding “Meta’s future ‘roadmap’ with a key supplier” and “Meta’s 2022 redesign of its global-supply-chain group” that Meta alleged “would directly aid Khurana in building his own efficient and effective supply-chain organization” and afford a path for Omniva to bypass “years of investment.” Khurana also allegedly “uploaded a PowerPoint discussing Meta’s use of GPUs for artificial intelligence.”

Meta was apparently tipped off to this alleged betrayal when Khurana used his Meta email and network access to complete a writing assignment for Omniva as part of his hiring process. For this writing assignment, Khurana “disclosed non-public information about Meta’s relationship with certain suppliers that it uses for its data centers” when asked to “explain how he would help his potential new employer develop the supply chain for a company building data centers using specific technologies.”

In a seeming attempt to cover up the alleged theft of Meta documents, Khurana apparently “attempted to scrub” one document “of its references to Meta,” as well as removing a label marking it “CONFIDENTIAL—FOR INTERNAL USE ONLY.” But when replacing “Meta” with “X,” Khurana allegedly missed the term “Meta” in “at least five locations.”

“Khurana took such action to try and benefit himself or his new employer, including to help ensure that Khurana would continue to work at his new employer, continue to receive significant compensation from his new employer, and/or to enable Khurana to take shortcuts in building his supply-chain team at his new employer and/or helping to build his new employer’s business,” Meta alleged.

Ars could not immediately reach Khurana for comment. Meta noted that he has repeatedly denied breaching his contract or initiating contact with Meta employees who later joined Omniva. He also allegedly refused to sign a termination agreement that reiterates his confidentiality obligations.

EU votes to ban riskiest forms of AI and impose restrictions on others

Europe’s AI Act —

Lawmaker hails “world’s first binding law on artificial intelligence.”

The European Parliament today voted to approve the Artificial Intelligence Act, which will ban uses of AI “that pose unacceptable risks” and impose regulations on less risky types of AI.

“The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases,” a European Parliament announcement today said. “Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people’s vulnerabilities will also be forbidden.”

The ban on certain AI applications provides for penalties of up to 35 million euros or 7 percent of a firm’s “total worldwide annual turnover for the preceding financial year, whichever is higher.” Violations of other provisions have lower penalties.
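
The “whichever is higher” structure means the 35 million euro figure acts as a floor, with the percentage taking over for larger firms. Here is a minimal sketch of that calculation (the turnover figures below are hypothetical examples, not from the legislation):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for banned AI practices under the AI Act:
    35 million euros or 7 percent of total worldwide annual turnover
    for the preceding financial year, whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# A firm with 100 million euros in turnover is capped by the 35M floor,
# since 7 percent of its turnover is only 7 million euros.
print(max_fine_eur(100_000_000))    # 35000000.0

# A firm with 2 billion euros in turnover faces up to 140 million euros.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```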

There are exemptions to allow law enforcement use of remote biometric identification systems in certain cases. A European Commission summary of the legislation said:

All remote biometric identification systems are considered high-risk and subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited.

Narrow exceptions are strictly defined and regulated, such as when necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence.

“Strict obligations” for high-risk AI

The AI Act was supported by 523 members of the European Parliament (MEPs), while 46 voted against and 49 abstained. The legislation classifies AI into four categories of risk: unacceptable risk, high risk, limited risk, and minimal or no risk.

“High-risk AI systems will be subject to strict obligations before they can be put on the market,” the legislation summary said. Obligations include “adequate risk assessment and mitigation systems,” “logging of activity to ensure traceability of results,” “appropriate human oversight measures to minimise risk,” and other requirements.

The law drew opposition from the Computer & Communications Industry Association, a tech-industry lobby group.

“The agreed AI Act imposes stringent obligations on developers of cutting-edge technologies that underpin many downstream systems, and is therefore likely to slow down innovation in Europe,” the group said when a deal on the law was agreed to in December 2023. “Furthermore, certain low-risk AI systems will now be subjected to strict requirements without further justification, while others will be banned altogether. This could lead to an exodus of European AI companies and talent seeking growth elsewhere.”

The law will officially be on the books 20 days after its publication in the Official Journal of the European Union, the European Parliament announcement said. The law’s ban on prohibited practices will apply six months after that, but other regulations won’t take effect until later. The “obligations for high-risk systems” will only take effect after 36 months, the announcement said.

“We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency,” said MEP Brando Benifei, the Internal Market Committee co-rapporteur. An AI office will be formed “to support companies to start complying with the rules before they enter into force,” he said.

Risky AI categories

Examples of high-risk AI include AI used in robot-assisted surgery; credit scoring systems that can deny loans; law enforcement that may interfere with fundamental rights, such as evaluation of the reliability of evidence; and automated examination of visa applications.

The limited-risk category has to do with applications that aren’t transparent about AI usage. “The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust,” the European Commission said. “For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can take an informed decision to continue or step back. Providers will also have to ensure that AI-generated content is identifiable.”

AI-generated text that is “published with the purpose to inform the public on matters of public interest must be labelled as artificially generated,” and this requirement “also applies to audio and video content constituting deep fakes.”

AI with minimal or no risk “includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category,” the commission said. There would be no restrictions on this category.

Bill that could ban TikTok passes in House despite constitutional concerns

On Wednesday, the US House of Representatives passed a bill with a vote of 352–65 that could block TikTok in the US. Fifteen Republicans and 50 Democrats voted in opposition, and one Democrat voted present, CNN reported.

TikTok is not happy. A spokesperson told Ars, “This process was secret and the bill was jammed through for one reason: it’s a ban. We are hopeful that the Senate will consider the facts, listen to their constituents, and realize the impact on the economy, 7 million small businesses, and the 170 million Americans who use our service.”

Lawmakers insist that the Protecting Americans from Foreign Adversary Controlled Applications Act is not a ban. Instead, they claim the law gives TikTok a choice: either divest from ByteDance’s China-based owners or face the consequences of TikTok being cut off in the US.

Under the law—which still must pass the Senate, a more significant hurdle, where less consensus is expected and a companion bill has not yet been introduced—app stores and hosting services would face steep consequences if they provide access to apps controlled by foreign adversaries of the US. That includes allowing the app to be updated or maintained for US users who already have it on their devices.

Violations subject app stores and hosting services to fines of $5,000 for each individual US user “determined to have accessed, maintained, or updated a foreign adversary-controlled application.” With 170 million Americans currently on TikTok, that could add up quickly to eye-popping fines.
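
For a sense of scale, a back-of-the-envelope worst case (assuming, purely hypothetically, that every current US user counted as a violation):

```python
fine_per_user = 5_000       # dollars per violating US user, per the bill
us_users = 170_000_000      # TikTok's stated US user count

# Theoretical maximum exposure if every user triggered the fine
print(f"${fine_per_user * us_users:,}")  # $850,000,000,000
```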

If the bill becomes law, app stores and hosting services would have 180 days to limit access to foreign adversary-controlled apps. The bill specifically names TikTok and ByteDance as restricted apps, making it clear that lawmakers intend to quash the alleged “national security threat” that TikTok poses in the US.

House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.), a proponent of the bill, has said that “foreign adversaries like China pose the greatest national security threat of our time. With applications like TikTok, these countries are able to target, surveil, and manipulate Americans.” The proposed bill “ends this practice by banning applications controlled by foreign adversaries of the United States that pose a clear national security risk.”

McMorris Rodgers has also made it clear that “our goal is to get this legislation onto the president’s desk.” Joe Biden has indicated he will sign the bill into law, leaving the Senate as the final hurdle to clear. Senators told CNN that they were waiting to see what happened in the House before seeking a path forward in the Senate that would respect TikTok users’ civil liberties.

Attempts to ban TikTok have historically not fared well in the US, with a recent ban in Montana being blocked by a federal judge last December. Judge Donald Molloy granted TikTok’s request for a preliminary injunction, denouncing Montana’s ban as an unconstitutional infringement of Montana-based TikTok users’ rights.

More recently, the American Civil Liberties Union (ACLU) has slammed House lawmakers for rushing the bill through Congress, accusing lawmakers of attempting to stifle free speech. ACLU senior policy counsel Jenna Leventoff said in a press release that lawmakers were “once again attempting to trade our First Amendment rights for cheap political points during an election year.”

“Just because the bill sponsors claim that banning TikTok isn’t about suppressing speech, there’s no denying that it would do just that,” Leventoff said.

Some states are now trying to ban lab-grown meat

A franken-burger and a side of fries —

Spurious “war on ranching” cited as reason for legislation.

Cell-cultivated chicken is made in the pictured tanks at the Eat Just office on July 27, 2023, in Alameda, Calif.

Months in jail and thousands of dollars in fines and legal fees—those are the consequences Alabamians and Arizonans could soon face for selling cell-cultured meat products that could cut into the profits of ranchers, farmers, and meatpackers in each state.

State legislators from Florida to Arizona are seeking to ban meat grown from animal cells in labs, citing a “war on our ranching” and a need to protect the agriculture industry from efforts to reduce the consumption of animal protein, efforts aimed at cutting the sector’s high volume of climate-warming methane emissions.

Agriculture accounts for about 11 percent of the country’s greenhouse gas emissions, according to federal data, with livestock such as cattle making up a quarter of those emissions, predominantly from their burps, which release methane—a potent greenhouse gas that’s roughly 80 times more effective at warming the atmosphere than carbon dioxide over 20 years. Globally, agriculture accounts for about 37 percent of methane emissions.
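
Combining those two figures gives a rough sense of livestock’s slice of the total (a simple multiplication, assuming the shares can be composed directly):

```python
agriculture_share = 0.11  # agriculture's share of US greenhouse gas emissions
livestock_share = 0.25    # livestock's share of agricultural emissions

# Livestock as an approximate share of total US emissions
print(f"{agriculture_share * livestock_share:.1%}")  # 2.8%
```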

For years, climate activists have been calling for more scrutiny and regulation of emissions from the agricultural sector and for nations to reduce their consumption of meat and dairy products due to their climate impacts. Last year, over 150 countries pledged to voluntarily cut emissions from food and agriculture at the United Nations’ annual climate summit.

But the industry has avoided increased regulation and pushed back against efforts to decrease the consumption of meat, with help from local and state governments across the US.

Bills in Alabama, Arizona, Florida, and Tennessee are just the latest legislation passed in statehouses across the US that have targeted cell-cultured meat, which is produced by taking a sample of an animal’s muscle cells and growing them into edible products in a lab. Sixteen states—Alabama, Arkansas, Georgia, Kansas, Kentucky, Louisiana, Maine, Mississippi, Missouri, Montana, North Dakota, Oklahoma, South Carolina, South Dakota, Texas, and Wyoming—have passed laws addressing the use of the word “meat” in such products’ packaging, according to the National Agricultural Law Center at the University of Arkansas, with some prohibiting cell-cultured, plant-based, or insect-based food products from being labeled as meat.

“Cell-cultured meat products are so new that there’s not really a framework for how state and federal labeling will work together,” said Rusty Rumley, a senior staff attorney with the National Agricultural Law Center, resulting in no standardized requirements for how to label the products, though legislation has been proposed that could change that.

At the federal level, Rep. Mark Alford (R-Mo.) introduced the Fair and Accurate Ingredient Representation on Labels Act of 2024, which would authorize the United States Department of Agriculture to regulate imitation meat products and restrict their sale if they are not properly labeled, and US Sens. Jon Tester (D-Mont.) and Mike Rounds (R-S.D.) introduced a bill to ban schools from serving cell-cultured meat.

But while plant-based meat substitutes are widespread, cell-cultivated meats are not widely available, with none currently being sold in stores. Just last summer, federal agencies gave their first-ever approvals to two companies making cell-cultivated poultry products, which are appearing on restaurant menus. The meat substitutes have garnered the support of some significant investors, including billionaire Bill Gates, who has been the subject of attacks from supporters of some of the proposed state legislation.

“Let me start off by explaining why I drafted this bill,” said Rep. David Marshall, an Arizona Republican who proposed legislation to ban cell-cultured meat from being sold or produced in the state, during a hearing on the bill. “It’s because of organizations like the FDA and the World Economic Forum, also Bill Gates and others, who have openly declared war on our ranching.”

In Alabama, fear of “franken-meat” competition spurs legislation

In Alabama, an effort to ban lab-grown meat is winding its way through the State House in Montgomery.

There, state senators have already passed a bill that would make it a misdemeanor, punishable by up to three months in jail and a $500 fine, to sell, manufacture, or distribute what the proposed legislation labels “cultivated food products.” An earlier version of the bill called lab-grown protein “meat,” but it was quickly revised by lawmakers. The bill passed out of committee and through the Senate without opposition from any of its members.

Now, the bill is headed toward a vote in the Alabama House of Representatives, where the body’s health committee recently held a public hearing on the issue. Rep. Danny Crawford, who is carrying the bill in the body, told fellow lawmakers during that hearing that he’s concerned about two issues: health risks and competition for Alabama farmers.

“Lab-grown meat or whatever you want to call it—we’re not sure of all of the long-term problems with that,” he said. “And it does compete with our farming industry.”

Crawford said that legislators had heard from NASA, which expressed concern about the bill’s impact on programs to develop alternative proteins for astronauts. An amendment to the bill will address that problem, Crawford said, allowing an exemption for research purposes.

50 injured on Boeing 787 as “strong shake” reportedly sent heads into ceiling

Boeing nosedive —

LATAM Airlines said “technical event” in mid-flight “caused a strong movement.”

A LATAM Airlines Boeing 787-9 Dreamliner taxiing at Arturo Merino Benítez International Airport in Chile on March 20, 2019.

About 50 people were injured on a LATAM Airlines flight today in which a Boeing 787-9 Dreamliner suffered a technical problem that caused a “strong shake,” reportedly causing some passengers’ heads to hit the ceiling.

The plane flying from Australia to New Zealand “experienced a strong shake during flight, the cause of which is currently under investigation,” LATAM said on its website today. LATAM, a Chilean airline, was also quoted in news reports as saying the plane suffered “a technical event during the flight which caused a strong movement.”

The Boeing plane, carrying 263 passengers and nine flight and cabin crew members, landed at Auckland Airport as scheduled. New Zealand ambulance service Hato Hone St. John published a statement saying that its “ambulance crews assessed and treated approximately 50 patients, with one patient in a serious condition and the remainder in a moderate to minor condition.” Twelve patients were taken to hospitals, the statement said.

Most of the patients were “discharged shortly after,” LATAM said on its website. “Only one passenger and one cabin crew member required additional attention, but without any life-threatening risks.”

The plane was originally supposed to continue from New Zealand to Chile, but that leg of the trip was rescheduled. LATAM said it is “working in coordination with the respective authorities to support the investigations into the incident.”

Boeing told news outlets that it is “working to gather more information about the flight and will provide any support needed by our customers.” We contacted Boeing today and will update this article if it provides more information.

Passenger describes nosedive, people hitting the ceiling

Passenger Brian Jokat described the frightening incident in interviews with several media outlets. “The ceiling’s broken from people’s heads and bodies hitting it,” Jokat said, according to ABC News. “Basically neck braces were being put on people, guys’ heads were cut and they were bleeding. It was just crazy.”

Jokat was also quoted as saying that he “felt the plane take a nosedive—it felt like it was at the top of a roller coaster, and then it flattened out again.” It all happened in “split seconds,” he reportedly said.

Today’s flight came about two months after a near-disaster involving a Boeing 737 Max 9 plane used by Alaska Airlines. On January 5, the plane was forced to return to Portland International Airport in Oregon after a passenger door plug blew off the aircraft during flight.

The National Transportation Safety Board concluded that four bolts were missing from the plane. The Justice Department has opened a criminal investigation into the incident, The Wall Street Journal reported Saturday.

Boeing was seeking a safety exemption from the US Federal Aviation Administration related to its 737 Max 7 aircraft, but withdrew the application in January after the 737 Max 9 door-plug blowout.

Nvidia sued over AI training data as copyright clashes continue

In authors’ bad books —

Copyright suits over AI training data reportedly decreasing AI transparency.

Book authors are suing Nvidia, alleging that the chipmaker’s AI platform NeMo—used to power customized chatbots—was trained on a controversial dataset that illegally copied and distributed their books without their consent.

In a proposed class action, novelists Abdi Nazemian (Like a Love Story), Brian Keene (Ghost Walk), and Stewart O’Nan (Last Night at the Lobster) argued that Nvidia should pay damages and destroy all copies of the Books3 dataset used to power NeMo large language models (LLMs).

The Books3 dataset, the novelists argued, copied “all of Bibliotik,” a shadow library of approximately 196,640 pirated books. Initially shared through the AI community Hugging Face, the Books3 dataset today “is defunct and no longer accessible due to reported copyright infringement,” the Hugging Face website says.

According to the authors, Hugging Face removed the dataset last October, but not before AI companies like Nvidia grabbed it and “made multiple copies.” By training NeMo models on this dataset, the authors alleged that Nvidia “violated their exclusive rights under the Copyright Act.” The authors argued that the US district court in San Francisco must intervene and stop Nvidia because the company “has continued to make copies of the Infringed Works for training other models.”

A Hugging Face spokesperson clarified to Ars that “Hugging Face never removed this dataset, and we did not host the Books3 dataset on the Hub.” Instead, “Hugging Face hosted a script that downloads the data from The Eye, which is the place where EleutherAI hosted the data,” until “EleutherAI removed the data from The Eye” over copyright concerns, causing the dataset script on Hugging Face to break.
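
As context for why the script “broke” rather than being deleted: a hosted dataset script of this kind typically contains no data itself, only download logic pointing at an external host, so it fails once that host removes the files. A minimal sketch of the pattern (the URL and filename are hypothetical, not the actual Books3 locations):

```python
import urllib.request

# Hypothetical external host; the script holds a pointer, not the data.
DATA_URL = "https://example-data-host.net/books3.tar.gz"

def download_dataset(dest: str = "books3.tar.gz") -> str:
    """Fetch the archive from the external host the script points at.

    If the host has taken the file down, urlretrieve raises an HTTPError
    (e.g., 404), which is how a download script "breaks" even though
    the script itself was never removed.
    """
    urllib.request.urlretrieve(DATA_URL, dest)
    return dest
```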

Nvidia did not immediately respond to Ars’ request to comment.

Demanding a jury trial, authors are hoping the court will rule that Nvidia has no possible defense for both allegedly violating copyrights and intending “to cause further infringement” by distributing NeMo models “as a base from which to build further models.”

AI models decreasing transparency amid suits

The class action was filed by the same legal team representing authors suing OpenAI, whose lawsuit recently saw many claims dismissed, but crucially not their claim of direct copyright infringement. Lawyers told Ars last month that authors would be amending their complaints against OpenAI and were “eager to move forward and litigate” their direct copyright infringement claim.

In that lawsuit, the authors alleged copyright infringement both when OpenAI trained LLMs and when chatbots referenced books in outputs. But authors seemed more concerned about alleged damages from chatbot outputs, warning that AI tools had an “uncanny ability to generate text similar to that found in copyrighted textual materials, including thousands of books.”

Uniquely, in the Nvidia suit, authors are focused exclusively on Nvidia’s training data, seemingly concerned that Nvidia could empower businesses to create any number of AI models trained on the controversial dataset, potentially affecting thousands of authors whose works could allegedly be broadly infringed simply by the training of these models.

There’s no telling yet how courts will rule on the direct copyright claims in either lawsuit—or in the New York Times’ lawsuit against OpenAI—but so far, OpenAI has failed to convince courts to toss claims aside.

However, OpenAI doesn’t appear very shaken by the lawsuits. In February, OpenAI said that it expected to beat book authors’ direct copyright infringement claim at a “later stage” of the case and, most recently in the New York Times case, tried to convince the court that NYT “hacked” ChatGPT to “set up” the lawsuit.

And Microsoft, a co-defendant in the NYT lawsuit, even more recently introduced a new argument that could help tech companies defeat copyright suits over LLMs. Last month, Microsoft argued that The New York Times was attempting to stop a “groundbreaking new technology” and would fail, just like movie producers attempting to kill off the VCR in the 1980s.

“Despite The Times’s contentions, copyright law is no more an obstacle to the LLM than it was to the VCR (or the player piano, copy machine, personal computer, Internet, or search engine),” Microsoft wrote.

In December, Hugging Face’s machine learning and society lead, Yacine Jernite, noted that developers appeared to be growing less transparent about training data after copyright lawsuits raised red flags about companies using the Books3 dataset, “especially for commercial models.”

Meta, for example, “limited the amount of information [it] disclosed about” its LLM, Llama-2, “to a single paragraph description and one additional page of safety and bias analysis—after [its] use of the Books3 dataset when training the first Llama model was brought up in a copyright lawsuit,” Jernite wrote.

Jernite warned that AI models lacking transparency could hinder “the ability of regulatory safeguards to remain relevant as training methods evolve, of individuals to ensure that their rights are respected, and of open science and development to play their role in enabling democratic governance of new technologies.” To support “more accountability,” Jernite recommended “minimum meaningful public transparency standards to support effective AI regulation,” as well as companies providing options for anyone to opt out of their data being included in training data.

“More data transparency supports better governance and fosters technology development that more reliably respects peoples’ rights,” Jernite wrote.

Apple and Tesla feel the pain as China opts for homegrown products

Domestically made smartphones were much in evidence at the National People’s Congress in Beijing.

Apple and Tesla cracked China, but now the two largest US consumer companies in the country are experiencing cracks in their own strategies as domestic rivals gain ground and patriotic buying often trumps their allure.

Falling market share and sales figures reported this month indicate the two groups face rising competition and the whiplash of US-China geopolitical tensions. Both have turned to discounting to try to maintain their appeal.

A shift away from Apple, in particular, has been sharp, spurred on by a top-down campaign to reduce iPhone usage among state employees and the triumphant return of Chinese national champion Huawei, which last year overcame US sanctions to roll out a homegrown smartphone capable of near 5G speeds.

Apple’s troubles were on full display at China’s annual Communist Party bash in Beijing this month, where a dozen participants told the Financial Times they were using phones from Chinese brands.

“For people coming here, they encourage us to use domestic phones, because phones like Apple are not safe,” said Zhan Wenlong, a nuclear physicist and party delegate. “[Apple phones] are made in China, but we don’t know if the chips have back doors.”

Wang Chunru, a member of China’s top political advisory body, the Chinese People’s Political Consultative Conference, said he was using a Huawei device. “We all know Apple has eavesdropping capabilities,” he said.

Delegate Li Yanfeng from Guangxi said her phone was manufactured by Huawei. “I trust domestic brands, using them was a uniform request.”

Outside of the US, China is the single largest market for both Apple and Tesla, contributing 19 percent and 22 percent of their total revenues, respectively, during their most recent fiscal years. Their mounting challenges in the country have caught Wall Street’s attention, contributing to Apple’s 9 percent share price slide this year and Tesla’s 28 percent fall, making them the poorest performers among the so-called Magnificent Seven tech stocks.

Apple and Tesla are the latest foreign companies to feel the pain of China’s shift toward local brands. Sales of Nike and Adidas clothing have yet to return to their 2021 peak. A recent McKinsey report showed a growing preference among Chinese consumers for local brands.

Op-ed: Charges against journalist Tim Burke are a hack job

Permission required? —

Burke was indicted after sharing outtakes of a Fox News interview.

Caitlin Vogus is the deputy director of advocacy at Freedom of the Press Foundation and a First Amendment lawyer. Jennifer Stisa Granick is the surveillance and cybersecurity counsel with the ACLU’s Speech, Privacy, and Technology Project. The opinions in this piece do not necessarily reflect the views of Ars Technica.

Imagine a journalist finds a folder on a park bench, opens it, and sees a telephone number inside. She dials the number. A famous rapper answers and spews a racist rant. If no one gave her permission to open the folder and the rapper’s telephone number was unlisted, should the reporter go to jail for publishing what she heard?

If that sounds ridiculous, it’s because it is. And yet, add in a computer and the Internet, and that’s basically what a newly unsealed federal indictment accuses Florida journalist Tim Burke of doing when he found and disseminated outtakes of Tucker Carlson’s Fox News interview in which Ye, the artist formerly known as Kanye West, went on the first of many antisemitic diatribes.

The vast majority of the charges against Burke are under the Computer Fraud and Abuse Act (CFAA), a law that the ACLU and Freedom of the Press Foundation have long argued is vague and subject to abuse. Now, in a new and troubling move, the government suggests in the Burke indictment that journalists violate the CFAA if they don’t ask for permission to use information they find publicly posted on the Internet.

According to news reports and statements from Burke’s lawyer, the charges are, in part, related to the unaired segments of the interview between Carlson and Ye. After Burke gave the video to news sites to publish, Ye’s disturbing remarks, and Fox’s decision to edit them out of the interview when broadcast, quickly made national news.

According to Burke, the video of Carlson’s interview with Ye was streamed via a publicly available, unencrypted URL that anyone could access by typing the address into their browser. Those URLs were not listed in any search engine, but Burke says that a source pointed him to a website on the Internet Archive where a radio station had posted “demo credentials” that gave access to a page where the URLs were listed.

The credentials were for a webpage created by LiveU, a company that provides video streaming services to broadcasters. Using the demo username and password, Burke logged into the website, and, Burke’s lawyer claims, the list of URLs for video streams automatically downloaded to his computer.

And that, the government says, is a crime. It charges Burke with violating the CFAA’s prohibition on intentionally accessing a computer “without authorization” because he accessed the LiveU website and URLs without having been authorized by Fox or LiveU. In other words, because Burke didn’t ask Fox or LiveU for permission to use the demo account or view the URLs, the indictment alleges, he acted without authorization.

But there’s a difference between LiveU and Fox’s subjective wishes about what journalists or others would find, and what the services and websites they maintained and used permitted people to find. The relevant question should be the latter. Generally, it is both a First Amendment and a due process problem to allow a private party’s desire to control information to form the basis of criminal prosecutions.

The CFAA charges against Burke take advantage of the vagueness of the statutory term “without authorization.” The law doesn’t define the term, and its murkiness has enabled plenty of ill-advised prosecutions over the years. In Burke’s case, because the list of unencrypted URLs was password protected and the company didn’t want outsiders to access the URLs, the government claims that Burke acted “without authorization.”

Using a published demo password to get a list of URLs, which anyone could have used a software program to guess and access, isn’t that big of a deal. The big deal is that Burke’s reporting embarrassed Fox News. But that’s what journalists are supposed to do—uncover questionable practices of powerful entities.

Journalists need never ask corporations for permission to investigate or embarrass them, and the law shouldn’t encourage or force them to. Just because someone doesn’t like what a reporter does online doesn’t mean that it’s without authorization and that what he did is therefore a crime.

Still, this isn’t the first time that prosecutors have abused computer hacking laws to go after journalists and others, like security researchers. Until a 2021 Supreme Court ruling, researchers and journalists worried that their good faith investigations of algorithmic discrimination could expose them to CFAA liability for exceeding sites’ terms of service.

Even now, the CFAA and similarly vague state computer crime laws continue to threaten press freedom. Just last year, in August, police raided the newsroom of the Marion County Record and accused its journalists of breaking state computer hacking laws by using a government website to confirm a tip from a source. Police dropped the case after a national outcry.

The White House seemed concerned about the Marion ordeal. But now the same administration is using an overly broad interpretation of a hacking law to target a journalist. Merely filing charges against Burke sends a chilling message that the government will attempt to penalize journalists for engaging in investigative reporting it dislikes.

Even worse, if the Burke prosecution succeeds, it will encourage the powerful to use the CFAA as a veto over news reporting based on online sources just because it is embarrassing or exposes their wrongdoing. These charges were also an excuse for the government to seize Burke’s computer equipment and digital work—and demand to keep it permanently. This seizure interferes with Burke’s ongoing reporting, a tactic that it could repeat in other investigations.

If journalists must seek permission to publish information they find online from the very people they’re exposing, as the government’s indictment of Burke suggests, it’s a good bet that most information from the obscure but public corners of the Internet will never see the light of day. That would endanger both journalism and public access to important truths. The court reviewing Burke’s case should dismiss the charges.

Florida middle-schoolers charged with making deepfake nudes of classmates

no consent —

AI tool was used to create nudes of 12- to 13-year-old classmates.

Two teenage boys from Miami, Florida, were arrested in December for allegedly creating and sharing AI-generated nude images of male and female classmates without consent, according to police reports obtained by WIRED via public record request.

The arrest reports say the boys, aged 13 and 14, created the images of the students who were “between the ages of 12 and 13.”

The Florida case appears to be the first to come to light in which arrests and criminal charges resulted from the alleged sharing of AI-generated nude images. The boys were charged with third-degree felonies—the same level of crimes as grand theft auto or false imprisonment—under a state law passed in 2022 that makes it a felony to share “any altered sexual depiction” of a person without their consent.

The parent of one of the boys arrested did not respond to a request for comment in time for publication. The parent of the other boy said that he had “no comment.” The detective assigned to the case and the state attorney handling it did not respond to requests for comment in time for publication.

As AI image-making tools have become more widely available, there have been several high-profile incidents in which minors allegedly created AI-generated nude images of classmates and shared them without consent. No arrests have been disclosed in the publicly reported cases—at Issaquah High School in Washington, Westfield High School in New Jersey, and Beverly Vista Middle School in California—even though police reports were filed. At Issaquah High School, police opted not to press charges.

The first media reports of the Florida case appeared in December, saying that the two boys were suspended from Pinecrest Cove Academy in Miami for 10 days after school administrators learned of allegations that they created and shared fake nude images without consent. After parents of the victims learned about the incident, several began publicly urging the school to expel the boys.

Nadia Khan-Roberts, the mother of one of the victims, told NBC Miami in December that for all of the families whose children were victimized the incident was traumatizing. “Our daughters do not feel comfortable walking the same hallways with these boys,” she said. “It makes me feel violated, I feel taken advantage [of] and I feel used,” one victim, who asked to remain anonymous, told the TV station.

WIRED obtained arrest records this week that say the incident was reported to police on December 6, 2023, and that the two boys were arrested on December 22. The records accuse the pair of using “an artificial intelligence application” to make the fake explicit images. The name of the app was not specified and the reports claim the boys shared the pictures between each other.

“The incident was reported to a school administrator,” the reports say, without specifying who reported it, or how that person found out about the images. After the school administrator “obtained copies of the altered images” the administrator interviewed the victims depicted in them, the reports say, who said that they did not consent to the images being created.

After their arrest, the two boys accused of making the images were transported to the Juvenile Service Department “without incident,” the reports say.

A handful of states have laws on the books that target fake, nonconsensual nude images. There’s no federal law targeting the practice, but a group of US senators recently introduced a bill to combat the problem after fake nude images of Taylor Swift were created and distributed widely on X.

The boys were charged under a Florida law passed in 2022 that state legislators designed to curb harassment involving deepfake images made using AI-powered tools.

Stephanie Cagnet Myron, a Florida lawyer who represents victims of nonconsensually shared nude images, tells WIRED that anyone who creates fake nude images of a minor would be in possession of child sexual abuse material, or CSAM. However, she claims it’s likely that the two boys accused of making and sharing the material were not charged with CSAM possession due to their age.

“There’s specifically several crimes that you can charge in a case, and you really have to evaluate what’s the strongest chance of winning, what has the highest likelihood of success, and if you include too many charges, is it just going to confuse the jury?” Cagnet Myron added.

Mary Anne Franks, a professor at the George Washington University School of Law and a lawyer who has studied the problem of nonconsensual explicit imagery, says it’s “odd” that Florida’s revenge porn law, which predates the 2022 statute under which the boys were charged, makes the offense only a misdemeanor, while the charges in this case are felonies.

“It is really strange to me that you impose heftier penalties for fake nude photos than for real ones,” she says.

Franks adds that although she believes distributing nonconsensual fake explicit images should be a criminal offense, thus creating a deterrent effect, she doesn’t believe offenders should be incarcerated, especially not juveniles.

“The first thing I think about is how young the victims are and worried about the kind of impact on them,” Franks says. “But then [I] also question whether or not throwing the book at kids is actually going to be effective here.”

This story originally appeared on wired.com.

Tesla drivers who sued over exaggerated EV range are forced into arbitration

Tesla beats drivers —

Judge upholds arbitration agreement but says Tesla can still face injunction.

Tesla Superchargers at Bonarka shopping center in Krakow, Poland, on March 4, 2024.

Tesla drivers who say the carmaker “grossly” exaggerated the ranges of its electric vehicles have lost their attempt to sue Tesla as a class. They will have to pursue claims individually in arbitration, a federal judge ruled yesterday.

Two related lawsuits were filed after a Reuters investigation last year found that Tesla consistently exaggerated the driving range of its electric vehicles, leading car owners to think something was broken when the actual driving range was much lower than advertised. Tesla reportedly created a “Diversion Team” to handle these complaints and routinely canceled service appointments because there was no way to improve the actual distance Tesla cars could drive between charges.

Several Tesla drivers sued in US District Court for the Northern District of California, seeking class-action status to represent buyers of Tesla cars.

When buying their Teslas, each named plaintiff in the two lawsuits signed an order agreement that included an arbitration provision, US District Judge Yvonne Gonzalez Rogers wrote. The agreement says that “any dispute arising out of or relating to any aspect of the relationship between you and Tesla will not be decided by a judge or jury but instead by a single arbitrator in an arbitration administered by the American Arbitration Association.”

The agreement has a severance clause that says, “If a court or arbitrator decides that any part of this agreement to arbitrate cannot be enforced as to a particular claim for relief or remedy, then that claim or remedy (and only that claim or remedy) must be brought in court and any other claims must be arbitrated.”

Tesla drivers argued that the arbitration agreement is not enforceable under the McGill v. Citibank precedent, in which the California Supreme Court ruled that arbitration provisions are unenforceable if they waive a plaintiff’s right to seek public injunctive relief. However, the McGill precedent doesn’t always give plaintiffs the right to pursue claims as a class, Gonzalez Rogers wrote. In the Tesla case, “the Arbitration Provision does not prohibit plaintiffs from pursuing public injunctive relief in their individual capacities,” the ruling said.

Tesla could still be hit with injunction

Public injunctive relief is “brought on behalf of an individual for the benefit of the public, not as a class or representative claim,” the judge wrote. Public injunctive relief is supposed to benefit the public at large. When an injunction benefits the plaintiff, it does so “only incidentally and/or as a member of the general public.”

In other words, a Tesla driver could win an arbitration case and seek an injunction that forces Tesla to change its practices. In a case won by Comcast, the US Court of Appeals for the 9th Circuit in 2021 stated that “public injunctive relief within the meaning of McGill is limited to forward-looking injunctions that seek to prevent future violations of law for the benefit of the general public as a whole, as opposed to a particular class of persons… without the need to consider the individual claims of any non-party.”

Gonzalez Rogers ruled that Tesla’s arbitration agreement “permits plaintiffs to seek public injunctive relief in arbitration.” The US District Court could also issue an injunction against Tesla after an arbitration case.

The Tesla drivers are seeking remedies under the California Consumer Legal Remedies Act (CLRA), the California Unfair Competition Law (UCL), and the California False Advertising Law (FAL). After arbitration, the court “will be able to craft appropriate public injunctive relief if plaintiffs successfully arbitrate their UCL, FAL, and CLRA claims and such relief is deemed unavailable,” Gonzalez Rogers wrote.

The judge stayed the case “pending resolution of the arbitration in case it is required to adjudicate any request for public injunctive relief… The Court finds that the Arbitration Provision does not prohibit plaintiffs from pursuing public injunctive relief in their individual capacities. To the extent an arbitrator finds otherwise, the Court STAYS the action as such relief is severable and can be separately adjudicated by this Court.”

Tesla arbitration clause upheld in earlier case

Tesla previously won a different case in the same court involving its arbitration clause. In September 2023, Judge Haywood Gilliam Jr. ruled that four Tesla drivers who sued the company over its allegedly deceptive “self-driving” claims would have to go to arbitration instead of pursuing a class action.

The plaintiffs in that case argued that “Tesla’s arbitration agreement is unconscionable, and thus [un]enforceable.” They said the arbitration agreement “is not referenced on the Order page” and “is buried in small font in the middle of an Order Agreement, which is only accessible through an inconspicuous hyperlink.”

Ruling against the plaintiffs, Gilliam found that Tesla’s “order payment screens provided conspicuous notice of the order agreements.” He also found that provisions such as a 30-day opt-out clause were enforceable, even though Tesla drivers argued it was too short because it “typically takes much more than 30 days for Tesla to configure and deliver a car.”

US lawmakers vote 50-0 to force sale of TikTok despite angry calls from users

Divest or get out —

Lawmaker: TikTok must “sever relationship with the Chinese Communist Party.”

The House Commerce Committee today voted 50-0 to approve a bill that would force TikTok owner ByteDance to sell the company or lose access to the US market.

The Protecting Americans from Foreign Adversary Controlled Applications Act “addresses the immediate national security risks posed by TikTok and establishes a framework for the Executive Branch to protect Americans from future foreign adversary controlled applications,” a committee memo said. “If an application is determined to be operated by a company controlled by a foreign adversary—like ByteDance, Ltd., which is controlled by the People’s Republic of China—the application must be divested from foreign adversary control within 180 days.”

If the bill passes in the House and Senate and is signed into law by President Biden, TikTok would eventually be dropped from app stores in the US if its owner doesn’t sell. It also would lose access to US-based web-hosting services.

“If the application is not divested, entities in the United States would be prohibited from distributing the application through an application marketplace or store and providing web hosting services,” the committee memo said.

Chair: “CCP weaponizes applications it controls”

The bill was introduced on Tuesday and had 20 sponsors split evenly between Democrats and Republicans. TikTok urged its users to protest the bill, sending a notification that said, “Congress is planning a total ban of TikTok… Let Congress know what TikTok means to you and tell them to vote NO.”

Many users called lawmakers’ offices to complain, congressional staffers told Politico. “It’s so so bad. Our phones have not stopped ringing. They’re teenagers and old people saying they spend their whole day on the app and we can’t take it away,” one House GOP staffer was quoted as saying.

House Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.) said that TikTok enlisting users to call lawmakers showed “in real time how the Chinese Communist Party can weaponize platforms like TikTok to manipulate the American people.”

“This is just a small taste of how the CCP weaponizes applications it controls to manipulate tens of millions of people to further their agenda. These applications present a clear national security threat to the United States and necessitate the decisive action we will take today,” she said before the vote.

The American Civil Liberties Union opposes the TikTok bill, saying it “would violate the First Amendment rights of hundreds of millions of Americans who use the app to communicate and express themselves daily.”

Bill sponsor: “It’s not a ban”

Bill sponsor Rep. Mike Gallagher (R-Wis.) expressed anger at TikTok for telling its users that the bill would ban the app completely, pointing out that the bill would only ban the app if it isn’t sold.

“If you actually read the bill, it’s not a ban. It’s a divestiture,” Gallagher said, according to Politico. Gallagher also said his bill puts the decision “squarely in the hands of TikTok to sever their relationship with the Chinese Communist Party.”

TikTok issued a statement calling the bill “an outright ban of TikTok, no matter how much the authors try to disguise it.” The House Commerce Committee responded to TikTok’s claim, calling it “yet another lie.”

While the bill text could potentially wrap in other apps in the future, it specifically lists the ByteDance-owned TikTok as a “foreign adversary controlled application.”

“It shall be unlawful for an entity to distribute, maintain, or update (or enable the distribution, maintenance, or updating of) a foreign adversary controlled application,” the bill says. An app would be allowed to stay in the US market after a divestiture if the president determines that the sale “would result in the relevant covered company no longer being controlled by a foreign adversary.”
