Policy

Japan wins 2-year “war on floppy disks,” kills regulations requiring old tech

Farewell, floppy —

But what about fax machines?

About two years after the country’s digital minister publicly declared a “war on floppy disks,” Japan reportedly stopped using floppy disks in governmental systems as of June 28.

Per a Reuters report on Wednesday, Japan’s government “eliminated the use of floppy disks in all its systems.” The report notes that by mid-June, Japan’s Digital Agency (a body set up during the COVID-19 pandemic and aimed at updating government technology) had “scrapped all 1,034 regulations governing their use, except for one environmental stricture related to vehicle recycling.” That suggests one government use case could still rely on floppy disks, though no further details were available.

Digital Minister Taro Kono, the politician behind the modernization of the Japanese government’s tech, has made his distaste for floppy disks and other old office tech, like fax machines, quite public. Kono, who’s reportedly considering a second run for the presidency of Japan’s ruling Liberal Democratic Party, told Reuters in a statement today:

We have won the war on floppy disks on June 28!

Although Kono only announced plans to eradicate floppy disks from the government two years ago, it’s been 20 years since floppy disks were in their prime and 53 years since they debuted. It was only in January 2024 that the Japanese government stopped requiring physical media, like floppy disks and CD-ROMs, for 1,900 types of submissions to the government, such as business filings and submission forms for citizens.

The timeline may be surprising, considering that the last company to make floppy disks, Sony, stopped doing so in 2011. As a storage medium, of course, floppies can’t compete with today’s options since most floppies max out at 1.44MB (2.88MB floppies were also available). And you’ll be hard-pressed to find a modern system that can still read the disks. There are also basic concerns around the old storage format, such as Tokyo police reportedly losing a pair of floppy disks with information on dozens of public housing applicants in 2021.
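To illustrate the capacity gap, here’s a quick back-of-the-envelope sketch; the photo and movie file sizes are our illustrative assumptions, not figures from the report:

```python
import math

# How many 1.44MB floppies common modern files would need (illustrative).
FLOPPY_CAPACITY_MB = 1.44     # standard 3.5-inch high-density floppy
photo_mb = 5                  # assumed size of a typical smartphone photo
movie_gb = 5                  # assumed size of an HD movie file

# Disks needed, rounding up since a file can't use a fraction of a disk
disks_for_photo = math.ceil(photo_mb / FLOPPY_CAPACITY_MB)
disks_for_movie = math.ceil(movie_gb * 1024 / FLOPPY_CAPACITY_MB)

print(disks_for_photo)   # 4 disks for a single photo
print(disks_for_movie)   # 3556 disks for a single movie
```

Under those assumptions, a single phone photo would span several disks and a movie would need a stack of thousands, which is why the format survives mostly in legacy industrial systems rather than general storage.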

But Japan’s government isn’t the only one with surprisingly recent ties to the technology. For example, San Francisco’s Muni Metro light rail relies on a train control system whose software runs off floppy disks, and it plans to keep doing so until 2030. The US Air Force used 8-inch floppies until 2019.

Outside of the public sector, floppy disks remain common in numerous industries, including embroidery, cargo airlines, and CNC machining. We reported on Chuck E. Cheese using floppy disks for its animatronics as recently as January 2023.

Modernization resistance

Now that the Japanese government considers its reliance on floppy disks over, eyes are on it to see what, if any, other modernization overhauls it will make.

Despite various technological achievements, the country has a reputation for holding on to dated technology. The Institute for Management Development’s (IMD) 2023 World Digital Competitiveness Ranking listed Japan as number 32 out of 64 economies. The IMD says its rankings measure the “capacity and readiness of 64 economies to adopt and explore digital technologies as a key driver for economic transformation in business, government, and wider society.”

It may be a while before the government is ready to let go of some older technologies. For example, government officials have reportedly resisted moving to the cloud for administrative systems. Kono urged government offices to quit requiring hanko personal stamps in 2020, but per The Japan Times, movement away from the seal is occurring at a “glacial pace.”

Many workplaces in Japan also opt for fax machines over email, and 2021 plans to remove fax machines from government offices have been tossed due to resistance.

Some believe Japan’s reliance on older technology stems from the comfort and efficiencies associated with analog tech as well as governmental bureaucracy.

Millions of OnlyFans paywalls make it hard to detect child sex abuse, cops say

OnlyFans’ paywalls make it hard for police to detect child sexual abuse materials (CSAM) on the platform, Reuters reported—especially new CSAM that can be harder to uncover online.

Because each OnlyFans creator posts their content behind their own paywall, five specialists in online child sexual abuse told Reuters that it’s hard to independently verify just how much CSAM is posted. Cops would seemingly need to subscribe to each account to monitor the entire platform, one expert who aids in police CSAM investigations, Trey Amick, suggested to Reuters.

OnlyFans claims that the amount of CSAM on its platform is extremely low. Out of 3.2 million accounts sharing “hundreds of millions of posts,” OnlyFans only removed 347 posts as suspected CSAM in 2023. Each post was voluntarily reported to the CyberTipline of the National Center for Missing and Exploited Children (NCMEC), which OnlyFans told Reuters has “full access” to monitor content on the platform.
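To put OnlyFans’ figure in rough perspective, a quick calculation; the 300 million post count below is purely an illustrative assumption standing in for the “hundreds of millions of posts” the company cites:

```python
# Rough perspective on OnlyFans' reported 2023 takedown figure.
# The total post count is an assumption for illustration only.
removed_posts = 347
assumed_total_posts = 300_000_000

# Under that assumption, roughly one post per ~865,000 was removed
# as suspected CSAM.
posts_per_removal = assumed_total_posts // removed_posts
print(f"1 removal per {posts_per_removal:,} posts")
```

The experts quoted by Reuters argue the real question is whether that low rate reflects genuinely rare abuse or the difficulty of detecting material behind millions of individual paywalls.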

However, that intensified monitoring seems to have only just begun. NCMEC just got access to OnlyFans in late 2023, the child safety group told Reuters. And NCMEC seemingly can’t scan the entire platform at once, telling Reuters that its access was “limited” exclusively “to OnlyFans accounts reported to its CyberTipline or connected to a missing child case.”

Similarly, OnlyFans told Reuters that police do not have to subscribe to investigate a creator’s posts, but the platform only grants free access to accounts when there’s an active investigation. That means once police suspect that CSAM is being exchanged on an account, they get “full access” to review “account details, content, and direct messages,” Reuters reported.

But that access doesn’t aid police hoping to uncover CSAM shared on accounts not yet flagged for investigation. That’s a problem, a Reuters investigation found, because it’s easy for creators to make a new account, where bad actors can mask their identities to avoid OnlyFans’ “controls meant to hold account holders responsible for their own content,” one detective, Edward Scoggins, told Reuters.

Evading OnlyFans’ CSAM detection seems easy

OnlyFans told Reuters that “would-be creators must provide at least nine pieces of personally identifying information and documents, including bank details, a selfie while holding a government photo ID, and—in the United States—a Social Security number.”

“All this is verified by human judgment and age-estimation technology that analyzes the selfie,” OnlyFans told Reuters. On OnlyFans’ site, the platform further explained that “we continuously scan our platform to prevent the posting of CSAM. All our content moderators are trained to identify and swiftly report any suspected CSAM.”

However, Reuters found that none of these controls worked 100 percent of the time to stop bad actors from sharing CSAM. And the same seemingly holds true for some minors motivated to post their own explicit content. One girl told Reuters that she evaded age verification first by using an adult’s driver’s license to sign up, then by taking over an account of an adult user.

An OnlyFans spokesperson told Ars that the low amount of CSAM reported to NCMEC is a “testament to the rigorous safety controls OnlyFans has in place.”

“OnlyFans is proud of the work we do to aggressively target, report, and support the investigations and prosecutions of anyone who seeks to abuse our platform in this way,” OnlyFans’ spokesperson told Ars. “Unlike many other platforms, the lack of anonymity and absence of end-to-end encryption on OnlyFans means that reports are actionable by law enforcement and prosecutors.”

AI trains on kids’ photos even when parents use strict privacy settings

“Outrageous” —

Even unlisted YouTube videos are used to train AI, watchdog warns.

Human Rights Watch (HRW) continues to reveal how photos of real children casually posted online years ago are being used to train AI models powering image generators—even when platforms prohibit scraping and families use strict privacy settings.

Last month, HRW researcher Hye Jung Han found 170 photos of Brazilian kids that were linked in LAION-5B, a popular AI dataset built from Common Crawl snapshots of the public web. Now, she has released a second report, flagging 190 photos of children from all of Australia’s states and territories, including Indigenous children who may be particularly vulnerable to harms.

These photos are linked in the dataset “without the knowledge or consent of the children or their families.” They span the entirety of childhood, making it possible for AI image generators to generate realistic deepfakes of real Australian children, Han’s report said. Perhaps even more concerning, the URLs in the dataset sometimes reveal identifying information about children, including their names and locations where photos were shot, making it easy to track down children whose images might not otherwise be discoverable online.

That puts children in danger of privacy and safety risks, Han said, and some parents thinking they’ve protected their kids’ privacy online may not realize that these risks exist.

From a single link to one photo that showed “two boys, ages 3 and 4, grinning from ear to ear as they hold paintbrushes in front of a colorful mural,” Han could trace “both children’s full names and ages, and the name of the preschool they attend in Perth, in Western Australia.” And perhaps most disturbingly, “information about these children does not appear to exist anywhere else on the Internet”—suggesting that families were particularly cautious in shielding these boys’ identities online.

Stricter privacy settings were used in another image that Han found linked in the dataset. The photo showed “a close-up of two boys making funny faces, captured from a video posted on YouTube of teenagers celebrating” during the week after their final exams, Han reported. Whoever posted that YouTube video adjusted privacy settings so that it would be “unlisted” and would not appear in searches.

Only someone with a link to the video was supposed to have access, but that didn’t stop Common Crawl from archiving the image, nor did YouTube policies prohibiting AI scraping or harvesting of identifying information.

Reached for comment, YouTube’s spokesperson, Jack Malon, told Ars that YouTube has “been clear that the unauthorized scraping of YouTube content is a violation of our Terms of Service, and we continue to take action against this type of abuse.” But Han worries that even if YouTube did join efforts to remove images of children from the dataset, the damage has been done, since AI tools have already trained on them. That’s why—even more than parents need tech companies to up their game blocking AI training—kids need regulators to intervene and stop training before it happens, Han’s report said.

Han’s report comes a month before Australia is expected to release a reformed draft of the country’s Privacy Act. Those reforms include a draft of Australia’s first child data protection law, known as the Children’s Online Privacy Code, but Han told Ars that even people involved in long-running discussions about reforms aren’t “actually sure how much the government is going to announce in August.”

“Children in Australia are waiting with bated breath to see if the government will adopt protections for them,” Han said, emphasizing in her report that “children should not have to live in fear that their photos might be stolen and weaponized against them.”

AI uniquely harms Australian kids

To hunt down the photos of Australian kids, Han “reviewed fewer than 0.0001 percent of the 5.85 billion images and captions contained in the data set.” Because her sample was so small, Han expects that her findings represent a significant undercount of how many children could be impacted by the AI scraping.

“It’s astonishing that out of a random sample size of about 5,000 photos, I immediately fell into 190 photos of Australian children,” Han told Ars. “You would expect that there would be more photos of cats than there are personal photos of children,” since LAION-5B is a “reflection of the entire Internet.”
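Han’s sampling figures hold up to a quick arithmetic check (our own back-of-the-envelope math, using only the dataset and sample sizes quoted above):

```python
# Sanity-check the sampling math quoted from HRW's report.
dataset_size = 5_850_000_000   # images/captions in LAION-5B
sample_size = 5_000            # photos Han reviewed
flagged = 190                  # photos of Australian children she found

# Fraction of the dataset reviewed, as a percentage
percent_reviewed = sample_size / dataset_size * 100
print(f"{percent_reviewed:.7f}%")      # well under the 0.0001% stated

# Hit rate within the sample
print(f"{flagged / sample_size:.1%}")  # 3.8% of the sample
```

A 3.8 percent hit rate in a sample covering less than a millionth of the dataset is why Han expects her findings to be a significant undercount.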

LAION is working with HRW to remove links to all the images flagged, but cleaning up the dataset does not seem to be a fast process. Han told Ars that based on her most recent exchange with the German nonprofit, LAION had not yet removed links to photos of Brazilian kids that she reported a month ago.

LAION declined Ars’ request for comment.

In June, LAION’s spokesperson, Nathan Tyler, told Ars that, “as a nonprofit, volunteer organization,” LAION is committed to doing its part to help with the “larger and very concerning issue” of misuse of children’s data online. But removing links from the LAION-5B dataset does not remove the images online, Tyler noted, where they can still be referenced and used in other AI datasets, particularly those relying on Common Crawl. And Han pointed out that removing the links from the dataset doesn’t change AI models that have already trained on them.

“Current AI models cannot forget data they were trained on, even if the data was later removed from the training data set,” Han’s report said.

Kids whose images are used to train AI models are exposed to a variety of harms, Han reported, including a risk that image generators could more convincingly create harmful or explicit deepfakes. In Australia last month, “about 50 girls from Melbourne reported that photos from their social media profiles were taken and manipulated using AI to create sexually explicit deepfakes of them, which were then circulated online,” Han reported.

For First Nations children—”including those identified in captions as being from the Anangu, Arrernte, Pitjantjatjara, Pintupi, Tiwi, and Warlpiri peoples”—the inclusion of links to photos threatens unique harms. Because culturally, First Nations peoples “restrict the reproduction of photos of deceased people during periods of mourning,” Han said the AI training could perpetuate harms by making it harder to control when images are reproduced.

Once an AI model trains on the images, there are other obvious privacy risks, including a concern that AI models are “notorious for leaking private information,” Han said. Guardrails added to image generators do not always prevent these leaks, with some tools “repeatedly broken,” Han reported.

LAION recommends that parents troubled by the privacy risks remove images of their kids from the web, calling that the most effective way to prevent abuse. But Han told Ars that’s “not just unrealistic, but frankly, outrageous.”

“The answer is not to call for children and parents to remove wonderful photos of kids online,” Han said. “The call should be [for] some sort of legal protections for these photos, so that kids don’t have to always wonder if their selfie is going to be abused.”

SCOTUS agrees to review Texas law that caused Pornhub to leave the state

The US Supreme Court today agreed to hear a challenge to the Texas law that requires age verification on porn sites. A list of orders released this morning shows that the court granted a petition for certiorari filed by the Free Speech Coalition, an adult-industry lobby group.

In March, the US Court of Appeals for the 5th Circuit ruled that Texas could continue enforcing the law while litigation continues. In a 2-1 decision, 5th Circuit judges wrote that “the age-verification requirement is rationally related to the government’s legitimate interest in preventing minors’ access to pornography. Therefore, the age-verification requirement does not violate the First Amendment.”

The dissenting judge faulted the 5th Circuit majority for reviewing the law under the “rational-basis” standard instead of the more stringent strict scrutiny. The Supreme Court “has unswervingly applied strict scrutiny to content-based regulations that limit adults’ access to protected speech,” Judge Patrick Higginbotham wrote at the time.

Though the 5th Circuit majority upheld the age-verification rule, it also found that a requirement to display health warnings about pornography “unconstitutionally compel[s] speech” and cannot be enforced.

While the Supreme Court could eventually overturn the age-verification law, it is being enforced in the meantime. In April, the Supreme Court declined a request to temporarily block the Texas law.

Pornhub disabled site in Texas

After losing that April decision, the Free Speech Coalition said: “[We] remain hopeful that the Supreme Court will grant our petition for certiorari and reaffirm its lengthy line of cases applying strict scrutiny to content-based restrictions on speech like those in the Texas statute we’ve challenged.”

The Texas law, which took effect in September 2023, applies to websites in which more than one-third of the content “is sexual material harmful to minors.” Those websites must “use reasonable age verification methods” to limit their material to adults.

In February 2024, Texas Attorney General Ken Paxton alleged in a lawsuit that Pornhub owner Aylo (formerly MindGeek) violated the law. Pornhub disabled its website in Texas after the 5th Circuit ruling and has gone dark in other states in response to similar age laws.

The Free Speech Coalition’s petition for certiorari said that the Supreme Court “has repeatedly held that States may rationally restrict minors’ access to sexual materials, but such restrictions must withstand strict scrutiny if they burden adults’ access to constitutionally protected speech.” The group asked the court to determine whether the 5th Circuit “erred as a matter of law in applying rational-basis review to a law burdening adults’ access to protected speech, instead of strict scrutiny as this Court and other circuits have consistently done.”

“While purportedly seeking to limit minors’ access to online sexual content, the Act imposes significant burdens on adults’ access to constitutionally protected expression,” the petition said. “Of central relevance here, it requires every user, including adults, to submit personally identifying information to access sensitive, intimate content over a medium—the Internet—that poses unique security and privacy concerns.”

Biden rushes to avert labor shortage with CHIPS Act funding for workers

Less than one month to apply —

To dodge labor shortage, US finally aims CHIPS Act funding at training workers.

US President Joe Biden (C) speaks during a tour of the TSMC Semiconductor Manufacturing Facility in Phoenix, Arizona, on December 6, 2022.

In the hopes of dodging a significant projected worker shortage in the next few years, the Biden administration will finally start funding workforce development projects to support America’s ambitions to become the world’s leading chipmaker through historic CHIPS and Science Act investments.

The Workforce Partner Alliance (WFPA) will be established through the CHIPS Act’s first round of funding focused on workers, officials confirmed in a press release. The program is designed to “focus on closing workforce and skills gaps in the US for researchers, engineers, and technicians across semiconductor design, manufacturing, and production,” a program requirements page said.

Bloomberg reported that the US risks a technician shortage reaching 90,000 by 2030. This differs slightly from Natcast’s forecast, which found that out of “238,000 jobs the industry is projected to create by 2030,” the semiconductor industry “will be unable to fill more than 67,000.”

Whatever industry demand actually turns out to be, with tens of thousands of projected openings just as the country hopes to produce more chips than ever, the Biden administration wants to quickly train enough workers to fill roles for “researchers, engineers, and technicians across semiconductor design, manufacturing, and production,” a WFPA site said.

To do this, a “wide range of workforce solution providers” are encouraged to submit “high-impact” WFPA project proposals that can be completed within two years, with total budgets of between $500,000 and $2 million per award, the press release said.

Examples of “evidence-based workforce development strategies and methodologies that may be considered for this program” include registered apprenticeship and pre-apprenticeship programs, colleges or universities offering semiconductor industry-relevant degrees, programs combining on-the-job training with effective education or mentorship, and “experiential learning opportunities such as co-ops, externships, internships, or capstone projects.” While programs supporting construction activities will not be considered, programs designed to “reduce barriers” to entry in the semiconductor industry can use funding to support workers’ training, such as for providing childcare or transportation for workers.

“Making investments in the US semiconductor workforce is an opportunity to serve underserved communities, to connect individuals to good-paying sustainable jobs across the country, and to develop a robust workforce ecosystem that supports an industry essential to the national and economic security of the US,” Natcast said.

Between four and 10 projects will be selected, providing opportunities for “established programs with a track record of success seeking to scale,” as well as for newer programs “that meet a previously unaddressed need, opportunity, or theory of change” to be launched or substantially expanded.

The deadline to apply for funding is July 26, which gives applicants less than one month to get their proposals together. Applicants must have a presence in the US but can include for-profit organizations, accredited education institutions, training programs, state and local government agencies, and nonprofit organizations, Natcast’s eligibility requirements said.

Natcast—the nonprofit entity created to operate the National Semiconductor Technology Center Consortium—will manage the WFPA. An FAQ will be provided soon, Natcast said, but in the meantime, the organization is giving a brief window to submit questions about the program. Curious applicants can send questions to wfpa2024@natcast.org until 11:59 pm ET on July 9.

Awardees will be notified by early fall, Natcast said.

Planning the future of US chip workforce

In Natcast’s press release, Deirdre Hanford, Natcast’s CEO, said that the WFPA will “accelerate progress in the US semiconductor industry by tackling its most critical challenges, including the need for a highly skilled workforce that can meet the evolving demands of the industry.”

And the senior manager of Natcast’s workforce development programs, Michael Barnes, said that the WFPA will be critical to accelerating the industry’s growth in the US.

“It is imperative that we develop a domestic semiconductor workforce ecosystem that can support the industry’s anticipated growth and strengthen American national security, economic prosperity, and global competitiveness,” Barnes said.

Supreme Court vacates rulings on Texas and Florida social media laws

The Supreme Court of the United States in Washington, DC, in May 2023.

Getty Images | NurPhoto

The US Supreme Court has avoided making a final decision on challenges to the Texas and Florida social media laws, but the majority opinion written by Justice Elena Kagan criticized the Texas law and made it clear that content moderation is protected by the First Amendment.

The Texas law “is unlikely to withstand First Amendment scrutiny,” the Supreme Court majority wrote. “Texas has thus far justified the law as necessary to balance the mix of speech on Facebook’s News Feed and similar platforms; and the record reflects that Texas officials passed it because they thought those feeds skewed against politically conservative voices. But this Court has many times held, in many contexts, that it is no job for government to decide what counts as the right balance of private expression—to ‘un-bias’ what it thinks biased, rather than to leave such judgments to speakers and their audiences. That principle works for social-media platforms as it does for others.”

A Big Tech lobby group that challenged the state laws said it was pleased by the ruling. “In a complex series of opinions that were unanimous in the outcome, but divided 6-3 in their reasoning, the Court sent the cases back to lower courts, making clear that a State may not interfere with private actors’ speech,” the Computer & Communications Industry Association said.

Today’s Supreme Court ruling vacated decisions by two courts. The US Court of Appeals for the 5th Circuit previously upheld the Texas state law that prohibits large social media companies from moderating posts based on a user’s “viewpoint.” By contrast, the US Court of Appeals for the 11th Circuit blocked a Florida law that prohibits large social media sites from banning politicians and requires platforms to “apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform.”

Lower courts failed to do full analysis

The Supreme Court said it remanded the cases to the appeals courts because the courts didn’t do a full analysis of the laws’ effects. “Today, we vacate both decisions for reasons separate from the First Amendment merits, because neither Court of Appeals properly considered the facial nature of [tech industry lobby group] NetChoice’s challenge,” the court majority wrote.

Justices found that the lower courts focused too much on the biggest platforms, like Facebook and YouTube, without considering the wider effects of the laws. The majority wrote:

The courts mainly addressed what the parties had focused on. And the parties mainly argued these cases as if the laws applied only to the curated feeds offered by the largest and most paradigmatic social-media platforms—as if, say, each case presented an as-applied challenge brought by Facebook protesting its loss of control over the content of its News Feed. But argument in this Court revealed that the laws might apply to, and differently affect, other kinds of websites and apps. In a facial challenge, that could well matter, even when the challenge is brought under the First Amendment.

The courts need to examine ways in which the laws might affect “how an email provider like Gmail filters incoming messages, how an online marketplace like Etsy displays customer reviews, how a payment service like Venmo manages friends’ financial exchanges, or how a ride-sharing service like Uber runs,” justices wrote.

Meta defends charging fee for privacy amid showdown with EU

Meta continues to hit walls with its heavily scrutinized plan to comply with the European Union’s strict online competition law, the Digital Markets Act (DMA), by offering Facebook and Instagram subscriptions as an alternative for privacy-inclined users who want to opt out of ad targeting.

Today, the European Commission (EC) announced preliminary findings that Meta’s so-called “pay or consent” or “pay or OK” model—which gives users a choice to either pay for access to its platforms or give consent to collect user data to target ads—is not compliant with the DMA.

According to the EC, Meta’s advertising model violates the DMA in two ways. First, it “does not allow users to opt for a service that uses less of their personal data but is otherwise equivalent to the ‘personalized ads-based service.’” And second, it “does not allow users to exercise their right to freely consent to the combination of their personal data,” the press release said.

Now, Meta will have a chance to review the EC’s evidence and defend its policy, with today’s findings kicking off a process that will take months. The EC’s investigation is expected to conclude next March. Thierry Breton, the commissioner for the internal market, said in the press release that the preliminary findings represent “another important step” to ensure Meta’s full compliance with the DMA.

“The DMA is there to give back to the users the power to decide how their data is used and ensure innovative companies can compete on equal footing with tech giants on data access,” Breton said.

A Meta spokesperson told Ars that Meta plans to fight the findings—which could trigger fines up to 10 percent of the company’s worldwide turnover, as well as fines up to 20 percent for repeat infringement if Meta loses.

Meta continues to claim that its “subscription for no ads” model was “endorsed” by the highest court in Europe, the Court of Justice of the European Union (CJEU), last year.

“Subscription for no ads follows the direction of the highest court in Europe and complies with the DMA,” Meta’s spokesperson said. “We look forward to further constructive dialogue with the European Commission to bring this investigation to a close.”

However, some critics have noted that the supposed endorsement was not an official part of the ruling and that particular case was not regarding DMA compliance.

The EC agreed that more talks were needed, writing in the press release, “the Commission continues its constructive engagement with Meta to identify a satisfactory path towards effective compliance.”

Appeals court seems lost on how Internet Archive harms publishers

Deciding “the future of books” —

Appeals court decision potentially reversing publishers’ suit may come this fall.

The Internet Archive (IA) went before a three-judge panel Friday to defend its open library’s controlled digital lending (CDL) practices after book publishers last year won a lawsuit claiming that the archive’s lending violated copyright law.

In the weeks ahead of IA’s efforts to appeal that ruling, IA was forced to remove 500,000 books from its collection, shocking users. In an open letter to publishers, more than 30,000 readers, researchers, and authors begged for access to the books to be restored in the open library, claiming the takedowns dealt “a serious blow to lower-income families, people with disabilities, rural communities, and LGBTQ+ people, among many others,” who may not have access to a local library or feel “safe accessing the information they need in public.”

During a press briefing following arguments in court Friday, IA founder Brewster Kahle said that “those voices weren’t being heard.” Judges appeared primarily focused on understanding how IA’s digital lending potentially hurts publishers’ profits in the ebook licensing market, rather than on how publishers’ costly ebook licensing potentially harms readers.

However, lawyers representing IA—Joseph C. Gratz, from the law firm Morrison Foerster, and Corynne McSherry, from the nonprofit Electronic Frontier Foundation—confirmed that the judges were highly engaged by IA’s defense. Arguments initially scheduled to last only 20 minutes instead stretched on for an hour and a half. Ultimately, the judges decided not to rule from the bench, with a decision expected in the coming months or potentially next year. McSherry said the judges’ engagement showed that they “get it” and won’t make the decision without careful consideration of both sides.

“They understand this is an important decision,” McSherry said. “They understand that there are real consequences here for real people. And they are taking their job very, very seriously. And I think that’s the best that we can hope for, really.”

On the other side, the Association of American Publishers (AAP), the trade organization behind the lawsuit, provided little insight into how the day went. When reached for comment, AAP simply said, “We thought it was a strong day in court, and we look forward to the opinion.”

Decision could come early fall

According to Gratz, most of the questions for IA focused on “how to think about the situation where a particular book is available” from the open library and also available as an ebook that a library can license. Judges said they did not know how to think about “a situation where the publishers just haven’t come forward with any data showing that this has an impact,” Gratz said.

One audience member at the press briefing noted that the judges instead floated hypotheticals, like “if every single person in the world made a copy of a hypothetical thing, could hypothetically this affect the publishers’ revenue.”

McSherry said this was a common tactic when judges must weigh the facts while knowing that their decision will set an important precedent. However, IA has shown evidence, Gratz said, that even if IA provided limitless loans of digitized physical copies, “CDL doesn’t cause any economic harm to publishers, or authors,” and “there was absolutely no evidence of any harm of that kind that the publishers were able to bring forward.”

McSherry said that IA pushed back on claims that IA behaves like “pirates” when digitally lending books, with critics sometimes comparing the open library to illegal file-sharing networks. Instead, McSherry said that CDL provides a path to “meet readers where they are,” allowing IA to loan books that it owns to one user at a time no matter where in the world they are located.

“It’s not unlawful for a library to lend a book it owns to one patron at a time,” Gratz said IA told the court. “And the advent of digital technology doesn’t change that result. That’s lawful. And that’s what librarians do.”

In the open letter, IA fans pointed out that many IA readers were “in underserved communities where access is limited” to quality library resources. Being suddenly cut off from accessing nearly half a million books has “far-reaching implications,” they argued, removing access to otherwise inaccessible “research materials and literature that support their learning and academic growth.”

IA has argued that because copyright law is intended to provide equal access to knowledge, it is better served by allowing IA’s lending than by preventing it. The archive is hoping the judges will decide that CDL is fair use, reversing the lower court’s decision and restoring access to the books recently removed from the open library. But Gratz said there’s no telling yet when that decision will come.

“There is no deadline for them to make a decision,” Gratz said, but it “probably won’t happen until early fall” at the earliest. After that, whichever side loses will have an opportunity to appeal the case, which has already stretched on for four years, to the Supreme Court. Since neither side seems prepared to back down, the Supreme Court eventually weighing in seems inevitable.

McSherry seemed optimistic that the judges at least understood the stakes for IA readers, noting that fair use is “designed to ensure that copyright actually serves the public interest,” not publishers’. Should the court decide otherwise, McSherry warned, the court risks allowing “a few powerful publishers” to “hijack the future of books.”

When IA first appealed, Kahle put out a statement saying IA couldn’t walk away from “a fight to keep library books available for those seeking truth in the digital age.”



SCOTUS kills Chevron deference, giving courts more power to block federal rules

Supreme Court Chief Justice John Roberts and Associate Justice Sonia Sotomayor arrive for President Joe Biden’s State of the Union address on March 7, 2024, in Washington, DC. Credit: Getty Images | Win McNamee

The US Supreme Court today overturned the 40-year-old Chevron precedent in a ruling that limits the regulatory authority of federal agencies. The 6-3 decision in Loper Bright Enterprises v. Raimondo will make it harder for agencies such as the Federal Communications Commission and Environmental Protection Agency to issue regulations without explicit authorization from Congress.

Chief Justice John Roberts delivered the opinion of the court and was joined by Clarence Thomas, Samuel Alito, Neil Gorsuch, Brett Kavanaugh, and Amy Coney Barrett. Justice Elena Kagan filed a dissenting opinion that was joined by Sonia Sotomayor and Ketanji Brown Jackson.

Chevron gave agencies leeway to interpret ambiguous laws as long as the agency’s conclusion was reasonable. But the Roberts court said that a “statutory ambiguity does not necessarily reflect a congressional intent that an agency, as opposed to a court, resolve the resulting interpretive question.”

“Perhaps most fundamentally, Chevron’s presumption is misguided because agencies have no special competence in resolving statutory ambiguities. Courts do,” the ruling said. “The Framers anticipated that courts would often confront statutory ambiguities and expected that courts would resolve them by exercising independent legal judgment. Chevron gravely erred in concluding that the inquiry is fundamentally different just because an administrative interpretation is in play.”

This is especially critical “when the ambiguity is about the scope of an agency’s own power—perhaps the occasion on which abdication in favor of the agency is least appropriate,” the court said. The Roberts opinion also said the Administrative Procedure Act “specifies that courts, not agencies, will decide ‘all relevant questions of law’ arising on review of agency action—even those involving ambiguous laws,” and “prescribes no deferential standard for courts to employ in answering those legal questions.”

Kagan: SCOTUS majority now “administrative czar”

The Loper Bright case involved a challenge to a rule enforced by the National Marine Fisheries Service. Lower courts applied the Chevron framework when ruling in favor of the government.

Kagan’s dissent said that Chevron “has become part of the warp and woof of modern government, supporting regulatory efforts of all kinds—to name a few, keeping air and water clean, food and drugs safe, and financial markets honest.”

Ambiguities should generally be resolved by agencies instead of courts, Kagan wrote. “This Court has long understood Chevron deference to reflect what Congress would want, and so to be rooted in a presumption of legislative intent. Congress knows that it does not—in fact cannot—write perfectly complete regulatory statutes. It knows that those statutes will inevitably contain ambiguities that some other actor will have to resolve, and gaps that some other actor will have to fill. And it would usually prefer that actor to be the responsible agency, not a court,” the dissent said.

The Roberts court ruling “flips the script: It is now ‘the courts (rather than the agency)’ that will wield power when Congress has left an area of interpretive discretion,” Kagan wrote. “A rule of judicial humility gives way to a rule of judicial hubris.”

Kagan wrote that the court in recent years “has too often taken for itself decision-making authority Congress assigned to agencies,” substituting “its own judgment on workplace health for that of the Occupational Safety and Health Administration; its own judgment on climate change for that of the Environmental Protection Agency; and its own judgment on student loans for that of the Department of Education.”

Apparently deciding those previous decisions were “too piecemeal,” the court “majority today gives itself exclusive power over every open issue—no matter how expertise-driven or policy-laden—involving the meaning of regulatory law,” Kagan wrote. “As if it did not have enough on its plate, the majority turns itself into the country’s administrative czar. It defends that move as one (suddenly) required by the (nearly 80-year-old) Administrative Procedure Act. But the Act makes no such demand. Today’s decision is not one Congress directed. It is entirely the majority’s choice.”

The unanimous 1984 SCOTUS ruling in Chevron U.S.A. Inc. v. Natural Resources Defense Council involved the Environmental Protection Agency and air pollution rules. Even with Chevron deference in place, the EPA faced limits to its regulatory power. A Supreme Court ruling earlier this week imposed a stay on rules meant to limit the spread of ozone-forming pollutants across state lines.

Consumer advocacy group Public Knowledge criticized today’s ruling, saying that it “grounds judicial superiority over the legislative and executive branches by declaring that the Constitution requires judges to unilaterally decide the meaning of statutes written by Congress and entrusted to agencies.”

Public Knowledge Senior VP Harold Feld argued that after today’s ruling, “no consumer protection is safe. Even if Congress can write with such specificity that a court cannot dispute its plain meaning, Congress will need to change the law for every new technology and every change in business practice. Even at the best of times, it would be impossible for Congress to keep up. Given the dysfunction of Congress today, we are at the mercy of the whims of the Imperial Court.”



Tesla says Model 3 that burst into flames in fatal tree crash wasn’t defective


Tesla has denied that “any defect in the Autopilot system caused or contributed” to the 2022 death of a Tesla employee, Hans von Ohain, whose Tesla Model 3 burst into flames after the car suddenly veered off a road and crashed into a tree.

“Von Ohain fought to regain control of the vehicle, but, to his surprise and horror, his efforts were prevented by the vehicle’s Autopilot features, leaving him helpless and unable to steer back on course,” a wrongful death lawsuit filed in May by von Ohain’s wife, Nora Bass, alleged.

In Tesla’s response to the lawsuit filed Thursday, the carmaker also denied that the 2021 vehicle had any defects, contradicting Bass’ claims that Tesla knew that the car should have been recalled but chose to “prioritize profits over consumer safety.”

As detailed in her complaint, initially filed in a Colorado state court, Bass believes the Tesla Model 3 was defective in that it “did not perform as safely as an ordinary consumer would have expected it to perform” and “the benefits of the vehicle’s design did not outweigh the risks.”

Instead of acknowledging alleged defects and exploring alternative designs, Tesla marketed the car as being engineered “to be the safest” car “built to date,” Bass’ complaint said.

Von Ohain was particularly susceptible to this marketing, Bass has said, because he considered Tesla CEO Elon Musk to be a “brilliant man,” The Washington Post reported. “We knew the technology had to learn, and we were willing to be part of that,” Bass said, but the couple didn’t realize how allegedly dangerous it could be to help train “futuristic technology,” The Post reported.

In Tesla’s response, the carmaker defended its marketing of the Tesla Model 3, denying that the company “engaged in unfair and deceptive acts or practices.”

“The product in question was not defective or unreasonably dangerous,” Tesla’s filing said.

Insisting in its response that the vehicle was safe when it was sold, Tesla again disputed Bass’ complaint, which claimed that “at no time after the purchase of the 2021 Tesla Model 3 did any person alter, modify, or change any aspect or component of the vehicle’s design or manufacture.” Contradicting this, Tesla suggested that the car “may not have been in the same condition at the time of the crash as it was at the time when it left Tesla’s custody.”

The Washington Post broke the story about von Ohain’s fatal crash, reporting that it may be “the first documented fatality linked to the most advanced driver assistance technology offered” by Tesla. In response to Tesla’s filing, Bass’ attorney, Jonathan Michaels, told The Post that his team is “committed to advocating fiercely for the von Ohain family, ensuring they receive the justice they deserve.”

Michaels told The Post that, perhaps as significantly as the alleged autonomous driving flaws, the Tesla Model 3 was also allegedly defective “because of the intensity of the fire that ensued after von Ohain hit the tree, which ultimately caused his death.” According to the Colorado police officer looking into the crash, Robert Madden, the vehicle fire was among “the most intense” he’d ever investigated, The Post reported.

Lawyers for Bass and Tesla did not immediately respond to Ars’ request for comment.



Brussels explores antitrust probe into Microsoft’s partnership with OpenAI

still asking questions —

EU executive arm drops merger review into US tech companies’ alliance.

EU competition chief Margrethe Vestager said the bloc was looking into practices that could in effect lead to a company controlling a greater share of the AI market.

Brussels is preparing for an antitrust investigation into Microsoft’s $13 billion investment in OpenAI, after the European Union decided not to proceed with a merger review into the most powerful alliance in the artificial intelligence industry.

The European Commission, the EU’s executive arm, began to explore a review under merger control rules in January, but on Friday announced that it would not proceed due to a lack of evidence that Microsoft controls OpenAI.

However, the commission said it was now exploring the possibility of a traditional antitrust investigation into whether the tie-up between the world’s most valuable listed company and the best-funded AI start-up was harming competition in the fast-growing market.

The commission has also made inquiries about Google’s deal with Samsung to install a modified version of its Gemini AI system in the South Korean manufacturer’s smartphones, it revealed on Friday.

Margrethe Vestager, the bloc’s competition chief, said in a speech on Friday: “The key question was whether Microsoft had acquired control on a lasting basis over OpenAI. After a thorough review we concluded that such was not the case. So we are closing this chapter, but the story is not over.”

She said the EU had sent a new set of questions to understand whether “certain exclusivity clauses” in the agreement between Microsoft and OpenAI “could have a negative effect on competitors.” The move is seen as a key step toward a formal antitrust probe.

The bloc had already sent questions to Microsoft and other tech companies in March to determine whether market concentration in AI could potentially block new companies from entering the market, Vestager said.

Microsoft said: “We appreciate the European Commission’s thorough review and its conclusion that Microsoft’s investment and partnership with OpenAI does not give Microsoft control over the company.”

Brussels began examining Microsoft’s relationship with the ChatGPT maker after OpenAI’s board abruptly dismissed its chief executive Sam Altman in November 2023, only to rehire him a few days later. Altman had briefly agreed to join Microsoft as the head of a new AI research unit, highlighting the close relationship between the two companies.

Regulators in the US and UK are also scrutinizing the alliance. Microsoft is the biggest backer of OpenAI, although its investment of up to $13 billion, which was expanded in January 2023, does not involve acquiring conventional equity due to the startup’s unusual corporate structure. Microsoft has a minority interest in OpenAI’s commercial subsidiary, which is owned by a not-for-profit organization.

Antitrust investigations tend to last years, compared with a much shorter period for merger reviews, and they focus on conduct that could be undermining rivals. Companies that are eventually found to be breaking the law, for example by bundling products or blocking competitors from access to key technology, risk hefty fines and legal obligations to change their behavior.

Vestager said the EU was looking into practices that could in effect lead to a company controlling a greater share of the AI market. She pointed to a practice called “acqui-hires,” where a company buys another one mainly to get its talent. For example, Microsoft recently struck a deal to hire most of the top team from AI start-up Inflection, in which it had previously invested. Inflection remains an independent company, however, complicating any traditional merger investigation.

The EU’s competition chief said regulators were also looking into the way big tech companies may be preventing smaller AI models from reaching users.

“This is why we are also sending requests for information to better understand the effects of Google’s arrangement with Samsung to pre-install its small model ‘Gemini nano’ on certain Samsung devices,” said Vestager.

Jonathan Kanter, the top US antitrust enforcer, told the Financial Times earlier this month that he was also examining “monopoly choke points and the competitive landscape” in AI. The UK’s Competition and Markets Authority said in December that it had “decided to investigate” the Microsoft-OpenAI deal when it invited comments from customers and rivals.

© 2024 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.



Shopping app Temu is “dangerous malware,” spying on your texts, lawsuit claims

“Cleverly hidden spyware” —

Temu “surprised” by the lawsuit, plans to “vigorously defend” itself.

A person is holding a package from Temu.

Temu—the Chinese shopping app that has rapidly grown so popular in the US that even Amazon is reportedly trying to copy it—is “dangerous malware” that’s secretly monetizing a broad swath of unauthorized user data, Arkansas Attorney General Tim Griffin alleged in a lawsuit filed Tuesday.

Griffin cited research and media reports exposing Temu’s allegedly nefarious design, which “purposely” allows Temu to “gain unrestricted access to a user’s phone operating system, including, but not limited to, a user’s camera, specific location, contacts, text messages, documents, and other applications.”

“Temu is designed to make this expansive access undetected, even by sophisticated users,” Griffin’s complaint said. “Once installed, Temu can recompile itself and change properties, including overriding the data privacy settings users believe they have in place.”

Griffin fears that Temu is capable of accessing virtually all data on a person’s phone, exposing both users and non-users to extreme privacy and security risks. It appears that anyone texting or emailing someone with the shopping app installed risks Temu accessing private data, Griffin’s suit claimed, which Temu then allegedly monetizes by selling it to third parties, “profiting at the direct expense” of users’ privacy rights.

“Compounding” risks is the possibility that Temu’s Chinese owners, PDD Holdings, are legally obligated to share data with the Chinese government, the lawsuit said, due to Chinese “laws that mandate secret cooperation with China’s intelligence apparatus regardless of any data protection guarantees existing in the United States.”

Griffin’s suit cited an extensive forensic investigation into Temu by Grizzly Research—which analyzes publicly traded companies to inform investors—last September. In their report, Grizzly Research alleged that PDD Holdings is a “fraudulent company” and that “Temu is cleverly hidden spyware that poses an urgent security threat to United States national interests.”

As Griffin sees it, Temu baits users with misleading promises of discounted, quality goods, angling to get access to as much user data as possible by adding addictive features that keep users logged in, like spinning a wheel for deals. Meanwhile, hundreds of complaints to the Better Business Bureau showed that Temu’s goods are actually low-quality, Griffin alleged, apparently supporting his claim that Temu’s end goal isn’t to be the world’s biggest shopping platform but to steal data.

Investigators agreed, the lawsuit said, concluding “we strongly suspect that Temu is already, or intends to, illegally sell stolen data from Western country customers to sustain a business model that is otherwise doomed for failure.”

Seeking an injunction to stop Temu from allegedly spying on users, Griffin is hoping a jury will find that Temu’s alleged practices violated the Arkansas Deceptive Trade Practices Act (ADTPA) and the Arkansas Personal Information Protection Act. If Temu loses, it could be on the hook for $10,000 per violation of the ADTPA and ordered to disgorge profits from data sales and deceptive sales on the app.

Temu “surprised” by lawsuit

The company that owns Temu, PDD Holdings, was founded in 2015 by a former Google employee, Colin Huang. It was originally based in China, but after security concerns were raised, the company relocated its “principal executive offices” to Ireland, Griffin’s complaint said. This, Griffin suggested, was intended to distance the company from debate over national security risks posed by China, but because the majority of its business operations remain in China, risks allegedly remain.

PDD Holdings’ relocation came amid heightened scrutiny of Pinduoduo, the Chinese app on which Temu’s shopping platform is based. Last year, Pinduoduo came under fire for privacy and security risks that got the app suspended from Google Play as suspected malware. Experts said Pinduoduo took security and privacy risks “to the next level,” the lawsuit said. And “around the same time,” Apple’s App Store also flagged Temu’s data privacy terms as misleading, further heightening scrutiny of two of PDD Holdings’ biggest apps, the complaint noted.

Researchers found that Pinduoduo “was programmed to bypass users’ cell phone security in order to monitor activities on other apps, check notifications, read private messages, and change settings,” the lawsuit said. “It also could spy on competitors by tracking activity on other shopping apps and getting information from them,” as well as “run in the background and prevent itself from being uninstalled.” The motivation behind the malicious design was apparently “to boost sales.”

According to Griffin, the same concerns that got Pinduoduo suspended last year remain today for Temu users, but the App Store and Google Play have allegedly failed to take action to prevent unauthorized access to user data. Within a year of Temu’s launch, the “same software engineers and product managers who developed Pinduoduo” allegedly “were transitioned to working on the Temu app.”

Google and Apple did not immediately respond to Ars’ request for comment.

A Temu spokesperson provided a statement to Ars, disputing Grizzly Research’s investigation and saying that the company was “surprised and disappointed by the Arkansas Attorney General’s Office for filing the lawsuit without any independent fact-finding.”

“The allegations in the lawsuit are based on misinformation circulated online, primarily from a short-seller, and are totally unfounded,” Temu’s spokesperson said. “We categorically deny the allegations and will vigorously defend ourselves.”

While Temu plans to defend against the claims, the company also seems open to making changes based on criticism lobbed in Griffin’s complaint.

“We understand that as a new company with an innovative supply chain model, some may misunderstand us at first glance and not welcome us,” Temu’s spokesperson said. “We are committed to the long-term and believe that scrutiny will ultimately benefit our development. We are confident that our actions and contributions to the community will speak for themselves over time.”
