

India’s plan to let 1998 digital trade deal expire may worsen chip shortage

India’s plan to let a moratorium on customs duties for cross-border digital e-commerce transactions expire may end up undermining the country’s ambitious plan to become a global chip leader in the next five years, Reuters reported.

It could also worsen the global chip shortage by spiking semiconductor industry costs at a time when many governments worldwide are investing heavily in expanding domestic chip supplies in efforts to keep up with rapidly advancing technologies.

Early next week, world leaders will convene at a World Trade Organization (WTO) meeting, just before the deadline to extend the moratorium hits in March. In place since 1998, the moratorium has been renewed every two years—but India has grown concerned that it’s losing significant revenue by not imposing duties as demand rises for digital goods like movies, e-books, and games.

Hoping to change India’s mind, a global consortium of semiconductor industry associations known as the World Semiconductor Council (WSC) sent a letter to Indian Prime Minister Narendra Modi on Thursday.

Reuters reviewed the letter, reporting that the WSC warned Modi that ending the moratorium “would mean tariffs on digital e-commerce and an innumerable number of transfers of chip design data across countries, raising costs and worsening chip shortages.”

Pointing to Modi’s $10 billion semiconductor incentive package—which Modi has said is designed to advance India’s industry through “giant leaps” in its mission to become a technology superpower—the WSC cautioned Modi that pushing for customs duties may dash those global chip leader dreams.

Studies suggest that India should be offering tax incentives rather than threatening to impose duties on chip design data. One such study, released earlier this year, was commissioned from the Information Technology and Innovation Foundation (ITIF) by the Semiconductor Industry Association and the India Electronics and Semiconductor Association.

ITIF’s goal was to evaluate “India’s existing semiconductor ecosystem and policy frameworks” and offer “recommendations to facilitate longer-term strategic development of complementary semiconductor ecosystems in the US and India,” a press release said, partly in order to “deepen commercial ties” between the countries. The Prime Minister’s Office (PMO) has also reported a similar goal to deepen commercial ties with the European Union.

Among recommendations to “strengthen India’s semiconductor competitiveness,” ITIF’s report encouraged India to advance cooperation with the US and introduce policy reforms that “lower the cost of doing business for semiconductor companies in India”—by “offering tax breaks to chip companies” and “expediting clearance times for goods entering the country.”

Because the duties could spike chip industry costs at a time when global cross-border data transmissions are expected to reach $11 trillion by 2025, the WSC wrote, they may “impede India’s efforts to advance its semiconductor industry and attract semiconductor investment” and could hurt the “more than 20 percent of the world’s semiconductor design workforce” that is based in India.

The prime minister’s office did not immediately respond to Ars’ request for comment.



Reddit admits more moderator protests could hurt its business

SEC filing —

Losing third-party tools “could harm our moderators’ ability to review content…”


Reddit filed to go public on Thursday (PDF), revealing various details of the social media company’s inner workings. Among the revelations, Reddit acknowledged the threat of future user protests and the value of third-party Reddit apps.

On July 1, Reddit enacted API rule changes—including new, expensive pricing—that resulted in many third-party Reddit apps shutting down. Disturbed by the changes, their timeline, and concerns that Reddit wasn’t properly appreciating third-party app developers and moderators, thousands of Reddit users protested by making the subreddits they moderate private or read-only, or by engaging in other forms of protest, such as only discussing John Oliver or porn.

Protests went on for weeks and, at their onset, crashed Reddit for three hours. At the time, Reddit CEO Steve Huffman said the protests did not have “any significant revenue impact so far.”

In its filing with the Securities and Exchange Commission (SEC), though, Reddit acknowledged that another such protest could hurt its pockets:

While these activities have not historically had a material impact on our business or results of operations, similar actions by moderators and/or their communities in the future could adversely affect our business, results of operations, financial condition, and prospects.

The company also said that bad publicity and media coverage, such as the kind that stemmed from the API protests, could be a risk to Reddit’s success. The Form S-1 said bad PR around Reddit, including its practices, prices, and mods, “could adversely affect the size, demographics, engagement, and loyalty of our user base,” adding:

For instance, in May and June 2023, we experienced negative publicity as a result of our API policy changes.

Reddit’s filing also said that negative publicity and moderators disrupting the normal operation of subreddits could hurt user growth and engagement goals. The company highlighted financial incentives associated with having good relationships with volunteer moderators, noting that if enough mods decided to disrupt Reddit (like they did when they led protests last year), “results of operations, financial condition, and prospects could be adversely affected.” Reddit infamously forcibly removed moderators from their posts during the protests, saying they broke Reddit rules by refusing to reopen the subreddits they moderated.

“As communities grow, it can become more and more challenging for communities to find qualified people willing to act as moderators,” the filing says.

Losing third-party tools could hurt Reddit’s business

Much of the momentum for last year’s protests came from users, including long-time Redditors, mods, and people with accessibility needs, feeling that third-party apps were necessary to enjoyably and properly access and/or moderate Reddit. Reddit’s own technology has disappointed users in the past (leading some to cling to Old Reddit, which uses an older interface, for example). In its SEC filing, Reddit pointed to the value of third-party “tools” despite its API pricing killing off many of the most popular examples.

Reddit’s filing discusses losing moderators as a business risk and notes how important third-party tools are in maintaining mods:

While we provide tools to our communities to manage their subreddits, our moderators also rely on their own and third-party tools. Any disruption to, or lack of availability of, these third-party tools could harm our moderators’ ability to review content and enforce community rules. Further, if we are unable to provide effective support for third-party moderation tools, or develop our own such tools, our moderators could decide to leave our platform and may encourage their communities to follow them to a new platform, which would adversely affect our business, results of operations, financial condition, and prospects.

Since Reddit’s API policy changes, a small number of third-party Reddit apps remain available. But some of the remaining third-party Reddit app developers have previously told Ars Technica that they’re unsure of their apps’ viability under Reddit’s terms. Nondisclosure agreement requirements and the lack of a finalized developer platform also drive uncertainty around the longevity of the third-party Reddit app ecosystem, according to devs Ars spoke with this year.
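Third-party moderation tools of the sort the filing describes typically automate the first pass of content review that would otherwise fall entirely on volunteer mods. As a rough illustration—a hypothetical rule-based filter, not any real Reddit tool or API—such a tool might check items in a moderation queue against a subreddit’s rules:

```python
# Hypothetical sketch of a rule-based moderation filter, the kind of
# first-pass review that third-party tools automate for volunteer mods.
# The rules and thresholds below are invented for illustration.

BANNED_PHRASES = {"buy followers", "free crypto"}  # example subreddit rules
MAX_CAPS_RATIO = 0.7  # flag comments that are mostly shouting

def flag_comment(text: str) -> list[str]:
    """Return a list of rule names the comment violates (empty if clean)."""
    reasons = []
    lowered = text.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            reasons.append(f"banned phrase: {phrase!r}")
    letters = [c for c in text if c.isalpha()]
    if letters:
        caps_ratio = sum(c.isupper() for c in letters) / len(letters)
        if caps_ratio > MAX_CAPS_RATIO:
            reasons.append("excessive caps")
    return reasons

# A mod would review only the items the filter surfaces.
queue = [
    "Check my site for FREE CRYPTO giveaways",
    "Interesting analysis, thanks for sharing.",
    "THIS IS THE BEST SUBREDDIT EVER!!!",
]
flagged = {text: flag_comment(text) for text in queue if flag_comment(text)}
```

Real tools layer far more on top of this—user history, report queues, rate limits—but the basic value to moderators is the same triage step.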



ISPs keep giving false broadband coverage data to the FCC, groups say


Internet service providers are still providing false coverage information to the Federal Communications Commission, and the FCC process for challenging errors isn’t good enough to handle all the false claims, the agency was told by several groups this week.

The latest complaints focus on fixed wireless providers that offer home Internet service via signals sent to antennas. ISPs that compete against these wireless providers say that exaggerated coverage data prevents them from obtaining government funding designed to subsidize the building of networks in areas with limited coverage.

The wireless company LTD Broadband (which has been renamed GigFire) came under particular scrutiny in an FCC filing submitted by the Accurate Broadband Data Alliance, a group of about 50 ISPs in the Midwest.

“A number of carriers, including LTD Broadband/GigFire LLC and others, continue to overreport Internet service availability, particularly in relation to fixed wireless network capabilities and reach,” the group said. “These errors and irregularities in the Map will hinder and, in many cases, prevent deployment of essential broadband services by redirecting funds away from areas truly lacking sufficient broadband.”

ISPs are required to submit coverage data for the FCC’s broadband map, and there is a challenge process in which false claims can be contested. The FCC recently sought comment on how well the challenge process is working.

CEO blasts “100-year-old telcos”

The Accurate Broadband Data Alliance accused GigFire of behaving badly in the challenge process, saying “LTD Broadband/GigFire LLC often continues to assert unrealistic broadband claims without evidence and even accuses the challenger of falsifying information during the challenge process.”

GigFire CEO Corey Hauer disputed the Accurate Broadband Data Alliance’s accusations. Hauer told Ars today that “GigFire evaluated over 5 million locations and established that 339,598 are eligible to get service and that is accurately reflected in our BDC [Broadband Data Collection] filings.”

Hauer said GigFire offers service in Illinois, Iowa, Minnesota, Missouri, Nebraska, North Dakota, South Dakota, Tennessee, and Wisconsin. The company’s service area is mostly wireless but includes about 20,000 homes passed by fiber lines, he said.

“GigFire wants to service as many customers as we can, but we have no interest in falsely telling customers that they qualify for service,” Hauer told us.

Hauer also said that “GigFire uses widely accepted wireless propagation models to compute our coverage. It’s just math, there is no way to game the system.” He said that telcos “feel they should get an additional wheelbarrow full of ratepayer money, and because of our coverage, they will not.”

“Many of these 100-year-old telcos were so used to being monopolies, that it appears they struggle with consumers that live in their legacy telco boundaries having competitive choices,” Hauer said.
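Propagation models of the kind Hauer references estimate received signal strength as a function of distance from the tower; coverage is then wherever the predicted signal clears the receiver’s sensitivity threshold. A minimal sketch using the standard log-distance path-loss model—every parameter value here is illustrative, not GigFire’s:

```python
import math

# Log-distance path-loss model: PL(d) = PL(d0) + 10*n*log10(d/d0).
# All values below are illustrative, not any real provider's figures.
TX_POWER_DBM = 30.0         # transmitter output power
PL_D0_DB = 100.0            # path loss at reference distance d0 = 1 km
D0_KM = 1.0
PATH_LOSS_EXPONENT = 3.0    # n: ~2 in free space, higher with terrain/foliage
RX_SENSITIVITY_DBM = -85.0  # weakest usable signal at the receiver

def received_power_dbm(distance_km: float) -> float:
    """Predicted received power at a given distance from the tower."""
    path_loss = PL_D0_DB + 10 * PATH_LOSS_EXPONENT * math.log10(distance_km / D0_KM)
    return TX_POWER_DBM - path_loss

def max_coverage_radius_km() -> float:
    """Distance at which received power falls to the sensitivity threshold."""
    margin = TX_POWER_DBM - PL_D0_DB - RX_SENSITIVITY_DBM
    return D0_KM * 10 ** (margin / (10 * PATH_LOSS_EXPONENT))

def is_covered(distance_km: float) -> bool:
    return received_power_dbm(distance_km) >= RX_SENSITIVITY_DBM
```

The dispute is precisely over what this kind of model leaves out: the telco groups argue that obstructions, foliage, and congestion at a specific address can push real-world loss well past what a distance-based formula predicts.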

Wireless claims hard to verify, groups say

Wireline providers have also exaggerated coverage, as we’ve reported. Comcast admitted to mistakes last year after previously insisting that false data it gave the FCC was correct. In another case, a small Ohio ISP called Jefferson County Cable admitted lying to the FCC about the size of its network in order to block funding to rivals.

But it can be especially hard to verify the claims made by fixed wireless providers, several groups that represented wireline providers told the FCC. The FCC says that fixed wireless providers can submit either lists of locations, or polygon coverage maps based on propagation modeling. GigFire submitted a list of locations.

Both the list and polygon models drew criticism from telco groups. The Minnesota Telecom Alliance told the FCC this week that “the highly generalized nature of the polygon coverage maps has tempted some competitive fixed wireless providers to exaggerate the extent of their service areas and the speeds of their services.”

The Minnesota group said that “polygon coverage maps are able to show only an alleged unsubsidized fixed wireless competitor’s theoretical potential signal coverage over a general area,” and don’t account for problems like “line-of-sight obstructions, terrain, foliage, weather conditions, and busy hour congestion” that can restrict coverage at specific locations.

“MTA members are aware that many fixed wireless broadband service providers are unable to determine whether they actually can serve a specific location and what level of service they can provide to that location unless and until they send a technician to the site to attempt to install service,” the group said.
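When a provider files polygon coverage maps instead of a location list, a challenger’s first step is mechanical: test whether a given address falls inside the claimed polygon. A minimal ray-casting point-in-polygon check—the coordinates are made up for illustration:

```python
def point_in_polygon(x: float, y: float, polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting test: does point (x, y) fall inside the polygon?

    polygon is a list of (x, y) vertices in order; the shape is closed
    implicitly between the last and first vertex.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending right from the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical claimed coverage polygon (longitude, latitude pairs, made up).
claimed_area = [(-93.5, 44.0), (-93.0, 44.0), (-93.0, 44.5), (-93.5, 44.5)]
```

The telco groups’ point is that passing this geometric test says nothing about serviceability: a home can sit inside the claimed polygon and still be blocked by terrain or foliage, which is why they say only an on-site installation attempt settles the question.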

The Minnesota telco group complained that inaccurate filings reduce the number of locations at which a telco can receive Universal Service Fund (USF) money. It is often virtually impossible to successfully “challenge the accuracy of fixed wireless service availability claims that can adversely impact USF support,” the group said.



Snapchat isn’t liable for connecting 12-year-old to convicted sex offenders

A judge has dismissed a complaint from a parent and guardian of a girl, now 15, who was sexually assaulted when she was 12 years old after Snapchat recommended that she connect with convicted sex offenders.

According to the court filing, the abuse that the girl, C.O., experienced on Snapchat happened soon after she signed up for the app in 2019. Through its “Quick Add” feature, Snapchat “directed her” to connect with “a registered sex offender using the profile name JASONMORGAN5660.” After a little more than a week on the app, C.O. was bombarded with inappropriate images and subjected to sextortion and threats before the adult user pressured her to meet up, then raped her. Cops arrested the adult user the next day, resulting in his incarceration, but his Snapchat account remained active for three years despite reports of harassment, the complaint alleged.

Two years later, at 14, C.O. connected with another convicted sex offender on Snapchat, a former police officer who offered to give C.O. a ride to school and then sexually assaulted her. The second offender is also currently incarcerated, the judge’s opinion noted.

The lawsuit painted a picture of Snapchat’s ongoing neglect of minors it knows are being targeted by sexual predators. Prior to C.O.’s attacks, both adult users sent and requested sexually explicit photos, seemingly without the app detecting any child sexual abuse materials exchanged on the platform. C.O. had previously reported other adult accounts sending her photos of male genitals, but Snapchat allegedly “did nothing to block these individuals from sending her inappropriate photographs.”

Among other complaints, C.O.’s lawsuit alleged that the algorithm behind Snapchat’s “Quick Add” feature was the problem. The feature allegedly detects when adult accounts are seeking to connect with young girls and, by design, sends more young girls their way—continually directing sexual predators toward vulnerable targets. Snapchat is allegedly aware of these abuses and, therefore, should be held liable for the harm caused to C.O., the lawsuit argued.

Although C.O.’s case raised difficult questions, Judge Barbara Bellis ultimately agreed with Snapchat that Section 230 of the Communications Decency Act barred all claims and shielded Snap because “the allegations of this case fall squarely within the ambit of the immunity afforded to” platforms publishing third-party content.

According to Bellis, C.O.’s family had “clearly alleged” that Snap had failed to design its recommendation systems to block young girls from receiving messages from sexual predators. Specifically, Section 230 immunity shields Snap from liability in this case because Bellis considered the messages exchanged to be third-party content. Snapchat designing its recommendation systems to deliver content is a protected activity, Bellis ruled.

Internet law professor Eric Goldman wrote in his blog that Bellis’ “well-drafted and no-nonsense opinion” is “grounded” in precedent. Pointing to an “extremely similar” 2008 case against MySpace—”which reached the same outcome that Section 230 applies to offline sexual abuse following online messaging”—Goldman suggested that “the law has been quite consistent for a long time.”

However, as this case was being decided, a seemingly conflicting ruling in a Los Angeles court found that “Section 230 didn’t protect Snapchat from liability for allegedly connecting teens with drug dealers,” MediaPost noted. Bellis acknowledged this outlier opinion but did not appear to consider it persuasive.

Yet, at the end of her opinion, Bellis seemed to take aim at Section 230 as perhaps being too broad.

She quoted a ruling from the First Circuit Court of Appeals, which noted that some Section 230 cases, presumably like C.O.’s, are “hard” for courts not because “the legal issues defy resolution,” but because Section 230 requires that the court “deny relief to plaintiffs whose circumstances evoke outrage.” She then went on to quote an appellate court ruling on a similarly “difficult” Section 230 case that warned “without further legislative action,” there is “little” that courts can do “but join with other courts and commentators in expressing concern” with Section 230’s “broad scope.”

Ars could not immediately reach Snapchat or lawyers representing C.O.’s family for comment.



Does Fubo’s antitrust lawsuit against ESPN, Fox, and WBD stand a chance?

Collaborating conglomerates —

Fubo: Media giants’ anticompetitive tactics already killed PS Vue, other streamers.


Fubo is suing Fox Corporation, The Walt Disney Company, and Warner Bros. Discovery (WBD) over their plans to launch a unified sports streaming app. Fubo, a live sports streaming service that has business relationships with the three companies, claims the firms have engaged in anticompetitive practices for years, leading to higher prices for consumers.

In an attempt to understand how likely the allegations are to derail the app’s launch, Ars Technica read the 73-page sealed complaint and sought opinions from antitrust experts. While some of Fubo’s allegations could be hard to prove, Fubo isn’t the only one concerned about the joint app’s potential to make it hard for streaming services to compete fairly.

Fubo wants to kill ESPN, Fox, and WBD’s joint sports app

Earlier this month, Disney (which owns ESPN), WBD (whose sports channels include TBS and TNT), and Fox (which owns Fox broadcast stations and Fox Sports channels like FS1) announced plans to launch an equally owned live sports streaming app this fall. Pricing hasn’t been confirmed but is expected to be in the $30-to-$50-per-month range. Fubo, for comparison, starts at $80 per month for English-language channels.

Via a lawsuit filed on Tuesday in US District Court for the Southern District of New York, Fubo is seeking an injunction against the app and joint venture (JV), a jury trial, and damages of an unspecified amount. There have been reports that Fubo was suing the three companies for $1 billion, but a Fubo spokesperson confirmed to Ars that this figure is incorrect.

“Insurmountable barriers”

Fubo, which was founded in 2015, is arguing that the three companies’ proposed app will result in higher prices for live sports streaming customers.

The New York City-headquartered company claims the collaboration would preclude other distributors of live sports content, like Fubo, from competing fairly. The lawsuit also claims that distributors like Fubo would see higher prices and worse agreements associated with licensing sports content due to the JV, which could even stop licensing critical sports content to companies like Fubo. Fubo’s lawsuit says that “once they have combined forces, Defendants’ incentive to exclude Fubo and other rivals will only increase.”

Disney, Fox, and WBD haven’t disclosed specifics about how their JV will impact how they license the rights to sports events to companies outside of their JV; however, they have claimed that they will license their respective content to the JV on a non-exclusive basis.

That statement doesn’t specify, though, whether the companies will try to bundle content together forcibly.

“If the three firms get together and say, ‘We’re no longer going to provide to you these streams for resale separately. You must buy a bundle as a condition of getting any of them,’ that would … be an anti-competitive bundle that can be challenged under antitrust law,” Hal Singer, an economics professor at The University of Utah and managing director at Econ One, told Ars.

Lee Hepner, counsel at the American Economic Liberties Project, shared similar concerns about the JV with Ars:

Joint ventures raise the same concerns as mergers when the effect is to shut out competitors and gain power to raise prices and reduce quality. Sports streaming is an extremely lucrative market, and a joint venture between these three powerhouses will foreclose the ability of rivals like Fubo to compete on fair terms.

Fubo’s lawsuit cites research from Citi, finding that, combined, ESPN (26.8 percent), Fox (17.3 percent), and WBD (9.9 percent) own 54 percent of the US sports rights market.
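The arithmetic behind that 54 percent figure can be paired with a standard concentration measure antitrust analysts use. The sketch below computes a Herfindahl-Hirschman-style contribution from only the three shares Fubo cites, so it is illustrative rather than a full market HHI:

```python
# Citi figures cited in Fubo's lawsuit (percent of US sports rights market).
shares = {"ESPN": 26.8, "Fox": 17.3, "WBD": 9.9}

# Combined share controlled by the joint venture's three owners.
combined = sum(shares.values())

# Antitrust analysis sums squared market shares (the HHI); a combination's
# effect shows up as the jump from three separate squares to one combined
# square. Only the three cited shares are included here, so this understates
# the full market HHI but isolates the JV's contribution to it.
hhi_separate = sum(s ** 2 for s in shares.values())
hhi_combined = combined ** 2
hhi_increase = hhi_combined - hhi_separate
```

An increase on this order, in an already concentrated market, is the kind of change that ordinarily draws regulatory scrutiny—consistent with the reported DOJ interest noted below.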

In a statement, Fubo co-founder and CEO David Gandler said the three companies “are erecting insurmountable barriers that will effectively block any new competitors” and will leave sports streamers without options.

The US Department of Justice is reportedly eyeing the JV for an antitrust review and plans to look at the finalized terms, according to a February 15 Bloomberg report citing two anonymous “people familiar with the process.”



Twitter security staff kept firm in compliance by disobeying Musk, FTC says

Close call —

Lina Khan: Musk demanded “actions that would have violated the FTC’s Order.”

Elon Musk at the New York Times DealBook Summit on November 29, 2023, in New York City. Getty Images | Michael Santiago

Twitter employees prevented Elon Musk from violating the company’s privacy settlement with the US government, according to Federal Trade Commission Chair Lina Khan.

After Musk bought Twitter in late 2022, he gave Bari Weiss and other journalists access to company documents in the so-called “Twitter Files” incident. The access given to outside individuals raised concerns that Twitter (which is currently named X) violated a 2022 settlement with the FTC, which has requirements designed to prevent repeats of previous security failures.

Some of Twitter’s top privacy and security executives also resigned shortly after Musk’s purchase, citing concerns that Musk’s rapid changes could cause violations of the settlement.

FTC staff deposed former Twitter employees and “learned that the access provided to the third-party individuals turned out to be more limited than the individuals’ tweets and other public reporting had indicated,” Khan wrote in a letter sent today to US Rep. Jim Jordan (R-Ohio). Khan’s letter said the access was limited because employees refused to comply with Musk’s demands:

The deposition testimony revealed that in early December 2022, Elon Musk had reportedly directed staff to grant an outside third-party individual “full access to everything at Twitter… No limits at all.” Consistent with Musk’s direction, the individual was initially assigned a company laptop and internal account, with the intent that the third-party individual be given “elevated privileges” beyond what an average company employee might have.

However, based on a concern that such an arrangement would risk exposing nonpublic user information in potential violation of the FTC’s Order, longtime information security employees at Twitter intervened and implemented safeguards to mitigate the risks. Ultimately the third-party individuals did not receive direct access to Twitter’s systems, but instead worked with other company employees who accessed the systems on the individuals’ behalf.

Khan: FTC “was right to be concerned”

Jordan is chair of the House Judiciary Committee and has criticized the investigation, claiming that “the FTC harassed Twitter in the wake of Mr. Musk’s acquisition.” Khan’s letter to Jordan today argues that the FTC investigation was justified.

“The FTC’s investigation confirmed that staff was right to be concerned, given that Twitter’s new CEO had directed employees to take actions that would have violated the FTC’s Order,” Khan wrote. “Once staff learned that the FTC’s Order had worked to ensure that Twitter employees took appropriate measures to protect consumers’ private information, compliance staff made no further inquiries to Twitter or anyone else concerning this issue.”

Khan also wrote that deep staff cuts following the Musk acquisition, and resignations of Twitter’s top privacy and compliance officials, meant that “there was no one left at the company responsible for interpreting and modifying data policies and practices to ensure Twitter was complying with the FTC’s Order to safeguard Americans’ personal data.” The letter continued:

During staff’s evaluation of the workforce reductions, one of the company’s recently departed lead privacy and security experts testified that Twitter Blue was being implemented too quickly so that the proper “security and privacy review was not conducted in accordance with the company’s process for software development.” Another expert testified that he had concerns about Mr. Musk’s “commitment to overall security and privacy of the organization.” Twitter, meanwhile, filed a motion seeking to eliminate the FTC Order that protected the privacy and security of Americans’ data. Fortunately for Twitter’s millions of users, that effort failed in court.

FTC still trying to depose Musk

While no violation was found in this case, the FTC isn’t done investigating. When contacted by Ars, an FTC spokesperson said the agency cannot rule out bringing lawsuits against Musk’s social network for violations of the settlement or US law.

“When we heard credible public reports of potential violations of protections for Twitter users’ data, we moved swiftly to investigate,” the FTC said in a statement today. “The order remains in place and the FTC continues to deploy the order’s tools to protect Twitter users’ data and ensure the company remains in compliance.”

The FTC also said it is continuing attempts to depose Musk. In July 2023, Musk’s X Corp. asked a federal court for an order that would terminate the settlement and prevent the FTC from deposing Musk. The court denied both requests in November. In a filing, US government lawyers said the FTC investigation had “revealed a chaotic environment at the company that raised serious questions about whether and how Musk and other leaders were ensuring X Corp.’s compliance with the 2022 Administrative Order.”

We contacted X today, but an auto-reply informed us that the company was busy and asked that we check back later.



Court blocks $1 billion copyright ruling that punished ISP for its users’ piracy


A federal appeals court today overturned a $1 billion piracy verdict that a jury handed down against cable Internet service provider Cox Communications in 2019. Judges rejected Sony’s claim that Cox profited directly from copyright infringement committed by users of Cox’s cable broadband network.

Appeals court judges didn’t let Cox off the hook entirely, but they vacated the damages award and ordered a new damages trial, which will presumably result in a significantly smaller amount to be paid to Sony and other copyright holders. Universal and Warner are also plaintiffs in the case.

“We affirm the jury’s finding of willful contributory infringement,” said a unanimous decision by a three-judge panel at the US Court of Appeals for the 4th Circuit. “But we reverse the vicarious liability verdict and remand for a new trial on damages because Cox did not profit from its subscribers’ acts of infringement, a legal prerequisite for vicarious liability.”

If the correct legal standard had been used in the district court, “no reasonable jury could find that Cox received a direct financial benefit from its subscribers’ infringement of Plaintiffs’ copyrights,” judges wrote.

The case began when Sony and other music copyright holders sued Cox, claiming that it didn’t adequately fight piracy on its network and failed to terminate repeat infringers. A US District Court jury in the Eastern District of Virginia found the ISP liable for infringement of 10,017 copyrighted works.

Copyright owners want ISPs to disconnect users

Cox’s appeal was supported by advocacy groups concerned that the big-money judgment could force ISPs to disconnect more Internet users based merely on accusations of copyright infringement. Groups such as the Electronic Frontier Foundation also called the ruling legally flawed.

“When these music companies sued Cox Communications, an ISP, the court got the law wrong,” the EFF wrote in 2021. “It effectively decided that the only way for an ISP to avoid being liable for infringement by its users is to terminate a household or business’s account after a small number of accusations—perhaps only two. The court also allowed a damages formula that can lead to nearly unlimited damages, with no relationship to any actual harm suffered. If not overturned, this decision will lead to an untold number of people losing vital Internet access as ISPs start to cut off more and more customers to avoid massive damages.”

In today’s 4th Circuit ruling, appeals court judges wrote that “Sony failed, as a matter of law, to prove that Cox profits directly from its subscribers’ copyright infringement.”

A defendant may be vicariously liable for a third party’s copyright infringement if it profits directly from it and is in a position to supervise the infringer, the ruling said. Cox argued that it doesn’t profit directly from infringement because it receives the same monthly fee from subscribers whether they illegally download copyrighted files or not, the ruling noted.

The question in this type of case is whether there is a causal relationship between the infringement and the financial benefit. “If copyright infringement draws customers to the defendant’s service or incentivizes them to pay more for their service, that financial benefit may be profit from infringement. But in every case, the financial benefit to the defendant must flow directly from the third party’s acts of infringement to establish vicarious liability,” the court said.

Court blocks $1 billion copyright ruling that punished ISP for its users’ piracy


Musk claims Neuralink patient doing OK with implant, can move mouse with brain

Neuralink brain implant —

Medical ethicists alarmed by Musk being “sole source of information” on patient.

A person’s hand holding a Neuralink brain implant, a device about the size of a coin. (Image: Neuralink)

Neuralink co-founder Elon Musk said the first human to be implanted with the company’s brain chip is now able to move a mouse cursor just by thinking.

“Progress is good, and the patient seems to have made a full recovery, with no ill effects that we are aware of. Patient is able to move a mouse around the screen by just thinking,” Musk said Monday during an X Spaces event, according to Reuters.

Musk’s update came a few weeks after he announced that Neuralink implanted a chip into the human. The previous update was also made on X, the Musk-owned social network formerly named Twitter.

Musk reportedly said during yesterday’s chat, “We’re trying to get as many button presses as possible from thinking. So that’s what we’re currently working on is: can you get left mouse, right mouse, mouse down, mouse up… We want to have more than just two buttons.”

Neuralink itself doesn’t seem to have issued any statement on the patient’s progress. We contacted the company today and will update this article if we get a response.

“Basic ethical standards” not met

Neuralink’s method of releasing information was criticized last week by Arthur Caplan, a bioethics professor and head of the Division of Medical Ethics at NYU Grossman School of Medicine, and Jonathan Moreno, a University of Pennsylvania medical ethics professor.

“Science by press release, while increasingly common, is not science,” Caplan and Moreno wrote in an essay published by the nonprofit Hastings Center. “When the person paying for a human experiment with a huge financial stake in the outcome is the sole source of information, basic ethical standards have not been met.”

Caplan and Moreno acknowledged that Neuralink and Musk seem to be “in the clear” legally:

Assuming that some brain-computer interface device was indeed implanted in some patient with severe paralysis by some surgeons somewhere, it would be reasonable to expect some formal reporting about the details of an unprecedented experiment involving a vulnerable person. But unlike drug studies in which there are phases that must be registered in a public database, the Food and Drug Administration does not require reporting of early feasibility studies of devices. From a legal standpoint Musk’s company is in the clear, a fact that surely did not escape the tactical notice of his company’s lawyers.

But they argue that opening “the brain of a living human being to insert a device” should have been accompanied with more public detail. There is an ethical obligation “to avoid the risk of giving false hope to countless thousands of people with serious neurological disabilities,” they wrote.

A brain implant could have complications that leave a patient in worse condition, the ethics professors noted. “We are not even told what plans there are to remove the device if things go wrong or the subject simply wants to stop,” Caplan and Moreno wrote. “Nor do we know the findings of animal research that justified beginning a first-in-human experiment at this time, especially since it is not lifesaving research.”

Clinical trial still to come

Neuralink has been criticized for alleged mistreatment of animals in research and was reportedly fined $2,480 for violating US Department of Transportation rules on the movement of hazardous materials after inspections of company facilities last year.

People “should continue to be skeptical of the safety and functionality of any device produced by Neuralink,” the nonprofit Physicians Committee for Responsible Medicine said after last month’s announcement of the first implant.

“The Physicians Committee continues to urge Elon Musk and Neuralink to shift to developing a noninvasive brain-computer interface,” the group said. “Researchers elsewhere have already made progress to improve patient health using such noninvasive methods, which do not come with the risk of surgical complications, infections, or additional operations to repair malfunctioning implants.”

In May 2023, Neuralink said it obtained Food and Drug Administration approval for clinical trials. The company’s previous attempt to gain approval was reportedly denied by the FDA over safety concerns and other “deficiencies.”

In September, the company said it was recruiting volunteers, specifically people with quadriplegia due to cervical spinal cord injury or amyotrophic lateral sclerosis. Neuralink said the first human clinical trial for PRIME (Precise Robotically Implanted Brain-Computer Interface) will evaluate the safety of its implant and surgical robot, “and assess the initial functionality of our BCI [brain-computer interface] for enabling people with paralysis to control external devices with their thoughts.”



EU accuses TikTok of failing to stop kids pretending to be adults

Getting TikTok’s priorities straight —

TikTok becomes the second platform suspected of Digital Services Act breaches.


The European Commission (EC) is concerned that TikTok isn’t doing enough to protect kids, alleging that the short-video app may be sending kids down rabbit holes of harmful content while making it easy for kids to pretend to be adults and avoid the protective content filters that do exist.

The allegations came Monday when the EC announced a formal investigation into how TikTok may be breaching the Digital Services Act (DSA) “in areas linked to the protection of minors, advertising transparency, data access for researchers, as well as the risk management of addictive design and harmful content.”

“We must spare no effort to protect our children,” Thierry Breton, European Commissioner for Internal Market, said in the press release, reiterating that the “protection of minors is a top enforcement priority for the DSA.”

This makes TikTok the second platform investigated for possible DSA breaches after X (aka Twitter) came under fire last December. Both came under scrutiny after submitting transparency reports in September that the EC said fell short of the DSA’s strict standards, citing familiar shortfalls such as insufficient advertising transparency and inadequate data access for researchers.

But while X is additionally being investigated over alleged dark patterns and disinformation—following accusations last October that X wasn’t stopping the spread of Israel/Hamas disinformation—it’s TikTok’s young user base that appears to be the focus of the EC’s probe into its platform.

“As a platform that reaches millions of children and teenagers, TikTok must fully comply with the DSA and has a particular role to play in the protection of minors online,” Breton said. “We are launching this formal infringement proceeding today to ensure that proportionate action is taken to protect the physical and emotional well-being of young Europeans.”

Over the coming months, the EC will likely request more information from TikTok, picking apart its DSA transparency report. The probe could require interviews with TikTok staff or inspections of TikTok’s offices.

Upon concluding its investigation, the EC could require TikTok to take interim measures to fix any issues that are flagged. The Commission could also make a decision regarding non-compliance, potentially subjecting TikTok to fines of up to 6 percent of its global turnover.
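The DSA’s penalty ceiling is straightforward arithmetic. The sketch below illustrates it with a purely hypothetical turnover figure (not TikTok’s actual revenue, which the article does not state):

```javascript
// The DSA caps fines at 6 percent of a company's global annual turnover.
const DSA_FINE_CAP_RATE = 0.06;

// Returns the maximum possible DSA fine for a given global turnover.
function maxDsaFine(globalTurnover) {
  return globalTurnover * DSA_FINE_CAP_RATE;
}

// Illustrative only: a hypothetical $20 billion turnover would cap the
// fine at roughly $1.2 billion.
const cap = maxDsaFine(20e9);
```

The cap is an upper bound, not a prediction; the Commission sets actual penalties based on the gravity and duration of the infringement.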

An EC press officer, Thomas Regnier, told Ars that the Commission suspected that TikTok “has not diligently conducted” risk assessments to properly maintain mitigation efforts protecting “the physical and mental well-being of their users, and the rights of the child.”

In particular, its algorithm may risk “stimulating addictive behavior,” and its recommender systems “might drag its users, in particular minors and vulnerable users, into a so-called ‘rabbit hole’ of repetitive harmful content,” Regnier told Ars. Further, TikTok’s age verification system may be subpar, with the EU alleging that TikTok perhaps “failed to diligently assess the risk of 13-17-year-olds pretending to be adults when accessing TikTok,” Regnier said.

To better protect TikTok’s young users, the EU’s investigation could force TikTok to update its age-verification system and overhaul its default privacy, safety, and security settings for minors.

“In particular, the Commission suspects that the default settings of TikTok’s recommender systems do not ensure a high level of privacy, security, and safety of minors,” Regnier said. “The Commission also suspects that the default privacy settings that TikTok has for 16-17-year-olds are not the highest by default, which would not be compliant with the DSA, and that push notifications are, by default, not switched off for minors, which could negatively impact children’s safety.”

TikTok could avoid steep fines by committing to remedies recommended by the EC at the conclusion of its investigation.

Regnier told Ars that the EC does not comment on ongoing investigations, but its probe into X has spanned three months so far. Because the DSA does not provide any deadlines that may speed up these kinds of enforcement proceedings, ultimately, the duration of both investigations will depend on how much “the company concerned cooperates,” the EU’s press release said.

A TikTok spokesperson told Ars that TikTok “would continue to work with experts and the industry to keep young people on its platform safe,” confirming that the company “looked forward to explaining this work in detail to the European Commission.”

“TikTok has pioneered features and settings to protect teens and keep under-13s off the platform, issues the whole industry is grappling with,” TikTok’s spokesperson said.

All online platforms are now required to comply with the DSA, but enforcement on TikTok began near the end of July 2023. A TikTok press release last August promised that the platform would be “embracing” the DSA. But in its transparency report, submitted the next month, TikTok acknowledged that the report only covered “one month of metrics” and may not satisfy DSA standards.

“We still have more work to do,” TikTok’s report said, promising that “we are working hard to address these points ahead of our next DSA transparency report.”



Report: Apple is about to be fined €500 million by the EU over music streaming

Competition concerns —

EC accuses Apple of abusing its market position after complaint by Spotify.


Brussels is to impose its first-ever fine on tech giant Apple for allegedly breaking EU law over access to its music streaming services, according to five people with direct knowledge of the long-running investigation.

The fine, which is in the region of €500 million and is expected to be announced early next month, is the culmination of a European Commission antitrust probe into whether Apple has used its own platform to favor its services over those of competitors.

The probe is investigating whether Apple blocked apps from informing iPhone users of cheaper alternatives to access music subscriptions outside the App Store. It was launched after music-streaming app Spotify made a formal complaint to regulators in 2019.

The Commission will say Apple’s actions are illegal and go against the bloc’s rules that enforce competition in the single market, the people familiar with the case told the Financial Times. It will ban Apple’s practice of blocking music services from letting users outside its App Store switch to cheaper alternatives.

Brussels will accuse Apple of abusing its powerful position and imposing anti-competitive trading practices on rivals, the people said, adding that the EU would say the tech giant’s terms were “unfair trading conditions.”

It is one of the most significant financial penalties levied by the EU on Big Tech companies. A series of fines against Google, levied over several years and amounting to about €8 billion, is being contested in court.

Apple has never previously been fined for antitrust infringements by Brussels, but the company was hit in 2020 with a €1.1 billion fine in France for alleged anti-competitive behavior. The penalty was revised down to €372 million after an appeal.

The EU’s action against Apple will reignite the war between Brussels and Big Tech at a time when companies are being forced to show how they are complying with landmark new rules aimed at opening competition and allowing small tech rivals to thrive.

Companies that are defined as gatekeepers, including Apple, Amazon, and Google, need to fully comply with these rules under the Digital Markets Act by early next month.

The act requires these tech giants to comply with more stringent rules and will force them to allow rivals to share information about their services.

There are concerns that the rules are not enabling competition as fast as some had hoped, although Brussels has insisted that changes require time.

Brussels formally charged Apple in the antitrust probe in 2021. The Commission narrowed the scope of the investigation last year and abandoned a charge of pushing developers to use its own in-app payment system.

Apple last month announced changes to its iOS mobile software, App Store, and Safari browser in efforts to appease Brussels after long resisting such steps. But Spotify said at the time that Apple’s compliance was a “complete and total farce.”

Apple responded by saying that “the changes we’re sharing for apps in the European Union give developers choice—with new options to distribute iOS apps and process payments.”

In a separate antitrust case, Brussels is consulting with Apple’s rivals over the tech giant’s concessions to appease worries that it is blocking financial groups from its Apple Pay mobile system.

The timing of the Commission’s announcement has not yet been fixed, but it will not change the direction of the antitrust investigation, the people with knowledge of the situation said.

Apple, which can appeal to the EU courts, declined to comment on the forthcoming ruling but pointed to a statement a year ago when it said it was “pleased” the Commission had narrowed the charges and said it would address concerns while promoting competition.

It added: “The App Store has helped Spotify become the top music streaming service across Europe and we hope the European Commission will end its pursuit of a complaint that has no merit.”

The Commission—the executive body of the EU—declined to comment.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Elon Musk’s X allows China-based propaganda banned on other platforms

Rinse-wash-repeat. —

X accused of overlooking propaganda flagged by Meta and criminal prosecutors.


Lax content moderation on X (aka Twitter) has disrupted coordinated efforts between social media companies and law enforcement to tamp down on “propaganda accounts controlled by foreign entities aiming to influence US politics,” The Washington Post reported.

Now propaganda is “flourishing” on X, The Post said, while other social media companies are stuck in endless cycles, watching some of the propaganda that they block proliferate on X, then inevitably spread back to their platforms.

Meta, Google, and then-Twitter began coordinating takedown efforts with law enforcement and disinformation researchers after Russian-backed influence campaigns manipulated their platforms in hopes of swaying the 2016 US presidential election.

The next year, all three companies pledged to Congress that they would work tirelessly to stop Russian-backed propaganda from spreading on their platforms. The companies created explicit election misinformation policies and began meeting biweekly to compare notes on propaganda networks each platform uncovered, according to The Post’s interviews with anonymous sources who participated in these meetings.

However, after Elon Musk purchased Twitter and rebranded it as X, the company withdrew from the alliance in May 2023.

Sources told The Post that the last X meeting attendee was Irish intelligence expert Aaron Rodericks—who was allegedly disciplined for liking an X post calling Musk “a dipshit.” Rodericks was subsequently laid off when Musk dismissed the entire election integrity team last September, and after that, X apparently ditched the biweekly meeting entirely and “just kind of disappeared,” a source told The Post.

In 2023, for example, Meta flagged 150 “artificial influence accounts” identified on its platform, of which “136 were still present on X as of Thursday evening,” according to The Post’s analysis. X’s apparent inaction extends to all but eight of the 123 “deceptive China-based campaigns” connected to accounts that Meta flagged last May, August, and December, The Post reported.

The Post’s report also provided an exclusive analysis from the Stanford Internet Observatory (SIO), which found that 86 propaganda accounts that Meta flagged last November “are still active on X.”

The majority of these accounts—81—were China-based accounts posing as Americans, SIO reported. These accounts frequently ripped photos from Americans’ LinkedIn profiles, then changed the real Americans’ names while posting about both China and US politics, as well as people often trending on X, such as Musk and Joe Biden.

Meta has warned that China-based influence campaigns are “multiplying,” The Post noted, while X’s standards remain seemingly too relaxed. Even accounts linked to criminal investigations remain active on X. One “account that is accused of being run by the Chinese Ministry of Public Security,” The Post reported, remains on X despite its posts being cited by US prosecutors in a criminal complaint.

Prosecutors connected that account to “dozens” of X accounts attempting to “shape public perceptions” about the Chinese Communist Party, the Chinese government, and other world leaders. The accounts also comment on hot-button topics like the fentanyl problem or police brutality, seemingly to convey “a sense of dismay over the state of America without any clear partisan bent,” Elise Thomas, an analyst for a London nonprofit called the Institute for Strategic Dialogue, told The Post.

Some X accounts flagged by The Post had more than 1 million followers. Five have paid X for verification, suggesting that their disinformation campaigns—targeting hashtags to confound discourse on US politics—are seemingly being boosted by X.

SIO technical research manager Renée DiResta criticized X’s decision to stop coordinating with other platforms.

“The presence of these accounts reinforces the fact that state actors continue to try to influence US politics by masquerading as media and fellow Americans,” DiResta told The Post. “Ahead of the 2022 midterms, researchers and platform integrity teams were collaborating to disrupt foreign influence efforts. That collaboration seems to have ground to a halt, Twitter does not seem to be addressing even networks identified by its peers, and that’s not great.”

Musk shut down X’s election integrity team because he claimed that the team was actually “undermining” election integrity. But analysts are bracing for floods of misinformation to sway 2024 elections, as some major platforms have removed election misinformation policies just as rapid advances in AI technologies have made misinformation spread via text, images, audio, and video harder for the average person to detect.

In one prominent example, a robocall relied on AI voice technology to pose as Biden and tell Democrats not to vote. That incident seemingly pushed the Federal Trade Commission on Thursday to propose penalizing AI impersonation.

It seems apparent that propaganda accounts from foreign entities on X will use every tool available to get eyes on their content, perhaps expecting Musk’s platform to be the slowest to police them. According to The Post, some of the X accounts spreading propaganda are using what appears to be AI-generated images of Biden and Donald Trump to garner tens of thousands of views on posts.

It’s possible that X will start tightening up on content moderation as elections draw closer. Yesterday, X joined Amazon, Google, Meta, OpenAI, TikTok, and other Big Tech companies in signing an agreement to fight “deceptive use of AI” during 2024 elections. Among the top goals identified in the “AI Elections accord” are identifying where propaganda originates, detecting how propaganda spreads across platforms, and “undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing” with propaganda.



Apple disables iPhone web apps in EU, says it’s too hard to comply with rules

Digital Markets Act —

Apple says it can’t secure home-screen web apps with third-party browser engines.


Apple is removing the ability to install home screen web apps from iPhones and iPads in Europe when iOS 17.4 comes out, saying it’s too hard to keep offering the feature under the European Union’s new Digital Markets Act (DMA). Apple is required to comply with the law by March 6.

Apple said the change is necessitated by a requirement to let developers “use alternative browser engines—other than WebKit—for dedicated browser apps and apps providing in-app browsing experiences in the EU.” Apple explained its stance in a developer Q&A under the heading, “Why don’t users in the EU have access to Home Screen web apps?” It says:

Addressing the complex security and privacy concerns associated with web apps using alternative browser engines would require building an entirely new integration architecture that does not currently exist in iOS and was not practical to undertake given the other demands of the DMA and the very low user adoption of Home Screen web apps. And so, to comply with the DMA’s requirements, we had to remove the Home Screen web apps feature in the EU.

It will still be possible to add website bookmarks to iPhone and iPad home screens, but those bookmarks would take the user to the web browser instead of a separate web app. The change was recently rolled out to beta versions of iOS 17.4.
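The distinction matters to web developers: a page can tell whether it was launched as a standalone home-screen web app or in an ordinary browser tab via the standard `display-mode` media query. A minimal sketch (the helper function and its return strings are our own illustration, not an Apple API):

```javascript
// Classifies how a page was launched, given the boolean result of
// window.matchMedia('(display-mode: standalone)').matches.
// Kept as a pure function so the logic is testable outside a browser.
function launchContext(isStandalone) {
  return isStandalone ? "home-screen web app" : "browser tab (bookmark)";
}

// In a real page (browser only), you would call it like this:
// const mode = launchContext(
//   window.matchMedia("(display-mode: standalone)").matches
// );
```

Under iOS 17.4 in the EU, pages added to the home screen would report the browser-tab case, since the bookmark opens in the default browser rather than a standalone container.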

The Digital Markets Act targets “gatekeepers” of certain technologies such as operating systems, browsers, and search engines. It requires gatekeepers to let third parties interoperate with the gatekeepers’ own services, and prohibits them from favoring their own services at the expense of competitors. As 9to5Mac notes, allowing home screen web apps with Safari but not third-party browser engines might cause Apple to violate the rules.

Apple warns of “malicious web apps”

As Apple explains, iOS “has traditionally provided support for Home Screen web apps by building directly on WebKit and its security architecture. That integration means Home Screen web apps are managed to align with the security and privacy model for native apps on iOS, including isolation of storage and enforcement of system prompts to access privacy impacting capabilities on a per-site basis.”

Apple said it won’t be able to guarantee this isolation once alternative browser engines are supported. “Without this type of isolation and enforcement, malicious web apps could read data from other web apps and recapture their permissions to gain access to a user’s camera, microphone or location without a user’s consent. Browsers also could install web apps on the system without a user’s awareness and consent,” Apple’s FAQ said.

Despite the change, Apple said that “EU users will be able to continue accessing websites directly from their Home Screen through a bookmark with minimal impact to their functionality.”

Apple previously announced that its DMA compliance will bring sideloading to Europe, allowing developers to offer iOS apps from stores other than Apple’s official App Store.

Browser choice, security requirements

One browser-related change will be immediately obvious to EU users once they install the new iOS version. “When users in the EU first open Safari on iOS 17.4, they’ll be prompted to choose their default browser and presented with a list of the main web browsers available in their market to select as their default browser,” Apple’s developer FAQ said.

Apple said it had to prepare carefully for the requirement to let developers use alternative browser engines because browser engines “are constantly exposed to untrusted and potentially malicious content and have visibility into sensitive user data,” making them “one of the most common attack vectors for malicious actors.”

Apple said it is requiring developers who use alternative browser engines to meet certain security standards:

To help keep users safe online, Apple will only authorize developers to implement alternative browser engines after meeting specific criteria and committing to a number of ongoing privacy and security requirements, including timely security updates to address emerging threats and vulnerabilities. Apple will provide authorized developers of dedicated browser apps access to security mitigations and capabilities to enable them to build secure browser engines, and access features like passkeys for secure user login, multiprocess system capabilities to improve security and stability, web content sandboxes that combat evolving security threats, and more.

Overall, Apple said its DMA preparations have involved “an enormous amount of engineering work to add new functionality and capabilities for developers and users in the European Union—including more than 600 new APIs and a wide range of developer tools.”
