Policy

Court blocks $1 billion copyright ruling that punished ISP for its users’ piracy

A federal appeals court today overturned a $1 billion piracy verdict that a jury handed down against cable Internet service provider Cox Communications in 2019. Judges rejected Sony’s claim that Cox profited directly from copyright infringement committed by users of Cox’s cable broadband network.

Appeals court judges didn’t let Cox off the hook entirely, but they vacated the damages award and ordered a new damages trial, which will presumably result in a significantly smaller amount to be paid to Sony and other copyright holders. Universal and Warner are also plaintiffs in the case.

“We affirm the jury’s finding of willful contributory infringement,” said a unanimous decision by a three-judge panel at the US Court of Appeals for the 4th Circuit. “But we reverse the vicarious liability verdict and remand for a new trial on damages because Cox did not profit from its subscribers’ acts of infringement, a legal prerequisite for vicarious liability.”

If the correct legal standard had been used in the district court, “no reasonable jury could find that Cox received a direct financial benefit from its subscribers’ infringement of Plaintiffs’ copyrights,” judges wrote.

The case began when Sony and other music copyright holders sued Cox, claiming that it didn’t adequately fight piracy on its network and failed to terminate repeat infringers. A US District Court jury in the Eastern District of Virginia found the ISP liable for infringement of 10,017 copyrighted works.

Copyright owners want ISPs to disconnect users

Cox’s appeal was supported by advocacy groups concerned that the big-money judgment could force ISPs to disconnect more Internet users based merely on accusations of copyright infringement. Groups such as the Electronic Frontier Foundation also called the ruling legally flawed.

“When these music companies sued Cox Communications, an ISP, the court got the law wrong,” the EFF wrote in 2021. “It effectively decided that the only way for an ISP to avoid being liable for infringement by its users is to terminate a household or business’s account after a small number of accusations—perhaps only two. The court also allowed a damages formula that can lead to nearly unlimited damages, with no relationship to any actual harm suffered. If not overturned, this decision will lead to an untold number of people losing vital Internet access as ISPs start to cut off more and more customers to avoid massive damages.”

In today’s 4th Circuit ruling, appeals court judges wrote that “Sony failed, as a matter of law, to prove that Cox profits directly from its subscribers’ copyright infringement.”

A defendant may be vicariously liable for a third party’s copyright infringement if it profits directly from it and is in a position to supervise the infringer, the ruling said. Cox argued that it doesn’t profit directly from infringement because it receives the same monthly fee from subscribers whether they illegally download copyrighted files or not, the ruling noted.

The question in this type of case is whether there is a causal relationship between the infringement and the financial benefit. “If copyright infringement draws customers to the defendant’s service or incentivizes them to pay more for their service, that financial benefit may be profit from infringement. But in every case, the financial benefit to the defendant must flow directly from the third party’s acts of infringement to establish vicarious liability,” the court said.

Musk claims Neuralink patient doing OK with implant, can move mouse with brain

Neuralink brain implant — Medical ethicists alarmed by Musk being “sole source of information” on patient.

A Neuralink implant.

Neuralink co-founder Elon Musk said the first human to be implanted with the company’s brain chip is now able to move a mouse cursor just by thinking.

“Progress is good, and the patient seems to have made a full recovery, with no ill effects that we are aware of. Patient is able to move a mouse around the screen by just thinking,” Musk said Monday during an X Spaces event, according to Reuters.

Musk’s update came a few weeks after he announced that Neuralink implanted a chip into the human. The previous update was also made on X, the Musk-owned social network formerly named Twitter.

Musk reportedly said during yesterday’s chat, “We’re trying to get as many button presses as possible from thinking. So that’s what we’re currently working on is: can you get left mouse, right mouse, mouse down, mouse up… We want to have more than just two buttons.”

Neuralink itself doesn’t seem to have issued any statement on the patient’s progress. We contacted the company today and will update this article if we get a response.

“Basic ethical standards” not met

Neuralink’s method of releasing information was criticized last week by Arthur Caplan, a bioethics professor and head of the Division of Medical Ethics at NYU Grossman School of Medicine, and Jonathan Moreno, a University of Pennsylvania medical ethics professor.

“Science by press release, while increasingly common, is not science,” Caplan and Moreno wrote in an essay published by the nonprofit Hastings Center. “When the person paying for a human experiment with a huge financial stake in the outcome is the sole source of information, basic ethical standards have not been met.”

Caplan and Moreno acknowledged that Neuralink and Musk seem to be “in the clear” legally:

Assuming that some brain-computer interface device was indeed implanted in some patient with severe paralysis by some surgeons somewhere, it would be reasonable to expect some formal reporting about the details of an unprecedented experiment involving a vulnerable person. But unlike drug studies in which there are phases that must be registered in a public database, the Food and Drug Administration does not require reporting of early feasibility studies of devices. From a legal standpoint Musk’s company is in the clear, a fact that surely did not escape the tactical notice of his company’s lawyers.

But they argue that opening “the brain of a living human being to insert a device” should have been accompanied with more public detail. There is an ethical obligation “to avoid the risk of giving false hope to countless thousands of people with serious neurological disabilities,” they wrote.

A brain implant could have complications that leave a patient in worse condition, the ethics professors noted. “We are not even told what plans there are to remove the device if things go wrong or the subject simply wants to stop,” Caplan and Moreno wrote. “Nor do we know the findings of animal research that justified beginning a first-in-human experiment at this time, especially since it is not lifesaving research.”

Clinical trial still to come

Neuralink has been criticized for alleged mistreatment of animals in research and was reportedly fined $2,480 for violating US Department of Transportation rules on the movement of hazardous materials after inspections of company facilities last year.

People “should continue to be skeptical of the safety and functionality of any device produced by Neuralink,” the nonprofit Physicians Committee for Responsible Medicine said after last month’s announcement of the first implant.

“The Physicians Committee continues to urge Elon Musk and Neuralink to shift to developing a noninvasive brain-computer interface,” the group said. “Researchers elsewhere have already made progress to improve patient health using such noninvasive methods, which do not come with the risk of surgical complications, infections, or additional operations to repair malfunctioning implants.”

In May 2023, Neuralink said it obtained Food and Drug Administration approval for clinical trials. The company’s previous attempt to gain approval was reportedly denied by the FDA over safety concerns and other “deficiencies.”

In September, the company said it was recruiting volunteers, specifically people with quadriplegia due to cervical spinal cord injury or amyotrophic lateral sclerosis. Neuralink said the first human clinical trial for PRIME (Precise Robotically Implanted Brain-Computer Interface) will evaluate the safety of its implant and surgical robot, “and assess the initial functionality of our BCI [brain-computer interface] for enabling people with paralysis to control external devices with their thoughts.”

EU accuses TikTok of failing to stop kids pretending to be adults

Getting TikTok’s priorities straight — TikTok becomes the second platform suspected of Digital Services Act breaches.

The European Commission (EC) is concerned that TikTok isn’t doing enough to protect kids, alleging that the short-video app may be sending kids down rabbit holes of harmful content while making it easy for kids to pretend to be adults and avoid the protective content filters that do exist.

The allegations came Monday when the EC announced a formal investigation into how TikTok may be breaching the Digital Services Act (DSA) “in areas linked to the protection of minors, advertising transparency, data access for researchers, as well as the risk management of addictive design and harmful content.”

“We must spare no effort to protect our children,” Thierry Breton, European Commissioner for Internal Market, said in the press release, reiterating that the “protection of minors is a top enforcement priority for the DSA.”

This makes TikTok the second platform investigated for possible DSA breaches after X (aka Twitter) came under fire last December. Both are being scrutinized after submitting transparency reports in September that the EC said failed to satisfy the DSA’s strict standards in predictable areas, such as advertising transparency and data access for researchers.

But while X is additionally being investigated over alleged dark patterns and disinformation—following accusations last October that X wasn’t stopping the spread of Israel/Hamas disinformation—it’s TikTok’s young user base that appears to be the focus of the EC’s probe into its platform.

“As a platform that reaches millions of children and teenagers, TikTok must fully comply with the DSA and has a particular role to play in the protection of minors online,” Breton said. “We are launching this formal infringement proceeding today to ensure that proportionate action is taken to protect the physical and emotional well-being of young Europeans.”

Likely over the coming months, the EC will request more information from TikTok, picking apart its DSA transparency report. The probe could require interviews with TikTok staff or inspections of TikTok’s offices.

Upon concluding its investigation, the EC could require TikTok to take interim measures to fix any issues that are flagged. The Commission could also make a decision regarding non-compliance, potentially subjecting TikTok to fines of up to 6 percent of its global turnover.

An EC press officer, Thomas Regnier, told Ars that the Commission suspected that TikTok “has not diligently conducted” risk assessments to properly maintain mitigation efforts protecting “the physical and mental well-being of their users, and the rights of the child.”

In particular, its algorithm may risk “stimulating addictive behavior,” and its recommender systems “might drag its users, in particular minors and vulnerable users, into a so-called ‘rabbit hole’ of repetitive harmful content,” Regnier told Ars. Further, TikTok’s age verification system may be subpar, with the EU alleging that TikTok perhaps “failed to diligently assess the risk of 13-17-year-olds pretending to be adults when accessing TikTok,” Regnier said.

To better protect TikTok’s young users, the EU’s investigation could force TikTok to update its age-verification system and overhaul its default privacy, safety, and security settings for minors.

“In particular, the Commission suspects that the default settings of TikTok’s recommender systems do not ensure a high level of privacy, security, and safety of minors,” Regnier said. “The Commission also suspects that the default privacy settings that TikTok has for 16-17-year-olds are not the highest by default, which would not be compliant with the DSA, and that push notifications are, by default, not switched off for minors, which could negatively impact children’s safety.”

TikTok could avoid steep fines by committing to remedies recommended by the EC at the conclusion of its investigation.

Regnier told Ars that the EC does not comment on ongoing investigations, but its probe into X has spanned three months so far. Because the DSA does not provide any deadlines that may speed up these kinds of enforcement proceedings, ultimately, the duration of both investigations will depend on how much “the company concerned cooperates,” the EU’s press release said.

A TikTok spokesperson told Ars that TikTok “would continue to work with experts and the industry to keep young people on its platform safe,” confirming that the company “looked forward to explaining this work in detail to the European Commission.”

“TikTok has pioneered features and settings to protect teens and keep under-13s off the platform, issues the whole industry is grappling with,” TikTok’s spokesperson said.

All online platforms are now required to comply with the DSA, but enforcement on TikTok began near the end of July 2023. A TikTok press release last August promised that the platform would be “embracing” the DSA. But in its transparency report, submitted the next month, TikTok acknowledged that the report only covered “one month of metrics” and may not satisfy DSA standards.

“We still have more work to do,” TikTok’s report said, promising that “we are working hard to address these points ahead of our next DSA transparency report.”

Report: Apple is about to be fined €500 million by the EU over music streaming

Competition concerns — EC accuses Apple of abusing its market position after complaint by Spotify.

Brussels is to impose its first-ever fine on tech giant Apple for allegedly breaking EU law over access to its music streaming services, according to five people with direct knowledge of the long-running investigation.

The fine, which is in the region of €500 million and is expected to be announced early next month, is the culmination of a European Commission antitrust probe into whether Apple has used its own platform to favor its services over those of competitors.

The probe is investigating whether Apple blocked apps from informing iPhone users of cheaper alternatives to access music subscriptions outside the App Store. It was launched after music-streaming app Spotify made a formal complaint to regulators in 2019.

The Commission will say Apple’s actions are illegal and go against the bloc’s rules that enforce competition in the single market, the people familiar with the case told the Financial Times. It will ban Apple’s practice of blocking music services from letting users outside its App Store switch to cheaper alternatives.

Brussels will accuse Apple of abusing its powerful position and imposing anti-competitive trading practices on rivals, the people said, adding that the EU would say the tech giant’s terms were “unfair trading conditions.”

It is one of the most significant financial penalties levied by the EU on Big Tech companies. A series of fines against Google, levied over several years and amounting to about €8 billion, is being contested in court.

Apple has never previously been fined for antitrust infringements by Brussels, but the company was hit in 2020 with a 1.1 billion-euro fine in France for alleged anti-competitive behavior. The penalty was revised down to 372 million euros after an appeal.

The EU’s action against Apple will reignite the war between Brussels and Big Tech at a time when companies are being forced to show how they are complying with landmark new rules aimed at opening competition and allowing small tech rivals to thrive.

Companies that are defined as gatekeepers, including Apple, Amazon, and Google, need to fully comply with these rules under the Digital Markets Act by early next month.

The act requires these tech giants to comply with more stringent rules and will force them to allow rivals to share information about their services.

There are concerns that the rules are not enabling competition as fast as some had hoped, although Brussels has insisted that changes require time.

Brussels formally charged Apple in the anti-competitive probe in 2021. The commission narrowed the scope of the investigation last year and abandoned a charge of pushing developers to use its own in-app payment system.

Apple last month announced changes to its iOS mobile software, App Store, and Safari browser in efforts to appease Brussels after long resisting such steps. But Spotify said at the time that Apple’s compliance was a “complete and total farce.”

Apple responded by saying that “the changes we’re sharing for apps in the European Union give developers choice—with new options to distribute iOS apps and process payments.”

In a separate antitrust case, Brussels is consulting with Apple’s rivals over the tech giant’s concessions to appease worries that it is blocking financial groups from its Apple Pay mobile system.

The timing of the Commission’s announcement has not yet been fixed, but it will not change the direction of the antitrust investigation, the people with knowledge of the situation said.

Apple, which can appeal to the EU courts, declined to comment on the forthcoming ruling but pointed to a statement a year ago when it said it was “pleased” the Commission had narrowed the charges and said it would address concerns while promoting competition.

It added: “The App Store has helped Spotify become the top music streaming service across Europe and we hope the European Commission will end its pursuit of a complaint that has no merit.”

The Commission—the executive body of the EU—declined to comment.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Elon Musk’s X allows China-based propaganda banned on other platforms

Rinse-wash-repeat. — X accused of overlooking propaganda flagged by Meta and criminal prosecutors.

Lax content moderation on X (aka Twitter) has disrupted coordinated efforts between social media companies and law enforcement to tamp down on “propaganda accounts controlled by foreign entities aiming to influence US politics,” The Washington Post reported.

Now propaganda is “flourishing” on X, The Post said, while other social media companies are stuck in endless cycles, watching some of the propaganda that they block proliferate on X, then inevitably spread back to their platforms.

Meta, Google, and then-Twitter began coordinating takedown efforts with law enforcement and disinformation researchers after Russian-backed influence campaigns manipulated their platforms in hopes of swaying the 2016 US presidential election.

The next year, all three companies promised Congress to work tirelessly to stop Russian-backed propaganda from spreading on their platforms. The companies created explicit election misinformation policies and began meeting biweekly to compare notes on propaganda networks each platform uncovered, according to The Post’s interviews with anonymous sources who participated in these meetings.

However, after Elon Musk purchased Twitter and rebranded the company as X, his company withdrew from the alliance in May 2023.

Sources told The Post that the last X meeting attendee was Irish intelligence expert Aaron Rodericks—who was allegedly disciplined for liking an X post calling Musk “a dipshit.” Rodericks was subsequently laid off when Musk dismissed the entire election integrity team last September, and after that, X apparently ditched the biweekly meeting entirely and “just kind of disappeared,” a source told The Post.

In 2023, for example, Meta flagged 150 “artificial influence accounts” identified on its platform, of which “136 were still present on X as of Thursday evening,” according to The Post’s analysis. X’s seeming oversight extends to all but eight of the 123 “deceptive China-based campaigns” connected to accounts that Meta flagged last May, August, and December, The Post reported.

The Post’s report also provided an exclusive analysis from the Stanford Internet Observatory (SIO), which found that 86 propaganda accounts that Meta flagged last November “are still active on X.”

The majority of these accounts—81—were China-based accounts posing as Americans, SIO reported. These accounts frequently ripped photos from Americans’ LinkedIn profiles, then changed the real Americans’ names while posting about both China and US politics, as well as people often trending on X, such as Musk and Joe Biden.

Meta has warned that China-based influence campaigns are “multiplying,” The Post noted, while X’s standards remain seemingly too relaxed. Even accounts linked to criminal investigations remain active on X. One “account that is accused of being run by the Chinese Ministry of Public Security,” The Post reported, remains on X despite its posts being cited by US prosecutors in a criminal complaint.

Prosecutors connected that account to “dozens” of X accounts attempting to “shape public perceptions” about the Chinese Communist Party, the Chinese government, and other world leaders. The accounts also comment on hot-button topics like the fentanyl problem or police brutality, seemingly to convey “a sense of dismay over the state of America without any clear partisan bent,” Elise Thomas, an analyst for a London nonprofit called the Institute for Strategic Dialogue, told The Post.

Some X accounts flagged by The Post had more than 1 million followers. Five have paid X for verification, suggesting that their disinformation campaigns—targeting hashtags to confound discourse on US politics—are seemingly being boosted by X.

SIO technical research manager Renée DiResta criticized X’s decision to stop coordinating with other platforms.

“The presence of these accounts reinforces the fact that state actors continue to try to influence US politics by masquerading as media and fellow Americans,” DiResta told The Post. “Ahead of the 2022 midterms, researchers and platform integrity teams were collaborating to disrupt foreign influence efforts. That collaboration seems to have ground to a halt, Twitter does not seem to be addressing even networks identified by its peers, and that’s not great.”

Musk shut down X’s election integrity team because he claimed that the team was actually “undermining” election integrity. But analysts are bracing for floods of misinformation to sway 2024 elections, as some major platforms have removed election misinformation policies just as rapid advances in AI technologies have made misinformation spread via text, images, audio, and video harder for the average person to detect.

In one prominent example, a fake robocall used AI voice technology to pose as Biden and tell Democrats not to vote. That incident seemingly pushed the Federal Trade Commission on Thursday to propose penalizing AI impersonation.

It seems apparent that propaganda accounts from foreign entities on X will use every tool available to get eyes on their content, perhaps expecting Musk’s platform to be the slowest to police them. According to The Post, some of the X accounts spreading propaganda are using what appears to be AI-generated images of Biden and Donald Trump to garner tens of thousands of views on posts.

It’s possible that X will start tightening up on content moderation as elections draw closer. Yesterday, X joined Amazon, Google, Meta, OpenAI, TikTok, and other Big Tech companies in signing an agreement to fight “deceptive use of AI” during 2024 elections. Among the top goals identified in the “AI Elections accord” are identifying where propaganda originates, detecting how propaganda spreads across platforms, and “undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing” with propaganda.

Apple disables iPhone web apps in EU, says it’s too hard to comply with rules

Digital Markets Act — Apple says it can’t secure home-screen web apps with third-party browser engines.

Apple is removing the ability to install home screen web apps from iPhones and iPads in Europe when iOS 17.4 comes out, saying it’s too hard to keep offering the feature under the European Union’s new Digital Markets Act (DMA). Apple is required to comply with the law by March 6.

Apple said the change is necessitated by a requirement to let developers “use alternative browser engines—other than WebKit—for dedicated browser apps and apps providing in-app browsing experiences in the EU.” Apple explained its stance in a developer Q&A under the heading, “Why don’t users in the EU have access to Home Screen web apps?” It says:

Addressing the complex security and privacy concerns associated with web apps using alternative browser engines would require building an entirely new integration architecture that does not currently exist in iOS and was not practical to undertake given the other demands of the DMA and the very low user adoption of Home Screen web apps. And so, to comply with the DMA’s requirements, we had to remove the Home Screen web apps feature in the EU.

It will still be possible to add website bookmarks to iPhone and iPad home screens, but those bookmarks would take the user to the web browser instead of a separate web app. The change was recently rolled out to beta versions of iOS 17.4.

The Digital Markets Act targets “gatekeepers” of certain technologies such as operating systems, browsers, and search engines. It requires gatekeepers to let third parties interoperate with the gatekeepers’ own services, and prohibits them from favoring their own services at the expense of competitors. As 9to5Mac notes, allowing home screen web apps with Safari but not third-party browser engines might cause Apple to violate the rules.

Apple warns of “malicious web apps”

As Apple explains, iOS “has traditionally provided support for Home Screen web apps by building directly on WebKit and its security architecture. That integration means Home Screen web apps are managed to align with the security and privacy model for native apps on iOS, including isolation of storage and enforcement of system prompts to access privacy impacting capabilities on a per-site basis.”

Apple said it won’t be able to guarantee this isolation once alternative browser engines are supported. “Without this type of isolation and enforcement, malicious web apps could read data from other web apps and recapture their permissions to gain access to a user’s camera, microphone or location without a user’s consent. Browsers also could install web apps on the system without a user’s awareness and consent,” Apple’s FAQ said.

Despite the change, Apple said that “EU users will be able to continue accessing websites directly from their Home Screen through a bookmark with minimal impact to their functionality.”

Apple previously announced that its DMA compliance will bring sideloading to Europe, allowing developers to offer iOS apps from stores other than Apple’s official App Store.

Browser choice, security requirements

One browser-related change will be immediately obvious to EU users once they install the new iOS version. “When users in the EU first open Safari on iOS 17.4, they’ll be prompted to choose their default browser and presented with a list of the main web browsers available in their market to select as their default browser,” Apple’s developer FAQ said.

Apple said it had to prepare carefully for the requirement to let developers use alternative browser engines because browser engines “are constantly exposed to untrusted and potentially malicious content and have visibility into sensitive user data,” making them “one of the most common attack vectors for malicious actors.”

Apple said it is requiring developers who use alternative browser engines to meet certain security standards:

To help keep users safe online, Apple will only authorize developers to implement alternative browser engines after meeting specific criteria and committing to a number of ongoing privacy and security requirements, including timely security updates to address emerging threats and vulnerabilities. Apple will provide authorized developers of dedicated browser apps access to security mitigations and capabilities to enable them to build secure browser engines, and access features like passkeys for secure user login, multiprocess system capabilities to improve security and stability, web content sandboxes that combat evolving security threats, and more.

Overall, Apple said its DMA preparations have involved “an enormous amount of engineering work to add new functionality and capabilities for developers and users in the European Union—including more than 600 new APIs and a wide range of developer tools.”

Air Canada must honor refund policy invented by airline’s chatbot

Blame game — Air Canada appears to have quietly killed its costly chatbot support.

After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline’s bereavement travel policy.

On the day Jake Moffatt’s grandmother died, Moffatt immediately visited Air Canada’s website to book a flight from Vancouver to Toronto. Unsure of how Air Canada’s bereavement rates worked, Moffatt asked Air Canada’s chatbot to explain.

The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada’s policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot’s advice and request a refund but was shocked that the request was rejected.

Moffatt tried for months to convince Air Canada that a refund was owed, sharing a screenshot from the chatbot that clearly claimed:

If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.

Air Canada argued that because the chatbot response elsewhere linked to a page with the actual bereavement travel policy, Moffatt should have known bereavement rates could not be requested retroactively. Instead of a refund, the best Air Canada would do was to promise to update the chatbot and offer Moffatt a $200 coupon to use on a future flight.

Unhappy with this resolution, Moffatt refused the coupon and filed a small claims complaint in Canada’s Civil Resolution Tribunal.

According to Air Canada, Moffatt never should have trusted the chatbot, and the airline should not be liable for the chatbot’s misleading information; Air Canada essentially argued that “the chatbot is a separate legal entity that is responsible for its own actions,” a court order said.

Experts told the Vancouver Sun that Moffatt’s case appeared to be the first time a Canadian company tried to argue that it wasn’t liable for information provided by its chatbot.

Tribunal member Christopher Rivers, who decided the case in favor of Moffatt, called Air Canada’s defense “remarkable.”

“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot,” Rivers wrote. “It does not explain why it believes that is the case” or “why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot.”

Further, Rivers found that Moffatt had “no reason” to believe that one part of Air Canada’s website would be accurate and another would not.

Air Canada “does not explain why customers should have to double-check information found in one part of its website on another part of its website,” Rivers wrote.

In the end, Rivers ruled that Moffatt was entitled to a partial refund of $650.88 in Canadian dollars (about $482 USD) off the original fare of $1,640.36 CAD (about $1,216 USD), as well as additional damages to cover interest on the airfare and Moffatt’s tribunal fees.

Air Canada told Ars it will comply with the ruling and considers the matter closed.

Air Canada’s chatbot appears to be disabled

When Ars visited Air Canada’s website on Friday, there appeared to be no chatbot support available, suggesting that Air Canada has disabled the chatbot.

Air Canada did not respond to Ars’ request to confirm whether the chatbot is still part of the airline’s online support offerings.

Last March, Air Canada’s chief information officer Mel Crocker told the Globe and Mail that the airline had launched the chatbot as an AI “experiment.”

Initially, the chatbot was used to lighten the load on Air Canada’s call center when flights experienced unexpected delays or cancellations.

“So in the case of a snowstorm, if you have not been issued your new boarding pass yet and you just want to confirm if you have a seat available on another flight, that’s the sort of thing we can easily handle with AI,” Crocker told the Globe and Mail.

Over time, Crocker said, Air Canada hoped the chatbot would “gain the ability to resolve even more complex customer service issues,” with the airline’s ultimate goal to automate every service that did not require a “human touch.”

If Air Canada can use “technology to solve something that can be automated, we will do that,” Crocker said.

Air Canada was seemingly so invested in experimenting with AI that Crocker told the Globe and Mail that “Air Canada’s initial investment in customer service AI technology was much higher than the cost of continuing to pay workers to handle simple queries.” It was worth it, Crocker said, because “the airline believes investing in automation and machine learning technology will lower its expenses” and “fundamentally” create “a better customer experience.”

It’s now clear that for at least one person, the chatbot created a more frustrating customer experience.

Experts told the Vancouver Sun that Air Canada might have avoided liability in Moffatt’s case if its chatbot had warned customers that the information it provided might not be accurate.

Because Air Canada seemingly failed to take that step, Rivers ruled that “Air Canada did not take reasonable care to ensure its chatbot was accurate.”

“It should be obvious to Air Canada that it is responsible for all the information on its website,” Rivers wrote. “It makes no difference whether the information comes from a static page or a chatbot.”

AMC to pay $8M for allegedly violating 1988 law with use of Meta Pixel

Stream like no one is watching — Proposed settlement impacts millions using AMC apps like Shudder and AMC+.

On Thursday, AMC notified subscribers of a proposed $8.3 million settlement that provides awards to an estimated 6 million subscribers of its six streaming services: AMC+, Shudder, Acorn TV, ALLBLK, SundanceNow, and HIDIVE.

The settlement comes in response to allegations that AMC illegally shared subscribers’ viewing history with tech companies like Google, Facebook, and X (aka Twitter) in violation of the Video Privacy Protection Act (VPPA).

Passed in 1988, the VPPA prohibits AMC and other video service providers from sharing “information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” It was originally passed to protect individuals’ right to private viewing habits, after a journalist published the mostly unrevealing video rental history of a judge, Robert Bork, who had been nominated to the Supreme Court by Ronald Reagan.

The so-called “Bork Tapes” revealed little—other than that the judge frequently rented spy thrillers and British costume dramas—but lawmakers recognized that speech could be chilled by monitoring anyone’s viewing habits. While the law was born in the era of Blockbuster Video, subscribers suing AMC wrote in their amended complaint that “the importance of legislation like the VPPA in the modern era of datamining is more pronounced than ever before.”

According to subscribers suing, AMC allegedly installed tracking technologies—including the Meta Pixel, the X Tracking Pixel, and Google Tracking Technology—on its website, allowing their personally identifying information to be connected with their viewing history.

Some trackers, like the Meta Pixel, required AMC to choose what kind of activity could be tracked, and subscribers claimed that AMC had willingly opted into sharing video names and URLs with Meta, along with a Facebook ID. “Anyone” could use the Facebook ID, subscribers said, to identify the AMC subscribers “simply by entering https://www.facebook.com/[unencrypted FID]/” into a browser.

X’s ID could similarly be de-anonymized, subscribers alleged, by using tweeterid.com.

AMC “could easily program its AMC Services websites so that this information is not disclosed” to tech companies, subscribers alleged.

Denying wrongdoing, AMC has defended its use of tracking technologies but is proposing to settle with subscribers to avoid uncertain outcomes from litigation, the proposed settlement said.

A hearing to approve the proposed settlement has been scheduled for May 16.

If it’s approved, AMC has agreed to “suspend, remove, or modify operation of the Meta Pixel and other Third-Party Tracking Technologies so that use of such technologies on AMC Services will not result in AMC’s disclosure to the third-party technology companies of the specific video content requested or obtained by a specific individual.”

Google and X did not immediately respond to Ars’ request to comment. Meta declined to comment.

All registered users of AMC services who “requested or obtained video content on at least one of the six AMC services” between January 18, 2021, and January 10, 2024, are currently eligible to submit claims under the proposed settlement. The deadline to submit is April 9.

In addition to distributing the $8.3 million settlement fund among class members, subscribers will receive a free one-week digital subscription.

According to AMC’s notice to subscribers (full disclosure, I am one), AMC’s agreement to avoid sharing subscribers’ viewing histories may change if the VPPA is amended, repealed, or invalidated. If the law changes to permit sharing viewing data at the core of subscribers’ claim, AMC may resume sharing that information with tech companies.

That day could come soon if Patreon has its way. Recently, Patreon asked a federal judge to rule that the VPPA is unconstitutional.

The lawsuit against Patreon is similar, alleging that Patreon violated the VPPA by using the Meta Pixel to share video views on its platform with Meta.

Patreon has argued that the VPPA is unconstitutional because it chills speech. Patreon said that the law was enacted “for the express purpose of silencing disclosures about political figures and their video-watching, an issue of undisputed continuing public interest and concern.”

According to Patreon, the VPPA narrowly prohibits video service providers from sharing video titles, but not from sharing information that people may wish to keep private, such as “the genres, performers, directors, political views, sexual content, and every other detail of pre-recorded video that those consumers watch.”

Therefore, Patreon argued, the VPPA “restrains speech” while “doing little if anything to protect privacy” and never protecting privacy “by the least restrictive means.”

That lawsuit remains ongoing, but Patreon’s position is likely to be met with opposition from experts who typically also defend freedom of speech. Experts at the Electronic Privacy Information Center, like AMC subscribers suing, consider the VPPA one of America’s “strongest protections of consumer privacy against a specific form of data collection.” And the Electronic Frontier Foundation (EFF) has already moved to convince the court to reject Patreon’s claim, describing the VPPA in a blog as an “essential” privacy protection.

“EFF is second to none in fighting for everyone’s First Amendment rights in court,” EFF’s blog said. “But Patreon’s First Amendment argument is wrong and misguided. The company seeks to elevate its speech interests over those of Internet users who benefit from the VPPA’s protections.”

Musk’s X sold checkmarks to Hezbollah and other terrorist groups, report says

A watchdog group’s investigation found that terrorist group Hezbollah and other US-sanctioned entities have accounts with paid checkmarks on X, the Elon Musk-owned social network that still resides at the twitter.com domain.

The Tech Transparency Project (TTP), a nonprofit that is critical of Big Tech companies, said in a report today that “X, the platform formerly known as Twitter, is providing premium, paid services to accounts for two leaders of a US-designated terrorist group and several other organizations sanctioned by the US government.”

After buying Twitter for $44 billion, Musk started charging users for checkmarks that were previously intended to verify that an account was notable and authentic. “Along with the checkmarks, which are intended to confer legitimacy, X promises various perks for premium accounts, including the ability to post longer text and videos and greater visibility for some posts,” the Tech Transparency Project report noted.

The Tech Transparency Project suggests that X may be violating US sanctions. “The accounts identified by TTP include two that apparently belong to the top leaders of Lebanon-based Hezbollah and others belonging to Iranian and Russian state-run media,” the report said. “The fact that X requires users to pay a monthly or annual fee for premium service suggests that X is engaging in financial transactions with these accounts, a potential violation of US sanctions.”

Some of the accounts were verified before Musk bought Twitter, but verification was a free service at the time. Musk’s decision to charge for checkmarks means that X is “providing a premium, paid service to sanctioned entities,” which may raise “new legal issues,” the Tech Transparency Project said.

Report details 28 checkmarked accounts

Musk’s X charges $1,000 a month for a Verified Organizations subscription and last month added a basic tier for $200 a month. For individuals, the X Premium tiers that come with checkmarks cost $8 or $16 a month.

It’s possible for US companies to receive a license from the government to engage in certain transactions with sanctioned entities, but it doesn’t seem likely that X has such a license. X’s rules explicitly prohibit users from purchasing X Premium “if you are a person with whom X is not permitted to have dealings under US and any other applicable economic sanctions and trade compliance law.”

In all, the Tech Transparency Project said it found 28 “verified” accounts tied to sanctioned individuals or entities. These include individuals and groups listed by the US Treasury Department’s Office of Foreign Assets Control (OFAC) as “Specially Designated Nationals.”

“Of the 28 X accounts identified by TTP, 18 show they got verified after April 1, 2023, when X began requiring accounts to subscribe to paid plans to get a checkmark. The other 10 were legacy verified accounts, which are required to pay for a subscription to retain their checkmarks,” the group wrote, adding that it “found advertising in the replies to posts in 19 of the 28 accounts.”

We contacted X today and will update this article if we get a comment. Our email to press@x.com triggered the standard auto-reply from press+noreply@twitter.com that says, “Busy now, please check back later.”

Update at 4:28 pm ET: After this article was published, X issued the following statement: “X has a robust and secure approach in place for our monetization features, adhering to legal obligations, along with independent screening by our payments providers. Several of the accounts listed in the Tech Transparency Report are not directly named on sanction lists, while some others may have visible account check marks without receiving any services that would be subject to sanctions. Our teams have reviewed the report and will take action if necessary. We’re always committed to ensuring that we maintain a safe, secure and compliant platform.”

Backdoors that let cops decrypt messages violate human rights, EU court says

Building of the European Court of Human Rights in Strasbourg (France).

The European Court of Human Rights (ECHR) has ruled that weakening end-to-end encryption disproportionately risks undermining human rights. The international court’s decision could potentially disrupt the European Commission’s proposed plans to require email and messaging service providers to create backdoors that would allow law enforcement to easily decrypt users’ messages.

This ruling came after Russia’s intelligence agency, the Federal Security Service (FSB), began requiring Telegram to share users’ encrypted messages to deter “terrorism-related activities” in 2017, ECHR’s ruling said. A Russian Telegram user alleged that the FSB’s requirement violated his rights to a private life and private communications, as well as all Telegram users’ rights.

The Telegram user was apparently disturbed, moving to block the required disclosures after Telegram refused to comply with an FSB order to decrypt the messages of six users suspected of terrorism. According to Telegram, “it was technically impossible to provide the authorities with encryption keys associated with specific users,” and therefore, “any disclosure of encryption keys” would affect the “privacy of the correspondence of all Telegram users,” the ECHR’s ruling said.

For refusing to comply, Telegram was fined, and one court even ordered the app to be blocked in Russia, while dozens of Telegram users rallied to challenge the order and keep Telegram services available in Russia. Ultimately, the users’ court challenges failed, sending the case before the ECHR while Telegram service remained tenuously available in Russia.

The Russian government told the ECHR that “allegations that the security services had access to the communications of all users” were “unsubstantiated” because the request only concerned six Telegram users.

The government further argued that Telegram providing encryption keys to the FSB “did not mean that the information necessary to decrypt encrypted electronic communications would become available to its entire staff.” Essentially, the government believed that FSB staff’s “duty of discretion” would prevent any intrusion on private life for Telegram users as described in the ECHR complaint.

Seemingly most critically, the government told the ECHR that any intrusion on private lives resulting from decrypting messages was “necessary” to combat terrorism in a democratic society. To back up this claim, the government pointed to a 2017 terrorist attack that was “coordinated from abroad through secret chats via Telegram.” The government claimed that a second terrorist attack that year was prevented after the government discovered it was being coordinated through Telegram chats.

However, privacy advocates backed up Telegram’s claims that the messaging service couldn’t technically build a backdoor for governments without impacting all of its users. They also argued that the threat of mass surveillance could be enough to infringe on human rights. The European Information Society Institute (EISI) and Privacy International told the ECHR that even if governments never used required disclosures to mass surveil citizens, it could have a chilling effect on users’ speech or prompt service providers to issue radical software updates weakening encryption for all users.

In the end, the ECHR concluded that the Telegram user’s rights had been violated, partly due to privacy advocates and international reports that corroborated Telegram’s position that complying with the FSB’s disclosure order would force changes impacting all its users.

The “confidentiality of communications is an essential element of the right to respect for private life and correspondence,” the ECHR’s ruling said. Thus, requiring messages to be decrypted by law enforcement “cannot be regarded as necessary in a democratic society.”

Martin Husovec, a law professor who helped to draft EISI’s testimony, told Ars that EISI is “obviously pleased that the Court has recognized the value of encryption and agreed with us that state-imposed weakening of encryption is a form of indiscriminate surveillance because it affects everyone’s privacy.”

Judge rejects most ChatGPT copyright claims from book authors

Insufficient evidence — OpenAI plans to defeat authors’ remaining claim at a “later stage” of the case.

A US district judge in California has largely sided with OpenAI, dismissing the majority of claims raised by authors alleging that large language models powering ChatGPT were illegally trained on pirated copies of their books without their permission.

By allegedly repackaging original works as ChatGPT outputs, authors alleged, OpenAI’s most popular chatbot was just a high-tech “grift” that seemingly violated copyright laws, as well as state laws preventing unfair business practices and unjust enrichment.

According to Judge Araceli Martínez-Olguín, authors behind three separate lawsuits—including Sarah Silverman, Michael Chabon, and Paul Tremblay—have failed to provide evidence supporting any of their claims except for direct copyright infringement.

OpenAI had argued as much in its motion to dismiss these cases, filed last August. At that time, OpenAI said that it expected to beat the direct infringement claim at a “later stage” of the proceedings.

Among copyright claims tossed by Martínez-Olguín were accusations of vicarious copyright infringement. Perhaps most significantly, Martínez-Olguín agreed with OpenAI that the authors’ allegation that “every” ChatGPT output “is an infringing derivative work” is “insufficient” to allege vicarious infringement, which requires evidence that ChatGPT outputs are “substantially similar” or “similar at all” to authors’ books.

“Plaintiffs here have not alleged that the ChatGPT outputs contain direct copies of the copyrighted books,” Martínez-Olguín wrote. “Because they fail to allege direct copying, they must show a substantial similarity between the outputs and the copyrighted materials.”

Authors also failed to convince Martínez-Olguín that OpenAI violated the Digital Millennium Copyright Act (DMCA) by allegedly removing copyright management information (CMI)—such as author names, titles of works, and terms and conditions for use of the work—from training data.

This claim failed because authors cited “no facts” that OpenAI intentionally removed the CMI or built the training process to omit CMI, Martínez-Olguín wrote. Further, the authors cited examples of ChatGPT referencing their names, which would seem to suggest that some CMI remains in the training data.

Some of the remaining claims were dependent on copyright claims to survive, Martínez-Olguín wrote.

As for the claim that OpenAI caused economic injury by unfairly repurposing authors’ works, the judge said that even if authors could show evidence of a DMCA violation, they could only speculate about what injury was caused.

Similarly, allegations of “fraudulent” unfair conduct—accusing OpenAI of “deceptively” designing ChatGPT to produce outputs that omit CMI—”rest on a violation of the DMCA,” Martínez-Olguín wrote.

The only claim under California’s unfair competition law that was allowed to proceed alleged that OpenAI used copyrighted works to train ChatGPT without authors’ permission. Because the state law broadly defines what’s considered “unfair,” Martínez-Olguín said that it’s possible that OpenAI’s use of the training data “may constitute an unfair practice.”

Remaining claims of negligence and unjust enrichment failed, Martínez-Olguín wrote, because authors only alleged intentional acts and did not explain how OpenAI “received and unjustly retained a benefit” from training ChatGPT on their works.

Authors have been ordered to consolidate their complaints and have until March 13 to amend arguments and continue pursuing any of the dismissed claims.

To shore up the tossed copyright claims, authors would likely need to provide examples of ChatGPT outputs that are similar to their works, as well as evidence of OpenAI intentionally removing CMI to “induce, enable, facilitate, or conceal infringement,” Martínez-Olguín wrote.

Ars could not immediately reach the authors’ lawyers or OpenAI for comment.

As authors likely prepare to continue fighting OpenAI, the US Copyright Office has been fielding public input before releasing guidance that could one day help rights holders pursue legal claims and may eventually require works to be licensed from copyright owners for use as training materials. Among the thorniest questions is whether AI tools like ChatGPT should be considered authors when their outputs are included in creative works.

While the Copyright Office prepares to release three reports this year “revealing its position on copyright law in relation to AI,” according to The New York Times, OpenAI recently made it clear that it does not plan to stop referencing copyrighted works in its training data. Last month, OpenAI said it would be “impossible” to train AI models without copyrighted materials, because “copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents.”

According to OpenAI, it doesn’t just need old copyrighted materials; it needs current copyrighted materials to ensure that the outputs of chatbots and other AI tools “meet the needs of today’s citizens.”

Rights holders will likely be bracing throughout this confusing time, waiting for the Copyright Office’s reports. But once there is clarity, those reports could “be hugely consequential, weighing heavily in courts, as well as with lawmakers and regulators,” The Times reported.

Cryptocurrency maker sues former Ars reporter for writing about fraud lawsuit

Image from Bitcoin Latinum’s website

The cryptocurrency firm Bitcoin Latinum has sued journalists at Forbes and Poker.org, claiming that the writers made false and defamatory statements in articles that described securities fraud lawsuits filed against the crypto firm.

Bitcoin Latinum and its founder, Donald Basile, filed a libel lawsuit against Forbes reporter Cyrus Farivar and another libel lawsuit against Poker.org and its reporter Haley Hintze. (Farivar was a long-time Ars Technica reporter.)

The lawsuits are surprising because the Forbes article and the Poker.org article, both published in 2022, are very much like thousands of other news stories that describe allegations in a lawsuit. In both articles, it is clear that the allegations come from the filer of the lawsuit and not from the author of the article.

But both of Bitcoin Latinum’s lawsuits, which were filed last week in Delaware’s Court of Chancery, demand that the articles be retracted. They contain the following claim in exactly the same words:

The Article contains statements which insinuate and lead the reader to believe that Assofi’s allegations against Plaintiff Latinum and Plaintiff Basile are factual and correct, and which statements are not couched as the opinion of the author, but rather, are presented as fact, and therefore do not fall under any applicable privilege.

“Assofi’s allegations” are those made in a lawsuit filed against Bitcoin Latinum and Basile in November 2022. That lawsuit from Arshad Assofi, who said he lost over $15 million investing in worthless tokens, alleged that Bitcoin Latinum “is a scam” and accused the defendants of securities fraud and other violations. Bitcoin Latinum calls itself “the future of Bitcoin.”

Lawsuit cites wrong article

It’s especially surprising that Bitcoin Latinum’s lawsuit against Hintze contains the statement about “Assofi’s allegations” because the Hintze article cited in the lawsuit never mentions Assofi. The Hintze article on Poker.org is about a different lawsuit from different plaintiffs who also alleged securities fraud.

In fact, the Hintze article was published in February 2022, 10 months before the Assofi lawsuit was filed. TechDirt’s Mike Masnick pointed out this error in an article yesterday:

It appears that Latinum’s lawyer actually meant to sue over a different Poker.org article, that was published in November about the Assofi lawsuit, but repeatedly claims that the article was published on February 5, 2022, rather than the actual publication date of the article she meant, which was November 21, 2022. Also, Latinum’s lawyer included the February 5th article as the exhibit, rather than the November 21st article. Such attention to detail to talk about the wrong article and include the wrong article as an exhibit. Top notch lawyering.

Masnick also points out that the statute of limitations is two years, and the lawsuit against Hintze was filed more than two years after her February 2022 article.

In libel cases, journalists may defend themselves with the “fair report privilege.” This applies to accurate reporting on official government matters, including court proceedings.

The lawyer for Bitcoin Latinum in the Farivar and Hintze cases is Holly Whitney, who specializes in estate planning and probate cases. We contacted Whitney and Bitcoin Latinum about the lawsuits today and will update this article if we get a response.
