Policy

Judge mocks X for “vapid” argument in Musk’s hate speech lawsuit

It looks like Elon Musk may lose X’s lawsuit against hate speech researchers who encouraged a major brand boycott after flagging ads appearing next to extremist content on X, the social media site formerly known as Twitter.

X is trying to argue that the Center for Countering Digital Hate (CCDH) violated the site’s terms of service and illegally accessed non-public data to conduct its reporting, allegedly posing a security risk for X. The boycott, X alleged, cost the company tens of millions of dollars by spooking advertisers, while X contends that the CCDH’s reporting is misleading and ads are rarely served on extremist content.

But at a hearing Thursday, US District Judge Charles Breyer told the CCDH that he would consider dismissing X’s lawsuit, repeatedly appearing to mock X’s decision to file it in the first place.

Seemingly skeptical of X’s entire argument, Breyer appeared particularly focused on how X intended to prove that the CCDH could have known that its reporting would trigger such substantial financial losses, as the lawsuit hinges on whether the alleged damages were “foreseeable,” NPR reported.

X’s lawyer, Jon Hawk, argued that when the CCDH joined Twitter in 2019, the group agreed to terms of service that noted those terms could change. So when Musk purchased Twitter and updated rules to reinstate accounts spreading hate speech, the CCDH should have been able to foresee those changes in terms and therefore anticipate that any reporting on spikes in hate speech would cause financial losses.

According to CNN, this is where Breyer became frustrated, telling Hawk, “I’m trying to figure out in my mind how that’s possibly true, because I don’t think it is.”

“What you have to tell me is, why is it foreseeable?” Breyer said. “That they should have understood that, at the time they entered the terms of service, that Twitter would then change its policy and allow this type of material to be disseminated?

“That, of course, reduces foreseeability to one of the most vapid extensions of law I’ve ever heard,” Breyer added. “‘Oh, what’s foreseeable is that things can change, and therefore, if there’s a change, it’s ‘foreseeable.’ I mean, that argument is truly remarkable.”

According to NPR, Breyer suggested that X was trying to “shoehorn” its legal theory by using language from a breach of contract claim, when what the company actually appeared to be alleging was defamation.

“You could’ve brought a defamation case; you didn’t bring a defamation case,” Breyer said. “And that’s significant.”

Breyer directly noted that one reason X might have chosen not to bring a defamation suit is that the CCDH’s reporting was accurate, NPR reported.

CCDH’s CEO and founder, Imran Ahmed, provided a statement to Ars, confirming that the group is “very pleased with how yesterday’s argument went, including many of the questions and comments from the court.”

“We remain confident in the strength of our arguments for dismissal,” Ahmed said.

Elon Musk sues OpenAI and Sam Altman, accusing them of chasing profits

YA Musk lawsuit —

OpenAI is now a “closed-source de facto subsidiary” of Microsoft, says lawsuit.

Elon Musk has sued OpenAI and its chief executive Sam Altman for breach of contract, alleging they have compromised the start-up’s original mission of building artificial intelligence systems for the benefit of humanity.

In the lawsuit, filed in a San Francisco court on Thursday, Musk’s lawyers wrote that OpenAI’s multibillion-dollar alliance with Microsoft had broken an agreement to make a major breakthrough in AI “freely available to the public.”

Instead, the lawsuit said, OpenAI was working on “proprietary technology to maximise profits for literally the largest company in the world.”

The legal fight escalates a long-running dispute between Musk, who has founded his own AI company, known as xAI, and OpenAI, which has received a $13 billion investment from Microsoft.

Musk, who helped co-found OpenAI in 2015, said in his legal filing he had donated $44 million to the group, and had been “induced” to make contributions by promises, “including in writing,” that it would remain a non-profit organisation.

He left OpenAI’s board in 2018 following disagreements with Altman on the direction of research. A year later, the group established the for-profit arm that Microsoft has invested into.

Microsoft’s president Brad Smith told the Financial Times this week that while the companies were “very important partners,” “Microsoft does not control OpenAI.”

Musk’s lawsuit alleges that OpenAI’s latest AI model, GPT-4, released in March last year, met the threshold for artificial general intelligence (AGI), the point at which computers function at or above the level of human intelligence.

The Microsoft deal only gives the tech giant a licence to OpenAI’s pre-AGI technology, the lawsuit said, and determining when this threshold is reached is key to Musk’s case.

The lawsuit seeks a court judgment over whether GPT-4 should already be considered AGI, arguing that OpenAI’s board was “ill-equipped” to make such a determination.

The filing adds that OpenAI is also building another model, Q*, that will be even more powerful and capable than GPT-4. It argues that OpenAI is committed under the terms of its founding agreement to make such technology available publicly.

“Mr. Musk has long recognised that AGI poses a grave threat to humanity—perhaps the greatest existential threat we face today,” the lawsuit says.

“To this day, OpenAI, Inc.’s website continues to profess that its charter is to ensure that AGI ‘benefits all of humanity’,” it adds. “In reality, however, OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft.”

OpenAI maintains it has not yet achieved AGI, despite its models’ success in language and reasoning tasks. Large language models like GPT-4 still generate errors, fabrications, and so-called hallucinations.

The lawsuit also seeks to “compel” OpenAI to adhere to its founding agreement to build technology that does not simply benefit individuals such as Altman and corporations such as Microsoft.

Musk’s own xAI company is a direct competitor to OpenAI and launched its first product, a chatbot named Grok, in December.

OpenAI declined to comment. Representatives for Musk have been approached for comment. Microsoft did not immediately respond to a request for comment.

The Microsoft-OpenAI alliance is being reviewed by competition watchdogs in the US, EU and UK.

The US Securities and Exchange Commission issued subpoenas to OpenAI executives in November as part of an investigation into whether Altman had misled its investors, according to people familiar with the move.

That investigation came shortly after OpenAI’s board fired Altman as chief executive only to reinstate him days later. A new board has since been instituted including former Salesforce co-chief executive Bret Taylor as chair.

There is an ongoing internal review of the former board’s allegations against Altman by independent law firm WilmerHale.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Tesla must face racism class action from 6,000 Black workers, judge rules

Tesla factory in Fremont, California, on September 18, 2023. Credit: Getty Images | Justin Sullivan

Tesla must face a class-action lawsuit from nearly 6,000 Black people who allege that they faced discrimination and harassment while working at the company’s Fremont factory, a California judge ruled.

The tentative ruling from Alameda County Superior Court “certifies a class defined as the specific approximately 5,977 persons self-identified as Black/African-American who worked at Tesla during the class period from November 9, 2016, through the date of the entry of this order to prosecute the claims in the complaint.”

The tentative ruling was issued Tuesday by Judge Noël Wise. Tesla can contest the ruling at a hearing on Friday, but tentative rulings are generally finalized without major changes.

The case started years ago. An amended complaint in 2017 alleged that Tesla “created an intimidating, hostile, and offensive work environment for Black and/or African-American employees that includes a routine use of the terms ‘n——r’ and ‘n——a’ and other racially derogatory terms, and racist treatment and images at Tesla’s production facility in Fremont, California.”

The plaintiffs’ motion was not approved in its entirety. A request for class certification was denied for all people who are not on the list of class members.

However, plaintiffs will have five days to provide an updated list of class members. Anyone not on the list “may if they wish seek individual remedies through filing civil actions, through arbitration, or otherwise,” the ruling said.

Plaintiffs “heard the n-word” at factory

A class-action trial is scheduled to begin on October 14, 2024, the same day as a separate case against Tesla brought by the California Civil Rights Department (CRD).

As Wise’s ruling noted, “The CRD has filed and is pursuing a parallel law enforcement action that is alleging a pattern and practice of failing to prevent discrimination and harassment and seeking an injunction that would require Tesla to institute policies and procedures that will do a better job of preventing and redressing discrimination and harassment at Tesla. The EEOC [US Equal Employment Opportunity Commission] has filed a similar action.”

In the class action, plaintiffs submitted “declarations from 240 persons who stated that they observed discrimination or harassment at the Tesla Fremont facility and that some complained about it,” Wise wrote. “Of the 240 plaintiff declarations, all stated that they heard the n-word at the Tesla Fremont facility, 112 state that they complained to a supervisor, manager or HR about discrimination, but only 16 made written complaints.”

Tesla submitted declarations from 228 people “who generally stated that they did not observe discrimination or harassment at the Tesla Fremont facility or that if they observed it then Tesla took ‘immediate and appropriate corrective action,'” Wise wrote.

Tesla also said it “created a centralized internal tracking system to document complaints and investigations” in 2017 and will rely on this database “to demonstrate that Tesla was aware of complaints about race discrimination and harassment and how it responded to the complaints.”

CenturyLink left customers without Internet for 39 days—until Ars stepped in

Credit: Aurich Lawson | Getty Images

When a severe winter storm hit Oregon on January 13, Nicholas Brown’s CenturyLink fiber Internet service stopped working at his house in Portland.

The initial outage was understandable amid the widespread damage caused by the storm, but CenturyLink’s response was poor. It took about 39 days for CenturyLink to restore broadband service to Brown and even longer to restore service to one of his neighbors. Those reconnections only happened after Ars Technica contacted the telco firm on the customers’ behalf last week.

Brown had never experienced any lengthy outage in over four years of subscribing to CenturyLink, so he figured the telco firm would restore his broadband connection within a reasonable amount of time. “It had practically never gone down at all up to this point. I’ve been quite happy with it,” he said.

While CenturyLink sent trucks to his street to reconnect most of his neighbors after the storm and Brown regularly contacted CenturyLink to plead for a fix, his Internet connection remained offline. Brown had also lost power, but the electricity service was reconnected within about 48 hours, while the broadband service remained offline for well over a month.

Fearing he had exhausted his options, Brown contacted Ars. We sent an email to CenturyLink’s media department on February 21 to seek information on why the outage lasted so long.

Telco finally springs into action

Roughly four hours after we contacted the firm, a CenturyLink technician arrived at the Portland house Brown shares with his partner, Jolene Edwards. The technician was able to reconnect them that day.

“At 4:30 pm, a CenturyLink tech showed up unannounced,” Brown told us. “No one was home at the time, but he said he would wait. I get the idea that he was told not to come back until it was fixed.”

Brown’s neighbor, Leonard Bentz, also lost Internet access on January 13 and remained offline for two days longer than Brown. The technician who arrived on February 21 didn’t reconnect Bentz’s house.

“My partner gently tried to egg him to go over there and fix them too, and he more or less said, ‘That’s not the ticket that I have,'” Brown said.

After getting Bentz’s name and address, we contacted CenturyLink again on February 22 to notify them that he also needed to be reconnected. CenturyLink later confirmed to us that it restored his Internet service on February 23.

“They kept putting me off and putting me off”

Bentz told Ars that during the month-plus outage, he called CenturyLink several times. Customer service reps and a supervisor told him the company would send someone to fix his service, but “they kept putting me off and putting me off and putting me off,” Bentz said.

Bentz said that on one of those calls, CenturyLink promised him seven free months of service to make up for the long outage. Brown told us he received a refund for the entire length of his outage, plus a bit extra. He pays $65 a month for gigabit service.

Brown said he is “happy enough with the resolution,” at least financially since he “got all the money for the non-service.” But those 39 days without Internet service will remain a bad memory.

Unfortunately, Internet service providers like CenturyLink have a history of failing to fix problems until media coverage exposes their poor customer service. CenturyLink is officially called Lumen these days, but it still uses the CenturyLink brand name.

After fixing Brown’s service in Portland, a CenturyLink spokesperson gave us the following statement:

It’s frustrating to have your services down and for that we apologize. We’ve brought in additional resources to assist in restoring service that was knocked out due to severe storms and multiple cases of vandalism. Some services are back, and we are working diligently to completely restore everything. In fact, we have technicians there now. We appreciate our customers’ patience and understanding, and we welcome calls from our customers to discuss their service.

A big boost to Europe’s climate-change goals

carbon-neutral continent —

A new policy called CBAM will assist Europe’s ambition to become carbon-neutral.

Materials such as steel, cement, aluminum, electricity, fertilizer, hydrogen, and iron will soon be subject to greenhouse gas emissions fees when imported into Europe. Credit: Monty Rakusen/Getty

The year 2023 was a big one for climate news, from record heat to world leaders finally calling for a transition away from fossil fuels. In a lesser-known milestone, it was also the year the European Union soft-launched an ambitious new initiative that could supercharge its climate policies.

Wrapped in arcane language studded with many a “thereof,” “whereas” and “having regard to” is a policy that could not only help fund the European Union’s pledge to become the world’s first carbon-neutral continent, but also push industries all over the world to cut their carbon emissions.

It’s the establishment of a carbon price that will force many heavy industries to pay for each ton of carbon dioxide, or equivalent emissions of other greenhouse gases, that they emit. But what makes this fee revolutionary is that it will apply to emissions that don’t happen on European soil. The EU already puts a price on many of the emissions created by European firms; now, through the new Carbon Border Adjustment Mechanism, or CBAM, the bloc will charge companies that import the targeted products—cement, aluminum, electricity, fertilizer, hydrogen, iron, and steel—into the EU, no matter where in the world those products are made.

These industries are often large and stubborn sources of greenhouse gas emissions, and addressing them is key in the fight against climate change, says Aaron Cosbey, an economist at the International Institute for Sustainable Development, an environmental think tank. If those companies want to continue doing business with European firms, they’ll have to clean up or pay a fee. That creates an incentive for companies worldwide to reduce emissions.

In CBAM’s first phase, which started in October 2023, companies importing those materials into the EU must report on the greenhouse gas emissions involved in making the products. Beginning in 2026, they’ll have to pay a tariff.
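
Whatever the regulation’s detailed methodology, the arithmetic at the heart of such a border fee is straightforward: embedded emissions times a carbon price, less any carbon price already paid where the product was made. Here is a rough sketch in Python; the function, the netting rule, and the example figures are illustrative assumptions, not CBAM’s actual formula.

```python
# Back-of-the-envelope sketch of a CBAM-style border fee. The shape of
# the calculation (embedded emissions x carbon price, with credit for a
# carbon price already paid abroad) follows the mechanism's broad
# outline; the numbers and netting details are assumptions.

def border_fee(tonnes_imported: float,
               emissions_per_tonne: float,   # tCO2e embedded per tonne of product
               eu_carbon_price: float,       # EUR per tCO2e, tracking the EU ETS
               foreign_carbon_price: float = 0.0) -> float:
    """Fee in EUR for one shipment, crediting any carbon price paid abroad."""
    effective_price = max(eu_carbon_price - foreign_carbon_price, 0.0)
    return tonnes_imported * emissions_per_tonne * effective_price

# Example: 1,000 tonnes of steel at roughly 1.9 tCO2e per tonne, with an
# EU allowance price of EUR 80 and no carbon price paid at origin.
print(f"EUR {border_fee(1000, 1.9, 80.0):,.0f}")  # EUR 152,000
```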

Even having to supply emissions data will be a big step for some producers and could provide valuable data for climate researchers and policymakers, says Cosbey.

“I don’t know how many times I’ve gone through this exercise of trying to identify, at a product level, the greenhouse gas intensity of exports from particular countries and had to go through the most amazing, torturous processes to try to do those estimates,” he says. “And now it’s going to be served to me on a plate.”

CBAM will apply to a set of products that are linked to heavy greenhouse gas emissions.

Side benefits at home

While this new carbon price targets companies abroad, it will also help the EU to pursue its climate ambitions at home. For one thing, the extra revenues could go toward financing climate-friendly projects and promising new technologies.

But it also allows the EU to tighten up on domestic pollution. Since 2005, the EU has set a maximum, or cap, on the emissions created by a range of industrial “installations” such as oil and metal refineries. It makes companies within the bloc use credits, or allowances, for each ton of carbon dioxide—or equivalent discharges of other greenhouse gases—that they emit, up to that cap. Some allowances are currently granted for free, but others are bought at auction or traded with other companies in a system known as a carbon market.
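
In accounting terms the system is simple even if the market is not: each installation must surrender one allowance per ton emitted, and anything beyond its free allocation must be bought. A toy sketch of that compliance math, with invented figures and a flat market price standing in for the real auction and trading dynamics:

```python
# Sketch of EU ETS-style allowance accounting for one installation over
# a year. The figures and the flat market price are illustrative
# assumptions, not real compliance data.

def compliance_cost(emissions_t: float, free_allowances_t: float,
                    allowance_price_eur: float) -> float:
    """EUR the installation must spend buying allowances for its shortfall."""
    shortfall = max(emissions_t - free_allowances_t, 0.0)
    return shortfall * allowance_price_eur

# A refinery emitting 1.2 Mt CO2e with 0.9 Mt of free allowances, at EUR 80:
print(f"EUR {compliance_cost(1_200_000, 900_000, 80.0):,.0f}")  # EUR 24,000,000
```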

But this idea—of making it expensive to harm the planet—creates a conundrum. If doing business in Europe becomes too expensive, European industry could flee the continent for countries that don’t have such high fees or strict regulations. That would damage the European economy and do nothing to solve the environmental crisis. The greenhouse gases would still be emitted—perhaps more than if the products had been made in Europe—and climate change would careen forward on its destructive path.

The Carbon Border Adjustment Mechanism aims to impose the same carbon price for products made abroad as domestic producers must pay under the EU’s system. In theory, that keeps European businesses competitive with imports from international rivals. It also addresses environmental concerns by nudging companies overseas toward reducing greenhouse gas emissions rather than carrying on as usual.

This means the EU can further tighten up its carbon market system at home. With international competition hopefully less of a concern, it plans to phase out some leniencies, such as some of the free emission allowances, that existed to help keep domestic industries competitive.

That’s a big deal, says Cosbey. Dozens of countries have carbon pricing systems, but they all create exceptions to keep heavy industry from getting obliterated by international competition. The carbon border tariff could allow the EU to truly force its industries—and consumers—to pay the price, he says.

“That is ambitious; nobody in the world is doing that.”

OpenAI accuses NYT of hacking ChatGPT to set up copyright suit

OpenAI is now boldly claiming that The New York Times “paid someone to hack OpenAI’s products” like ChatGPT to “set up” a lawsuit against the leading AI maker.

In a court filing Monday, OpenAI alleged that “100 examples in which some version of OpenAI’s GPT-4 model supposedly generated several paragraphs of Times content as outputs in response to user prompts” do not reflect how normal people use ChatGPT.

Instead, it allegedly took The Times “tens of thousands of attempts to generate” these supposedly “highly anomalous results” by “targeting and exploiting a bug” that OpenAI claims it is now “committed to addressing.”

According to OpenAI, this activity amounts to “contrived attacks” by a “hired gun”—who allegedly hacked OpenAI models until they hallucinated fake NYT content or regurgitated training data to replicate NYT articles. NYT allegedly paid for these “attacks” to gather evidence to support The Times’ claims that OpenAI’s products imperil its journalism by regurgitating reporting and stealing The Times’ audiences.
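
Setting aside who prompted what, the underlying factual question, whether outputs reproduce Times text verbatim, is measurable. One generic way to quantify it (our illustration, not a method used by either party) is to count how many long word n-grams of an output appear verbatim in the source article:

```python
# Generic n-gram overlap check for verbatim reproduction. This is an
# illustrative technique, not the measurement used by either party.

def ngrams(text: str, n: int = 8) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(output: str, source: str, n: int = 8) -> float:
    """Fraction of the output's n-grams appearing verbatim in the source."""
    out = ngrams(output, n)
    return len(out & ngrams(source, n)) / len(out) if out else 0.0

# Long matches (8+ words) are hard to explain by chance; scores near 1.0
# across many outputs look like regurgitation rather than paraphrase.
score = verbatim_overlap("the quick brown fox jumps over the lazy dog today",
                         "so the quick brown fox jumps over the lazy dog")
print(f"{score:.2f}")  # 0.67
```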

“Contrary to the allegations in the complaint, however, ChatGPT is not in any way a substitute for a subscription to The New York Times,” OpenAI argued in a motion that seeks to dismiss the majority of The Times’ claims. “In the real world, people do not use ChatGPT or any other OpenAI product for that purpose. Nor could they. In the ordinary course, one cannot use ChatGPT to serve up Times articles at will.”

In the filing, OpenAI described The Times as enthusiastically reporting on its chatbot developments for years without raising any concerns about copyright infringement. OpenAI claimed that it disclosed that The Times’ articles were used to train its AI models in 2020, but The Times only cared after ChatGPT’s popularity exploded after its debut in 2022.

According to OpenAI, “It was only after this rapid adoption, along with reports of the value unlocked by these new technologies, that the Times claimed that OpenAI had ‘infringed its copyright[s]’ and reached out to demand ‘commercial terms.’ After months of discussions, the Times filed suit two days after Christmas, demanding ‘billions of dollars.'”

Ian Crosby, Susman Godfrey partner and lead counsel for The New York Times, told Ars that “what OpenAI bizarrely mischaracterizes as ‘hacking’ is simply using OpenAI’s products to look for evidence that they stole and reproduced The Times’s copyrighted works. And that is exactly what we found. In fact, the scale of OpenAI’s copying is much larger than the 100-plus examples set forth in the complaint.”

Crosby told Ars that OpenAI’s filing notably “doesn’t dispute—nor can they—that they copied millions of The Times’ works to build and power its commercial products without our permission.”

“Building new products is no excuse for violating copyright law, and that’s exactly what OpenAI has done on an unprecedented scale,” Crosby said.

OpenAI argued that the court should dismiss claims alleging direct copyright infringement, contributory infringement, Digital Millennium Copyright Act violations, and misappropriation, all of which it describes as “legally infirm.” Some fail because they are time-barred—seeking damages on training data for OpenAI’s older models—OpenAI claimed. Others allegedly fail because they misunderstand fair use or are preempted by federal laws.

If OpenAI’s motion is granted, the case would be substantially narrowed.

But if the motion is not granted and The Times ultimately wins—and it might—OpenAI may be forced to wipe ChatGPT and start over.

“OpenAI, which has been secretive and has deliberately concealed how its products operate, is now asserting it’s too late to bring a claim for infringement or hold them accountable. We disagree,” Crosby told Ars. “It’s noteworthy that OpenAI doesn’t dispute that it copied Times works without permission within the statute of limitations to train its more recent and current models.”

OpenAI did not immediately respond to Ars’ request to comment.

Kagan: Florida social media law seems like “classic First Amendment violation”

The Supreme Court of the United States in Washington, DC, in May 2023. Credit: Getty Images | NurPhoto

The US Supreme Court today heard oral arguments on Florida and Texas state laws that impose limits on how social media companies can moderate user-generated content.

The Florida law prohibits large social media sites like Facebook and Twitter (aka X) from banning politicians and says they must “apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform.” The Texas statute prohibits large social media companies from moderating posts based on a user’s “viewpoint.” The laws were supported by Republican officials from 20 other states.

The tech industry says both laws violate the companies’ First Amendment right to use editorial discretion in deciding what kinds of user-generated content to allow on their platforms and how to present that content. The Supreme Court will decide whether the laws can be enforced while the industry lawsuits against Florida and Texas continue in lower courts.

How the Supreme Court rules at this stage in these two cases could give one side or the other a big advantage in the ongoing litigation. Paul Clement, a lawyer for Big Tech trade group NetChoice, today urged justices to reject the idea that content moderation conducted by private companies is censorship.

“I really do think that censorship is only something that the government can do to you,” Clement said. “And if it’s not the government, you really shouldn’t label it ‘censorship.’ It’s just a category mistake.”

Companies use editorial discretion to make websites useful for users and advertisers, he said, arguing that content moderation is an expressive activity protected by the First Amendment.

Justice Kagan talks anti-vaxxers, insurrectionists

Henry Whitaker, Florida’s solicitor general, said that social media platforms marketed themselves as neutral forums for free speech but now claim to be “editors of their users’ speech, rather like a newspaper.”

“They contend that they possess a broad First Amendment right to censor anything they host on their sites, even when doing so contradicts their own representations to consumers,” he said. Social media platforms should not be allowed to censor speech any more than phone companies are allowed to, he argued.

Contending that social networks don’t really act as editors, he said that “it is a strange kind of editor that does not actually look at the material” before it is posted. He also said that “upwards of 99 percent of what goes on the platforms is basically passed through without review.”

Justice Elena Kagan replied, “But that 1 percent seems to have gotten some people extremely angry.” Describing the platforms’ moderation practices, she said the 1 percent of content that is moderated is “like, ‘we don’t want anti-vaxxers on our site or we don’t want insurrectionists on our site.’ I mean, that’s what motivated these laws, isn’t it? And that’s what’s getting people upset about them is that other people have different views about what it means to provide misinformation as to voting and things like that.”

Later, Kagan said, “I’m taking as a given that YouTube or Facebook or whatever has expressive views. There are particular kinds of expression defined by content that they don’t want anywhere near their site.”

Pointing to moderation of hate speech, bullying, and misinformation about voting and public health, Kagan asked, “Why isn’t that a classic First Amendment violation for the state to come in and say, ‘we’re not going to allow you to enforce those sorts of restrictions?'”

Whitaker urged Kagan to “look at the objective activity being regulated, namely censoring and deplatforming, and ask whether that expresses a message. Because they [the social networks] host so much content, an objective observer is not going to readily attribute any particular piece of content that appears on their site to some decision to either refrain from or to censor or deplatform.”

Thomas: Who speaks when an algorithm moderates?

Justice Clarence Thomas expressed doubts about whether content moderation conveys an editorial message. “Tell me again what the expressive conduct is that, for example, YouTube engages in when it or Twitter deplatforms someone. What is the expressive conduct and to whom is it being communicated?” Thomas asked.

Clement said the platforms “are sending a message to that person and to their broader audience that that material” isn’t allowed. As a result, users are “not going to see material that violates the terms of use. They’re not going to see a bunch of material that glorifies terrorism. They’re not going to see a bunch of material that glorifies suicide,” Clement said.

Thomas asked who is doing the “speaking” when an algorithm performs content moderation, particularly when “it’s a deep-learning algorithm which teaches itself and has very little human intervention.”

“So who’s speaking then, the algorithm or the person?” Thomas asked.

Clement said that Facebook and YouTube are “speaking, because they’re the ones that are using these devices to run their editorial discretion across these massive volumes.” The need to use algorithms to automate moderation demonstrates “the volume of material on these sites, which just shows you the volume of editorial discretion,” he said.

How your sensitive data can be sold after a data broker goes bankrupt

playing fast and loose —

Sensitive location data could be sold off to the highest bidder.


In 2021, a company specializing in collecting and selling location data called Near bragged that it had “The World’s Largest Dataset of People’s Behavior in the Real-World,” with data representing “1.6B people across 44 countries.” Last year the company went public with a valuation of $1 billion (via a SPAC). Seven months later it filed for bankruptcy and agreed to sell the company.

But for the “1.6B people” that Near said its data represents, the important question is: What happens to Near’s mountain of location data? Any company could gain access to it through purchasing the company’s assets.

The prospect of this data, including Near’s collection of location data from sensitive locations such as abortion clinics, being sold off in bankruptcy has raised alarms in Congress. Last week, Sen. Ron Wyden (D-Ore.) wrote to the Federal Trade Commission (FTC) urging the agency to “protect consumers and investors from the outrageous conduct” of Near, citing his office’s investigation into the India-based company.

Wyden’s letter also urged the FTC “to intervene in Near’s bankruptcy proceedings to ensure that all location and device data held by Near about Americans is promptly destroyed and is not sold off, including to another data broker.” The FTC took such an action in 2010 to block the use of 11 years’ worth of subscriber personal data during the bankruptcy proceedings of XY Magazine, which was oriented to young gay men. The agency requested that the data be destroyed to prevent its misuse.

Wyden’s investigation was spurred by a May 2023 Wall Street Journal report that Near had licensed location data to the anti-abortion group Veritas Society so it could target ads to visitors of Planned Parenthood clinics and attempt to dissuade women from seeking abortions. Wyden’s investigation revealed that the group’s geofencing campaign focused on 600 Planned Parenthood clinics in 48 states. The Journal also revealed that Near had been selling its location data to the Department of Defense and intelligence agencies.

As of publication, Near has not responded to requests for comment.

According to Near’s privacy policy, all of the data it has collected can be transferred to new owners: under the heading “Who do you share my personal data with?” it lists “Prospective buyers of our business.”

This type of clause is common in privacy policies, and is a regular part of businesses being bought and sold. Where it gets complicated is when the company being sold owns data containing sensitive information.

This week, a new bankruptcy court filing showed that Wyden’s requests were granted. The order placed restrictions on the use, sale, licensing, or transfer of location data collected from sensitive locations in the US. It requires any company that purchases the data to establish a “sensitive location data program” with detailed policies for such data and to ensure ongoing monitoring and compliance, including by creating a list of sensitive locations such as reproductive health care facilities, doctor’s offices, houses of worship, mental health care providers, corrections facilities, and shelters. Unless consumers have explicitly provided consent, the order demands that the company cease any collection, use, or transfer of location data.
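
What maintaining “a list of sensitive locations” implies in practice is a geofence check: suppress any data point that falls within a buffer around a listed site. A minimal sketch of that filtering step follows; the coordinates, radius, and record format are invented for illustration, since the order prescribes policies rather than an implementation.

```python
# Minimal geofence filter: drop location pings near sensitive sites.
# Site list, radius, and record format are illustrative assumptions; the
# FTC order mandates policies, not this particular implementation.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

SENSITIVE_SITES = [(40.7411, -73.9897)]  # hypothetical clinic coordinates
RADIUS_M = 100.0                         # assumed buffer distance

def near_sensitive_site(lat: float, lon: float) -> bool:
    return any(haversine_m(lat, lon, s_lat, s_lon) <= RADIUS_M
               for s_lat, s_lon in SENSITIVE_SITES)

pings = [(40.7412, -73.9899), (40.7580, -73.9855)]
kept = [p for p in pings if not near_sensitive_site(*p)]
print(kept)  # only the second ping, roughly 2 km away, survives the filter
```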

In a statement emailed to The Markup, Wyden wrote, “I commend the FTC for stepping in—at my request—to ensure that this data broker’s stockpile of Americans’ sensitive location data isn’t abused, again.”

Wyden called for protecting sensitive location data from data brokers, citing the new legal threats to women since the Supreme Court’s June 2022 decision to overturn the abortion-rights ruling Roe v. Wade. Wyden wrote, “The threat posed by the sale of location data is clear, particularly to women who are seeking reproductive care.”

The bankruptcy order also provided a rare glimpse into how data brokers license data to one another. Near’s list of contracts included agreements with several location brokers, ad platforms, universities, retailers, and city governments.

It is not clear from the filing if the agreements covered Near data being licensed, Near licensing the data from the companies, or both.

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

AT&T’s botched network update caused yesterday’s major wireless outage

AT&T outage cause —

AT&T blamed itself for “incorrect process used as we were expanding our network.”

Cellular towers in Redondo Beach, California, on February 22, 2024. Credit: Getty Images | Eric Thayer

AT&T said a botched update related to a network expansion caused the wireless outage that disrupted service for many mobile customers yesterday.

“Based on our initial review, we believe that today’s outage was caused by the application and execution of an incorrect process used as we were expanding our network, not a cyber attack,” AT&T said on its website last night. “We are continuing our assessment of today’s outage to ensure we keep delivering the service that our customers deserve.”

While “incorrect process” is a bit vague, an ABC News report that cited anonymous sources said it was a software update that went wrong. AT&T hasn’t said exactly how many cellular customers were affected, but there were over 70,000 problem reports on the DownDetector website yesterday morning.

The outage began early in the morning, and AT&T said at 11:15 am ET yesterday that “three-quarters of our network has been restored.” By 3:10 pm ET, AT&T said it had “restored wireless service to all our affected customers.”

We asked AT&T for more information on the extent of the outage and its cause today, but a spokesperson said the company had no further comment.

FCC investigates

The outage was big enough that the Federal Communications Commission said its Public Safety and Homeland Security Bureau was actively investigating. The FCC also said it was in touch with FirstNet, the nationwide public safety network that was built by AT&T. Some FirstNet users reported frustrations related to the outage.

The San Francisco Fire Department said it was monitoring the outage because it appeared to be preventing “AT&T wireless customers from making and receiving any phone calls (including to 911).” The FCC sometimes issues fines to telcos over 911 outages.

The US Cybersecurity and Infrastructure Security Agency reportedly said it was looking into the outage, and a White House spokesperson said the FBI was checking on it, too. But it was determined pretty quickly that the outage wasn’t caused by cyber-attackers.

Yelp: It’s gotten worse since Google made changes to comply with EU rules

Illustration of the Google and Yelp logos. Credit: Anjali Nair; Getty Images

To comply with looming rules that ban tech giants from favoring their own services, Google has been testing new-look search results for flights, trains, hotels, restaurants, and products in Europe. The EU’s Digital Markets Act is supposed to help smaller companies get more traffic from Google, but reviews service Yelp says that when it tested Google’s design tweaks with consumers, they had the opposite effect—making people less likely to click through to Yelp or another Google competitor.

The results, which Yelp shared with European regulators in December and WIRED this month, put some numerical backing behind complaints from Google rivals in travel, shopping, and hospitality that its efforts to comply with the DMA are insufficient—and potentially more harmful than the status quo. Yelp and thousands of others have been demanding that the EU hold a firm line against the giant companies including Apple and Amazon that are subject to what’s widely considered the world’s strictest antitrust law, violations of which can draw fines of up to 10 percent of global annual sales.

“All the gatekeepers are trying to hold on as long as possible to the status quo and make the new world unattractive,” says Richard Stables, CEO of shopping comparison site Kelkoo, which is unhappy with how Google has tweaked shopping results to comply with the DMA. “That’s really the game plan.”

Google spokesperson Rory O’Donoghue says the more than 20 changes made to search in response to the DMA are providing more opportunities for services such as Yelp to show up in results. “To suggest otherwise is plain wrong,” he says. Overall, Google’s tests of various DMA-inspired designs show clicks to review and comparison websites are up, O’Donoghue says—at the cost of users losing shortcuts to Google tools and individual businesses like airlines and restaurants facing a drop in visits from Google search. “We’ve been seeking feedback from a range of stakeholders over many months as we try to balance the needs of different types of websites while complying with the law,” he says.

Google, which generates 30 percent of its sales from Europe, the Middle East, and Africa, views the DMA as disrespecting its expertise in what users want. Critics such as Yelp argue that Google sometimes siphons users away from the more reliable content they offer. Yelp competes with Google for advertisers but generated less than 1 percent of its record sales of $1.3 billion last year from outside the US. An increase in European traffic could significantly boost its business.

To study search changes, Yelp worked with user-research company Lyssna to watch how hundreds of consumers from around the world interacted with Google’s new EU search results page when asked to find a dinner spot in Paris. For searches like that or for other “local” businesses, as Google calls them, one new design features results from Google Maps data at the top of the page below the search bar but adds a new box widget lower down containing images from and links to reviews websites like Yelp.

The experiments found that about 73 percent of about 500 people using that new design clicked results that kept them inside Google’s ecosystem—an increase over the 55 percent who did so when the design Google is phasing out in Europe was tested with a smaller pool of roughly 250 people.

Yelp also tested a variation of the new design. In this version, which Google has shared with regulators, the new box featuring review websites is placed above the maps widget. It was more successful in drawing people to try alternatives to Google, with only about 44 percent of consumers in the experiment sticking with the search giant. Though the box and widget will be treated equally by Google’s search algorithms, the order the features appear in will vary based on those calculations. Yelp’s concern is that Google will win out too often.
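
Taking the article’s figures at face value, the two designs can be compared with a standard two-proportion z-test. This is our back-of-the-envelope check, not Yelp’s published methodology, and the counts below are rounded from the reported percentages and sample sizes:

```python
# Two-proportion z-test on the reported click figures. The counts are
# rounded from the article's approximate percentages and sample sizes;
# the test itself is our illustration, not Yelp's stated methodology.
from math import sqrt, erf

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# ~73% of ~500 users stayed inside Google's ecosystem with the new
# design, vs. ~55% of ~250 with the outgoing one.
z, p = two_proportion_z(365, 500, 138, 250)
print(f"z = {z:.1f}, p = {p:.1g}")  # z is about 4.9: far larger than sampling noise
```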

Yelp proposed to EU regulators that, to produce fairer outcomes, Google should instead amend the map widget on results pages to include business listings and ratings from numerous providers, placing data from Google’s directory right alongside Yelp and others.

Companies such as Yelp that are critical of the changes in testing have called on the European Commission to immediately open an investigation into Google on March 7, when enforcement of the DMA begins.

“Yelp urges regulators to compel Google to fully comply with both the letter and spirit of the DMA,” says Yelp’s vice president of public policy, David Segal. “Google will soon be in violation of both, because if you look at what Google has put forth, it’s pretty clear that its services still have the best real estate.”

Vending machine error reveals secret face image database of college students

“Stupid M&M machines” —

Facial-recognition data is typically used to prompt more vending machine sales.

Credit: Aurich Lawson | Mars | Getty Images

Canada-based University of Waterloo is racing to remove M&M-branded smart vending machines from campus after outraged students discovered the machines were covertly collecting facial-recognition data without their consent.

The scandal started when a student using the alias SquidKid47 posted an image on Reddit showing a campus vending machine error message, “Invenda.Vending.FacialRecognitionApp.exe,” displayed after the machine failed to launch a facial recognition application that nobody expected to be part of the process of using a vending machine.

Reddit post shows error message displayed on a University of Waterloo vending machine (cropped and lightly edited for clarity).

“Hey, so why do the stupid M&M machines have facial recognition?” SquidKid47 pondered.

The Reddit post sparked an investigation from a fourth-year student named River Stanley, who was writing for a university publication called MathNEWS.

Stanley sounded the alarm after consulting Invenda sales brochures that promised “the machines are capable of sending estimated ages and genders” of every person who used the machines, without ever requesting consent.

This frustrated Stanley, who discovered that Canada’s privacy commissioner had years ago investigated a shopping mall operator called Cadillac Fairview after discovering some of the malls’ informational kiosks were secretly “using facial recognition software on unsuspecting patrons.”

Only because of that official investigation did Canadians learn that “over 5 million nonconsenting Canadians” were scanned into Cadillac Fairview’s database, Stanley reported. While Cadillac Fairview was ultimately forced to delete the entire database, Stanley wrote that the consequences for Invenda clients like Mars, which collect similarly sensitive facial recognition data without consent, remain unclear.

Stanley’s report ended with a call for students to demand that the university “bar facial recognition vending machines from campus.”

A University of Waterloo spokesperson, Rebecca Elming, eventually responded, confirming to CTV News that the school had asked for the vending machine software to be disabled until the machines could be removed.

Students told CTV News that their confidence in the university’s administration was shaken by the controversy. Some students claimed on Reddit that they attempted to cover the vending machine cameras while waiting for the school to respond, using gum or Post-it notes. One student pondered whether “there are other places this technology could be being used” on campus.

Elming was not able to confirm the exact timeline for when the machines would be removed, other than telling Ars it would happen “as soon as possible.” She told Ars she is “not aware of any similar technology in use on campus.” And for any casual snackers on campus wondering when, if ever, the machines will be replaced with snack dispensers not equipped with surveillance cameras, Elming confirmed that “the plan is to replace them.”

Invenda claims machines are GDPR-compliant

MathNEWS’ investigation tracked down responses from companies responsible for smart vending machines on the University of Waterloo’s campus.

Adaria Vending Services told MathNEWS that “what’s most important to understand is that the machines do not take or store any photos or images, and an individual person cannot be identified using the technology in the machines. The technology acts as a motion sensor that detects faces, so the machine knows when to activate the purchasing interface—never taking or storing images of customers.”
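
What Adaria describes, detecting that a face is present without identifying anyone or storing frames, is face detection rather than face recognition. A minimal sketch of that presence-only pattern using OpenCV follows; it illustrates the distinction students were asking about, not Invenda’s actual software:

```python
# Face *detection* used as a presence sensor: the function reports only
# whether a face is in frame; it never identifies anyone and never
# writes an image to disk. Illustrates the detection/recognition
# distinction; this is not Invenda's actual code.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_present(frame) -> bool:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0  # only a boolean leaves this function

cam = cv2.VideoCapture(0)  # default camera
ok, frame = cam.read()
if ok and face_present(frame):
    print("wake purchasing interface")  # no image stored or transmitted
cam.release()
```

Estimating “ages and genders,” as the Invenda brochures Stanley cited advertised, would require inference well beyond this presence check, which is where the consent questions below come in.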

According to Adaria and Invenda, students shouldn’t worry about data privacy because the vending machines are “fully compliant” with the world’s toughest data privacy law, the European Union’s General Data Protection Regulation (GDPR).

“These machines are fully GDPR compliant and are in use in many facilities across North America,” Adaria’s statement said. “At the University of Waterloo, Adaria manages last mile fulfillment services—we handle restocking and logistics for the snack vending machines. Adaria does not collect any data about its users and does not have any access to identify users of these M&M vending machines.”

Under the GDPR, face image data is considered among the most sensitive data that can be collected, typically requiring explicit consent to collect, so it’s unclear how the machines may meet that high bar based on the Canadian students’ experiences.

According to a press release from Invenda, Mars, the maker of M&M candies, was a key part of Invenda’s expansion into North America. It was only after closing a $7 million funding round, including deals with Mars and other major clients like Coca-Cola, that Invenda could push for the expansive global growth that seemingly multiplies its smart vending machines’ data collection and surveillance opportunities.

“The funding round indicates confidence among Invenda’s core investors in both Invenda’s corporate culture, with its commitment to transparency, and the drive to expand global growth,” Invenda’s press release said.

But University of Waterloo students like Stanley now question Invenda’s “commitment to transparency” in North American markets, especially since the company is seemingly openly violating Canadian privacy law, Stanley told CTV News.

On Reddit, while some students joked that SquidKid47’s face “crashed” the machine, others asked if “any pre-law students wanna start up a class-action lawsuit?” One commenter summed up students’ frustration by typing in all caps, “I HATE THESE MACHINES! I HATE THESE MACHINES! I HATE THESE MACHINES!”

Vending machine error reveals secret face image database of college students Read More »

avast-ordered-to-stop-selling-browsing-data-from-its-browsing-privacy-apps

Avast ordered to stop selling browsing data from its browsing privacy apps

Security, privacy, things of that nature —

Identifiable data included job searches, map directions, “cosplay erotica.”

Avast logo on a phone. Credit: Getty Images

Avast, a name known for its security research and antivirus apps, has long offered Chrome extensions, mobile apps, and other tools aimed at increasing privacy.

Avast’s apps would “block annoying tracking cookies that collect data on your browsing activities,” and prevent web services from “tracking your online activity.” Deep in its privacy policy, Avast said information that it collected would be “anonymous and aggregate.” In its fiercest rhetoric, Avast’s desktop software claimed it would stop “hackers making money off your searches.”

All of that language was offered up while Avast was collecting users’ browser information from 2014 to 2020, then selling it to more than 100 other companies through a since-shuttered entity known as Jumpshot, according to the Federal Trade Commission. Under a recent proposed FTC order (PDF), Avast must pay $16.5 million, which is “expected to be used to provide redress to consumers,” according to the FTC. Avast will also be prohibited from selling future browsing data, must obtain express consent for future data gathering, notify customers about prior data sales, and implement a “comprehensive privacy program” to address prior conduct.

Reached for comment, Avast provided a statement that noted the company’s closure of Jumpshot in early 2020. “We are committed to our mission of protecting and empowering people’s digital lives. While we disagree with the FTC’s allegations and characterization of the facts, we are pleased to resolve this matter and look forward to continuing to serve our millions of customers around the world,” the statement reads.

Data was far from anonymous

The FTC’s complaint (PDF) notes that after Avast acquired then-antivirus competitor Jumpshot in early 2014, it rebranded the company as an analytics seller. Jumpshot advertised that it offered “unique insights” into the habits of “[m]ore than 100 million online consumers worldwide.” That included the ability to “[s]ee where your audience is going before and after they visit your site or your competitors’ sites, and even track those who visit a specific URL.”

While Avast and Jumpshot claimed that the data had identifying information removed, the FTC argues this was “not sufficient.” Jumpshot offerings included a unique device identifier for each browser, included in data like an “All Clicks Feed,” “Search Plus Click Feed,” “Transaction Feed,” and more. The FTC’s complaint detailed how various companies would purchase these feeds, often with the express purpose of pairing them with a company’s own data, down to an individual user basis. Some Jumpshot contracts attempted to prohibit re-identifying Avast users, but “those prohibitions were limited,” the complaint notes.
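
The mechanics of the re-identification risk the FTC describes are mundane: a persistent device identifier is a join key, so any buyer holding its own logs tied to real identities can simply line the feeds up. A toy sketch with pandas, using invented column names and records:

```python
# Toy illustration of re-identification via a persistent device ID.
# Column names and records are invented; the point is that a stable
# identifier lets a buyer join an "anonymous" clickstream to its own
# customer records.
import pandas as pd

clickstream = pd.DataFrame({           # Jumpshot-style "All Clicks Feed"
    "device_id": ["abc123", "abc123", "xyz789"],
    "url": ["clinic-finder.example/search",
            "jobs.example/fort-meade",
            "news.example/story"],
    "timestamp": ["2019-03-01T10:02", "2019-03-01T10:17", "2019-03-02T09:00"],
})

buyer_crm = pd.DataFrame({             # the buyer's own first-party data
    "device_id": ["abc123"],
    "customer_name": ["Jane Doe"],
})

# One merge and the "anonymous" browsing history has a name attached.
print(clickstream.merge(buyer_crm, on="device_id"))
```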

The connection between Avast and Jumpshot became broadly known in January 2020, after reporting by Vice and PC Magazine revealed that clients, including Home Depot, Google, Microsoft, Pepsi, and McKinsey, were buying data from Jumpshot, as seen in confidential contracts. Data obtained by the publications showed that buyers could purchase data including Google Maps look-ups, individual LinkedIn and YouTube pages, porn sites, and more. “It’s very granular, and it’s great data for these companies, because it’s down to the device level with a timestamp,” one source told Vice.

The FTC’s complaint provides more detail on how Avast, on its own web forums, sought to downplay its Jumpshot presence. Avast suggested both that the data provided to Jumpshot was anonymized and aggregated and that users were informed during product installation about collecting data to “better understand new and interesting trends.” Neither of these claims proved true, the FTC suggests. And the data collected was far from harmless, given its re-identifiable nature:

For example, a sample of just 100 entries out of trillions retained by Respondents showed visits by consumers to the following pages: an academic paper on a study of symptoms of breast cancer; Sen. Elizabeth Warren’s presidential candidacy announcement; a CLE course on tax exemptions; government jobs in Fort Meade, Maryland with a salary greater than $100,000; a link (then broken) to the mid-point of a FAFSA (financial aid) application; directions on Google Maps from one location to another; a Spanish-language children’s YouTube video; a link to a French dating website, including a unique member ID; and cosplay erotica.

In a blog post accompanying its announcement, FTC Senior Attorney Lesley Fair writes that, in addition to the dual nature of Avast’s privacy products and Jumpshot’s extensive tracking, the FTC is increasingly viewing browsing data as “highly sensitive information that demands the utmost care.” “Data about the websites a person visits isn’t just another corporate asset open to unfettered commercial exploitation,” Fair writes.

FTC commissioners voted 3-0 to issue the complaint and accept the proposed consent agreement. Chair Lina Khan, along with commissioners Rebecca Slaughter and Alvaro Bedoya, issued a statement on their vote.

Since the time of the FTC’s complaint and its Jumpshot business, Avast has been acquired by Gen Digital, a firm whose brands include Norton, Avast, LifeLock, Avira, AVG, CCleaner, and ReputationDefender, among other security businesses.

Disclosure: Condé Nast, Ars Technica’s parent company, received data from Jumpshot before its closure.
