Policy

Kagan: Florida social media law seems like “classic First Amendment violation”

The Supreme Court of the United States in Washington, DC, in May 2023.

Getty Images | NurPhoto

The US Supreme Court today heard oral arguments on Florida and Texas state laws that impose limits on how social media companies can moderate user-generated content.

The Florida law prohibits large social media sites like Facebook and Twitter (aka X) from banning politicians and says they must “apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform.” The Texas statute prohibits large social media companies from moderating posts based on a user’s “viewpoint.” The laws were supported by Republican officials from 20 other states.

The tech industry says both laws violate the companies’ First Amendment right to use editorial discretion in deciding what kinds of user-generated content to allow on their platforms and how to present that content. The Supreme Court will decide whether the laws can be enforced while the industry lawsuits against Florida and Texas continue in lower courts.

How the Supreme Court rules at this stage in these two cases could give one side or the other a big advantage in the ongoing litigation. Paul Clement, a lawyer for Big Tech trade group NetChoice, today urged justices to reject the idea that content moderation conducted by private companies is censorship.

“I really do think that censorship is only something that the government can do to you,” Clement said. “And if it’s not the government, you really shouldn’t label it ‘censorship.’ It’s just a category mistake.”

Companies use editorial discretion to make websites useful for users and advertisers, he said, arguing that content moderation is an expressive activity protected by the First Amendment.

Justice Kagan talks anti-vaxxers, insurrectionists

Henry Whitaker, Florida’s solicitor general, said that social media platforms marketed themselves as neutral forums for free speech but now claim to be “editors of their users’ speech, rather like a newspaper.”

“They contend that they possess a broad First Amendment right to censor anything they host on their sites, even when doing so contradicts their own representations to consumers,” he said. Social media platforms should not be allowed to censor speech any more than phone companies are allowed to, he argued.

Contending that social networks don’t really act as editors, he said that “it is a strange kind of editor that does not actually look at the material” before it is posted. He also said that “upwards of 99 percent of what goes on the platforms is basically passed through without review.”

Justice Elena Kagan replied, “But that 1 percent seems to have gotten some people extremely angry.” Describing the platforms’ moderation practices, she said the 1 percent of content that is moderated is “like, ‘we don’t want anti-vaxxers on our site or we don’t want insurrectionists on our site.’ I mean, that’s what motivated these laws, isn’t it? And that’s what’s getting people upset about them is that other people have different views about what it means to provide misinformation as to voting and things like that.”

Later, Kagan said, “I’m taking as a given that YouTube or Facebook or whatever has expressive views. There are particular kinds of expression defined by content that they don’t want anywhere near their site.”

Pointing to moderation of hate speech, bullying, and misinformation about voting and public health, Kagan asked, “Why isn’t that a classic First Amendment violation for the state to come in and say, ‘we’re not going to allow you to enforce those sorts of restrictions?'”

Whitaker urged Kagan to “look at the objective activity being regulated, namely censoring and deplatforming, and ask whether that expresses a message. Because they [the social networks] host so much content, an objective observer is not going to readily attribute any particular piece of content that appears on their site to some decision to either refrain from or to censor or deplatform.”

Thomas: Who speaks when an algorithm moderates?

Justice Clarence Thomas expressed doubts about whether content moderation conveys an editorial message. “Tell me again what the expressive conduct is that, for example, YouTube engages in when it or Twitter deplatforms someone. What is the expressive conduct and to whom is it being communicated?” Thomas asked.

Clement said the platforms “are sending a message to that person and to their broader audience that that material” isn’t allowed. As a result, users are “not going to see material that violates the terms of use. They’re not going to see a bunch of material that glorifies terrorism. They’re not going to see a bunch of material that glorifies suicide,” Clement said.

Thomas asked who is doing the “speaking” when an algorithm performs content moderation, particularly when “it’s a deep-learning algorithm which teaches itself and has very little human intervention.”

“So who’s speaking then, the algorithm or the person?” Thomas asked.

Clement said that Facebook and YouTube are “speaking, because they’re the ones that are using these devices to run their editorial discretion across these massive volumes.” The need to use algorithms to automate moderation demonstrates “the volume of material on these sites, which just shows you the volume of editorial discretion,” he said.

How your sensitive data can be sold after a data broker goes bankrupt

playing fast and loose —

Sensitive location data could be sold off to the highest bidder.

In 2021, a company specializing in collecting and selling location data called Near bragged that it had “The World’s Largest Dataset of People’s Behavior in the Real-World,” with data representing “1.6B people across 44 countries.” Last year the company went public with a valuation of $1 billion (via a SPAC). Seven months later, it filed for bankruptcy and agreed to sell itself.

But for the “1.6B people” that Near said its data represents, the important question is: What happens to Near’s mountain of location data? Any company could gain access to it by purchasing Near’s assets.

The prospect of this data, including Near’s collection of location data from sensitive locations such as abortion clinics, being sold off in bankruptcy has raised alarms in Congress. Last week, Sen. Ron Wyden (D-Ore.) wrote the Federal Trade Commission (FTC) urging the agency to “protect consumers and investors from the outrageous conduct” of Near, citing his office’s investigation into the India-based company.

Wyden’s letter also urged the FTC “to intervene in Near’s bankruptcy proceedings to ensure that all location and device data held by Near about Americans is promptly destroyed and is not sold off, including to another data broker.” The FTC took a similar action in 2010 to block the use of 11 years’ worth of subscriber personal data during the bankruptcy proceedings of XY Magazine, which was oriented toward young gay men. The agency requested that the data be destroyed to prevent its misuse.

Wyden’s investigation was spurred by a May 2023 Wall Street Journal report that Near had licensed location data to the anti-abortion group Veritas Society so it could target ads to visitors of Planned Parenthood clinics and attempt to dissuade women from seeking abortions. Wyden’s investigation revealed that the group’s geofencing campaign focused on 600 Planned Parenthood clinics in 48 states. The Journal also revealed that Near had been selling its location data to the Department of Defense and intelligence agencies.

As of publication, Near has not responded to requests for comment.

According to Near’s privacy policy, all of the data it has collected can be transferred to new owners. Under the heading “Who do you share my personal data with?” it lists “Prospective buyers of our business.”

This type of clause is common in privacy policies, and is a regular part of businesses being bought and sold. Where it gets complicated is when the company being sold owns data containing sensitive information.

This week, a new bankruptcy court filing showed that Wyden’s requests were granted. The order places restrictions on the use, sale, licensing, or transfer of location data collected from sensitive locations in the US. It requires any company that purchases the data to establish a “sensitive location data program” with detailed policies for handling such data and to ensure ongoing monitoring and compliance, including by creating a list of sensitive locations such as reproductive health care facilities, doctor’s offices, houses of worship, mental health care providers, corrections facilities, and shelters, among others. The order also demands that the company cease any collection, use, or transfer of location data unless consumers have explicitly provided consent.
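
At its core, that kind of “sensitive location data program” amounts to screening location records against a list of sensitive places before any further use, sale, or transfer. The sketch below is only an illustration of the mechanics, not the order’s actual requirements; the site list, buffer radius, and record layout are hypothetical.

```python
import math

# Hypothetical sensitive sites (name, lat, lon); the order's real list covers
# categories like reproductive health clinics, houses of worship, and shelters.
SENSITIVE_SITES = [
    ("example_clinic", 43.0731, -89.4012),
    ("example_shelter", 44.9778, -93.2650),
]
BUFFER_METERS = 250  # assumed exclusion radius; the order does not specify one

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_sensitive(lat, lon):
    """True if a location ping falls within the buffer around any listed site."""
    return any(haversine_m(lat, lon, s_lat, s_lon) <= BUFFER_METERS
               for _, s_lat, s_lon in SENSITIVE_SITES)

def scrub(records):
    """Drop pings near sensitive sites before any use, sale, or transfer."""
    return [r for r in records if not is_sensitive(r["lat"], r["lon"])]
```

A one-shot filter like this is only part of the picture; the order also ties any use of the remaining data to explicit consumer consent and ongoing monitoring.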

In a statement emailed to The Markup, Wyden wrote, “I commend the FTC for stepping in—at my request—to ensure that this data broker’s stockpile of Americans’ sensitive location data isn’t abused, again.”

Wyden called for protecting sensitive location data from data brokers, citing the new legal threats to women since the Supreme Court’s June 2022 decision to overturn the abortion-rights ruling Roe v. Wade. Wyden wrote, “The threat posed by the sale of location data is clear, particularly to women who are seeking reproductive care.”

The bankruptcy order also provided a rare glimpse into how data brokers license data to one another. Near’s list of contracts included agreements with several location brokers, ad platforms, universities, retailers, and city governments.

It is not clear from the filing whether the agreements involved Near licensing its data to those companies, Near licensing data from them, or both.

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

AT&T’s botched network update caused yesterday’s major wireless outage

AT&T outage cause —

AT&T blamed itself for “incorrect process used as we were expanding our network.”

Cellular towers in Redondo Beach, California, on February 22, 2024.

Getty Images | Eric Thayer

AT&T said a botched update related to a network expansion caused the wireless outage that disrupted service for many mobile customers yesterday.

“Based on our initial review, we believe that today’s outage was caused by the application and execution of an incorrect process used as we were expanding our network, not a cyber attack,” AT&T said on its website last night. “We are continuing our assessment of today’s outage to ensure we keep delivering the service that our customers deserve.”

While “incorrect process” is a bit vague, an ABC News report that cited anonymous sources said it was a software update that went wrong. AT&T hasn’t said exactly how many cellular customers were affected, but there were over 70,000 problem reports on the DownDetector website yesterday morning.

The outage began early in the morning, and AT&T said at 11:15 am ET yesterday that “three-quarters of our network has been restored.” By 3:10 pm ET, AT&T said it had “restored wireless service to all our affected customers.”

We asked AT&T for more information on the extent of the outage and its cause today, but a spokesperson said the company had no further comment.

FCC investigates

The outage was big enough that the Federal Communications Commission said its Public Safety and Homeland Security Bureau was actively investigating. The FCC also said it was in touch with FirstNet, the nationwide public safety network that was built by AT&T. Some FirstNet users reported frustrations related to the outage.

The San Francisco Fire Department said it was monitoring the outage because it appeared to be preventing “AT&T wireless customers from making and receiving any phone calls (including to 911).” The FCC sometimes issues fines to telcos over 911 outages.

The US Cybersecurity and Infrastructure Security Agency reportedly said it was looking into the outage, and a White House spokesperson said the FBI was checking on it, too. But it was determined pretty quickly that the outage wasn’t caused by cyber-attackers.

Yelp: It’s gotten worse since Google made changes to comply with EU rules

Illustration of Google and Yelp logos.

Anjali Nair; Getty Images

To comply with looming rules that ban tech giants from favoring their own services, Google has been testing new-look search results for flights, trains, hotels, restaurants, and products in Europe. The EU’s Digital Markets Act is supposed to help smaller companies get more traffic from Google, but reviews service Yelp says that when it tested Google’s design tweaks with consumers, they had the opposite effect—making people less likely to click through to Yelp or another Google competitor.

The results, which Yelp shared with European regulators in December and WIRED this month, put some numerical backing behind complaints from Google rivals in travel, shopping, and hospitality that its efforts to comply with the DMA are insufficient—and potentially more harmful than the status quo. Yelp and thousands of others have been demanding that the EU hold a firm line against the giant companies, including Apple and Amazon, that are subject to what’s widely considered the world’s strictest antitrust law, violations of which can draw fines of up to 10 percent of global annual sales.

“All the gatekeepers are trying to hold on as long as possible to the status quo and make the new world unattractive,” says Richard Stables, CEO of shopping comparison site Kelkoo, which is unhappy with how Google has tweaked shopping results to comply with the DMA. “That’s really the game plan.”

Google spokesperson Rory O’Donoghue says the more than 20 changes made to search in response to the DMA are providing more opportunities for services such as Yelp to show up in results. “To suggest otherwise is plain wrong,” he says. Overall, Google’s tests of various DMA-inspired designs show clicks to review and comparison websites are up, O’Donoghue says—at the cost of users losing shortcuts to Google tools and individual businesses like airlines and restaurants facing a drop in visits from Google search. “We’ve been seeking feedback from a range of stakeholders over many months as we try to balance the needs of different types of websites while complying with the law,” he says.

Google, which generates 30 percent of its sales from Europe, the Middle East, and Africa, views the DMA as disrespecting its expertise in what users want. Critics such as Yelp argue that Google sometimes siphons users away from the more reliable content they offer. Yelp competes with Google for advertisers but generated less than 1 percent of its record sales of $1.3 billion last year from outside the US. An increase in European traffic could significantly boost its business.

To study search changes, Yelp worked with user-research company Lyssna to watch how hundreds of consumers from around the world interacted with Google’s new EU search results page when asked to find a dinner spot in Paris. For searches like that or for other “local” businesses, as Google calls them, one new design features results from Google Maps data at the top of the page below the search bar but adds a new box widget lower down containing images from and links to reviews websites like Yelp.

The experiments found that about 73 percent of about 500 people using that new design clicked results that kept them inside Google’s ecosystem—an increase over the 55 percent who did so when the design Google is phasing out in Europe was tested with a smaller pool of roughly 250 people.

Yelp also tested a variation of the new design. In this version, which Google has shared with regulators, the new box featuring review websites is placed above the maps widget. It was more successful in drawing people to try alternatives to Google, with only about 44 percent of consumers in the experiment sticking with the search giant. Though the box and widget will be treated equally by Google’s search algorithms, the order the features appear in will vary based on those calculations. Yelp’s concern is that Google will win out too often.
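
The article reports only rounded proportions and approximate sample sizes, but even a back-of-the-envelope two-proportion test on those figures suggests the gap between the new design and the outgoing one is far larger than sampling noise would explain. The sketch below uses only the numbers quoted above; Lyssna’s own analysis, if any, was not published.

```python
import math

# Rounded figures from the article (approximate, for illustration only).
p_new, n_new = 0.73, 500  # new EU design: share of users who stayed inside Google
p_old, n_old = 0.55, 250  # outgoing design: share of users who stayed inside Google

# Standard two-proportion z-test on the difference in click-through shares.
pooled = (p_new * n_new + p_old * n_old) / (n_new + n_old)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_new + 1 / n_old))
z = (p_new - p_old) / se

print(f"difference: {p_new - p_old:.2f}, z ≈ {z:.1f}")
# z comes out near 5, so a gap this size is very unlikely to be an artifact of
# the differing pool sizes alone.
```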

Yelp proposed to EU regulators that, to produce fairer outcomes, Google should instead amend the map widget on results pages to include business listings and ratings from numerous providers, placing data from Google’s directory right alongside Yelp and others.

Companies such as Yelp that are critical of the changes Google is testing have called on the European Commission to open an investigation into Google immediately on March 7, when enforcement of the DMA begins.

“Yelp urges regulators to compel Google to fully comply with both the letter and spirit of the DMA,” says Yelp’s vice president of public policy, David Segal. “Google will soon be in violation of both, because if you look at what Google has put forth, it’s pretty clear that its services still have the best real estate.”

Vending machine error reveals secret face image database of college students

“Stupid M&M machines” —

Facial-recognition data is typically used to prompt more vending machine sales.

Aurich Lawson | Mars | Getty Images

Canada-based University of Waterloo is racing to remove M&M-branded smart vending machines from campus after outraged students discovered the machines were covertly collecting facial-recognition data without their consent.

The scandal started when a student using the alias SquidKid47 posted an image on Reddit showing a campus vending machine error message, “Invenda.Vending.FacialRecognitionApp.exe,” displayed after the machine failed to launch a facial recognition application that nobody expected to be part of the process of using a vending machine.

Reddit post shows error message displayed on a University of Waterloo vending machine (cropped and lightly edited for clarity).

“Hey, so why do the stupid M&M machines have facial recognition?” SquidKid47 pondered.

The Reddit post sparked an investigation from a fourth-year student named River Stanley, who was writing for a university publication called MathNEWS.

Stanley sounded the alarm after consulting Invenda sales brochures that promised “the machines are capable of sending estimated ages and genders” of every person who used the machines, without ever requesting consent.

This frustrated Stanley, who discovered that Canada’s privacy commissioner had investigated a shopping mall operator called Cadillac Fairview years earlier after finding that some of the malls’ informational kiosks were secretly “using facial recognition software on unsuspecting patrons.”

Only because of that official investigation did Canadians learn that “over 5 million nonconsenting Canadians” were scanned into Cadillac Fairview’s database, Stanley reported. While Cadillac Fairview was ultimately forced to delete the entire database, Stanley wrote that the consequences for Invenda clients like Mars that collect similarly sensitive facial recognition data without consent remain unclear.

Stanley’s report ended with a call for students to demand that the university “bar facial recognition vending machines from campus.”

A University of Waterloo spokesperson, Rebecca Elming, eventually responded, confirming to CTV News that the school had asked for the vending machine software to be disabled until the machines could be removed.

Students told CTV News that their confidence in the university’s administration was shaken by the controversy. Some students claimed on Reddit that, while waiting for the school to respond, they attempted to cover the vending machine cameras with gum or Post-it notes. One student pondered whether “there are other places this technology could be being used” on campus.

Elming was not able to confirm the exact timeline for when the machines would be removed, other than telling Ars it would happen “as soon as possible.” She told Ars she is “not aware of any similar technology in use on campus.” And for any casual snackers on campus wondering when, if ever, students could expect the vending machines to be replaced with snack dispensers not equipped with surveillance cameras, Elming confirmed that “the plan is to replace them.”

Invenda claims machines are GDPR-compliant

MathNEWS’ investigation tracked down responses from companies responsible for smart vending machines on the University of Waterloo’s campus.

Adaria Vending Services told MathNEWS that “what’s most important to understand is that the machines do not take or store any photos or images, and an individual person cannot be identified using the technology in the machines. The technology acts as a motion sensor that detects faces, so the machine knows when to activate the purchasing interface—never taking or storing images of customers.”

According to Adaria and Invenda, students shouldn’t worry about data privacy because the vending machines are “fully compliant” with the world’s toughest data privacy law, the European Union’s General Data Protection Regulation (GDPR).

“These machines are fully GDPR compliant and are in use in many facilities across North America,” Adaria’s statement said. “At the University of Waterloo, Adaria manages last mile fulfillment services—we handle restocking and logistics for the snack vending machines. Adaria does not collect any data about its users and does not have any access to identify users of these M&M vending machines.”

Under the GDPR, face image data is considered among the most sensitive data that can be collected and typically requires explicit consent, so it’s unclear how the machines could meet that high bar, given the Canadian students’ experiences.
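
Adaria’s description, if accurate, implies a detect-and-discard pipeline: camera frames are analyzed in memory only to register that a face is present (and, per Invenda’s brochures, to estimate age and gender), and the pixels themselves are never stored. The sketch below illustrates what such a loop could look like; it uses OpenCV’s bundled Haar cascade face detector, and the demographic estimator is a hypothetical stub, since Invenda has not published how its software actually works.

```python
import cv2  # pip install opencv-python

# Pretrained frontal-face Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def estimate_demographics(face_crop):
    """Hypothetical stub: a vendor model would return an estimated age band/gender."""
    return {"age_band": "unknown", "gender": "unknown"}

def process_frame(frame):
    """Detect faces in one frame and emit only derived metadata, never the pixels."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    events = [estimate_demographics(frame[y:y + h, x:x + w])
              for (x, y, w, h) in faces]
    # The frame is discarded when this function returns; only `events` (counts
    # and estimates) would move on. Whether even that processing of biometric
    # data is lawful without explicit consent is the open GDPR question.
    return events
```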

According to a press release from Invenda, the maker of M&M candies, Mars, was a key part of Invenda’s expansion into North America. It was only after closing a $7 million funding round, including deals with Mars and other major clients like Coca-Cola, that Invenda could push for expansive global growth that seemingly vastly expands its smart vending machines’ data collection and surveillance opportunities.

“The funding round indicates confidence among Invenda’s core investors in both Invenda’s corporate culture, with its commitment to transparency, and the drive to expand global growth,” Invenda’s press release said.

But University of Waterloo students like Stanley now question Invenda’s “commitment to transparency” in North American markets, especially since the company is seemingly openly violating Canadian privacy law, Stanley told CTV News.

On Reddit, while some students joked that SquidKid47’s face “crashed” the machine, others asked if “any pre-law students wanna start up a class-action lawsuit?” One commenter summed up students’ frustration by typing in all caps, “I HATE THESE MACHINES! I HATE THESE MACHINES! I HATE THESE MACHINES!”

Avast ordered to stop selling browsing data from its browsing privacy apps

Security, privacy, things of that nature —

Identifiable data included job searches, map directions, “cosplay erotica.”

Getty Images

Avast, a name known for its security research and antivirus apps, has long offered Chrome extensions, mobile apps, and other tools aimed at increasing privacy.

Avast’s apps would “block annoying tracking cookies that collect data on your browsing activities,” and prevent web services from “tracking your online activity.” Deep in its privacy policy, Avast said information that it collected would be “anonymous and aggregate.” In its fiercest rhetoric, Avast’s desktop software claimed it would stop “hackers making money off your searches.”

All of that language was offered up while Avast was collecting users’ browser information from 2014 to 2020, then selling it to more than 100 other companies through a since-shuttered entity known as Jumpshot, according to the Federal Trade Commission. Under a recent proposed FTC order (PDF), Avast must pay $16.5 million, which is “expected to be used to provide redress to consumers,” according to the FTC. Avast will also be prohibited from selling browsing data in the future and must obtain express consent for future data gathering, notify customers about prior data sales, and implement a “comprehensive privacy program” to address prior conduct.

Reached for comment, Avast provided a statement that noted the company’s closure of Jumpshot in early 2020. “We are committed to our mission of protecting and empowering people’s digital lives. While we disagree with the FTC’s allegations and characterization of the facts, we are pleased to resolve this matter and look forward to continuing to serve our millions of customers around the world,” the statement reads.

Data was far from anonymous

The FTC’s complaint (PDF) notes that after Avast acquired then-antivirus competitor Jumpshot in early 2014, it rebranded the company as an analytics seller. Jumpshot advertised that it offered “unique insights” into the habits of “[m]ore than 100 million online consumers worldwide.” That included the ability to “[s]ee where your audience is going before and after they visit your site or your competitors’ sites, and even track those who visit a specific URL.”

While Avast and Jumpshot claimed that the data had identifying information removed, the FTC argues this was “not sufficient.” Jumpshot’s offerings included a unique device identifier for each browser, which appeared in products like an “All Clicks Feed,” “Search Plus Click Feed,” “Transaction Feed,” and more. The FTC’s complaint detailed how various companies would purchase these feeds, often with the express purpose of pairing them with a company’s own data, down to an individual user basis. Some Jumpshot contracts attempted to prohibit re-identifying Avast users, but “those prohibitions were limited,” the complaint notes.
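
The complaint’s point about “pairing” is easiest to see with a toy example: if a clickstream carries a persistent device identifier, a buyer that already knows which device belongs to which customer (from its own analytics or purchase records) can join the two datasets and attach identities to the supposedly anonymous browsing history. The table layout and values below are invented purely to illustrate the mechanics the FTC describes.

```python
import pandas as pd

# Hypothetical "de-identified" clickstream keyed by a persistent device ID.
clicks = pd.DataFrame({
    "device_id": ["abc123", "abc123", "def456"],
    "timestamp": ["2019-03-01 09:14", "2019-03-01 09:20", "2019-03-02 18:05"],
    "url": ["health-site.example/symptoms",
            "maps.example/directions",
            "jobs.example/fort-meade"],
})

# A buyer's own records, tying the same device ID to a named customer.
customers = pd.DataFrame({
    "device_id": ["abc123"],
    "customer_email": ["jane.doe@example.com"],
})

# One merge on the shared identifier re-attaches an identity to the history.
reidentified = clicks.merge(customers, on="device_id", how="inner")
print(reidentified)
```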

The connection between Avast and Jumpshot became broadly known in January 2020, after reporting by Vice and PC Magazine revealed that clients, including Home Depot, Google, Microsoft, Pepsi, and McKinsey, were buying data from Jumpshot, as seen in confidential contracts. Data obtained by the publications showed that buyers could purchase data including Google Maps look-ups, individual LinkedIn and YouTube pages, porn sites, and more. “It’s very granular, and it’s great data for these companies, because it’s down to the device level with a timestamp,” one source told Vice.

The FTC’s complaint provides more detail on how Avast, on its own web forums, sought to downplay its Jumpshot presence. Avast suggested both that only aggregated data was provided to Jumpshot and that users were informed during product installation about collecting data to “better understand new and interesting trends.” Neither of these claims proved true, the FTC suggests. And the data collected was far from harmless, given its re-identifiable nature:

For example, a sample of just 100 entries out of trillions retained by Respondents showed visits by consumers to the following pages: an academic paper on a study of symptoms of breast cancer; Sen. Elizabeth Warren’s presidential candidacy announcement; a CLE course on tax exemptions; government jobs in Fort Meade, Maryland with a salary greater than $100,000; a link (then broken) to the mid-point of a FAFSA (financial aid) application; directions on Google Maps from one location to another; a Spanish-language children’s YouTube video; a link to a French dating website, including a unique member ID; and cosplay erotica.

In a blog post accompanying its announcement, FTC Senior Attorney Lesley Fair writes that, in addition to the dual nature of Avast’s privacy products and Jumpshot’s extensive tracking, the FTC is increasingly viewing browsing data as “highly sensitive information that demands the utmost care.” “Data about the websites a person visits isn’t just another corporate asset open to unfettered commercial exploitation,” Fair writes.

FTC commissioners voted 3-0 to issue the complaint and accept the proposed consent agreement. Chair Lina Khan, along with commissioners Rebecca Slaughter and Alvaro Bedoya, issued a statement on their vote.

Since the conduct detailed in the FTC’s complaint and the wind-down of its Jumpshot business, Avast has been acquired by Gen Digital, a firm whose brands include Norton, Avast, LifeLock, Avira, AVG, CCleaner, and ReputationDefender, among other security businesses.

Disclosure: Condé Nast, Ars Technica’s parent company, received data from Jumpshot before its closure.

India’s plan to let 1998 digital trade deal expire may worsen chip shortage

India’s plan to let a moratorium on imposing customs duties on cross-border digital e-commerce transactions expire may end up hurting India’s more ambitious plans to become a global chip leader in the next five years, Reuters reported.

It could also worsen the global chip shortage by spiking semiconductor industry costs at a time when many governments worldwide are investing heavily in expanding domestic chip supplies in efforts to keep up with rapidly advancing technologies.

Early next week, world leaders will convene at a World Trade Organization (WTO) meeting, just before the deadline to extend the moratorium hits in March. The moratorium has been in place since 1998 and renewed every two years since, but India has grown concerned that it’s losing significant revenue by not imposing taxes as demand rises for digital goods like movies, e-books, and games.

Hoping to change India’s mind, a global consortium of semiconductor industry associations known as the World Semiconductor Council (WSC) sent a letter to Indian Prime Minister Narendra Modi on Thursday.

Reuters reviewed the letter, reporting that the WSC warned Modi that ending the moratorium “would mean tariffs on digital e-commerce and an innumerable number of transfers of chip design data across countries, raising costs and worsening chip shortages.”

Pointing to Modi’s $10 billion semiconductor incentive package—which Modi has said is designed to advance India’s industry through “giant leaps” in its mission to become a technology superpower—the WSC cautioned Modi that pushing for customs duties may dash those global chip leader dreams.

Studies suggest that India should be offering tax incentives, not threatening to impose duties on chip design data. That includes a study from earlier this year, released after the Semiconductor Industry Association and the India Electronics and Semiconductor Association commissioned a report from the Information Technology and Innovation Foundation (ITIF).

ITIF’s goal was to evaluate “India’s existing semiconductor ecosystem and policy frameworks” and offer “recommendations to facilitate longer-term strategic development of complementary semiconductor ecosystems in the US and India,” a press release said, partly in order to “deepen commercial ties” between the countries. The Prime Minister’s Office (PMO) has also reported a similar goal to deepen commercial ties with the European Union.

Among recommendations to “strengthen India’s semiconductor competitiveness,” ITIF’s report encouraged India to advance cooperation with the US and introduce policy reforms that “lower the cost of doing business for semiconductor companies in India”—by “offering tax breaks to chip companies” and “expediting clearance times for goods entering the country.”

Because the duties could spike chip industry costs at a time when global cross-border data transmissions are expected to reach $11 trillion by 2025, the WSC wrote, they may “impede India’s efforts to advance its semiconductor industry and attract semiconductor investment.” That could negatively impact the “more than 20 percent of the world’s semiconductor design workforce” that is based in India.

The Prime Minister’s Office did not immediately respond to Ars’ request for comment.

Reddit admits more moderator protests could hurt its business

SEC filing —

Losing third-party tools “could harm our moderators’ ability to review content…”

Reddit logo on website displayed on a laptop screen is seen in this illustration photo taken in Krakow, Poland on February 22, 2024.

Reddit filed to go public on Thursday (PDF), revealing various details of the social media company’s inner workings. Among the revelations, Reddit acknowledged the threat of future user protests and the value of third-party Reddit apps.

On July 1, Reddit enacted API rule changes—including new, expensive pricing—that resulted in many third-party Reddit apps closing. Disturbed by the changes, the timeline of the changes, and concerns that Reddit wasn’t properly appreciating third-party app developers and moderators, thousands of Reddit users protested by making the subreddits they moderate private or read-only, or by engaging in other forms of protest, such as only discussing John Oliver or porn.

Protests went on for weeks and, at their onset, crashed Reddit for three hours. At the time, Reddit CEO Steve Huffman said the protests did not have “any significant revenue impact so far.”

In its filing with the Securities and Exchange Commission (SEC), though, Reddit acknowledged that another such protest could hurt its pockets:

While these activities have not historically had a material impact on our business or results of operations, similar actions by moderators and/or their communities in the future could adversely affect our business, results of operations, financial condition, and prospects.

The company also said that bad publicity and media coverage, such as the kind that stemmed from the API protests, could be a risk to Reddit’s success. The Form S-1 said bad PR around Reddit, including its practices, prices, and mods, “could adversely affect the size, demographics, engagement, and loyalty of our user base,” adding:

For instance, in May and June 2023, we experienced negative publicity as a result of our API policy changes.

Reddit’s filing also said that negative publicity and moderators disrupting the normal operation of subreddits could hurt user growth and engagement goals. The company highlighted financial incentives associated with having good relationships with volunteer moderators, noting that if enough mods decided to disrupt Reddit (like they did when they led protests last year), “results of operations, financial condition, and prospects could be adversely affected.” Reddit infamously forcibly removed moderators from their posts during the protests, saying they broke Reddit rules by refusing to reopen the subreddits they moderated.

“As communities grow, it can become more and more challenging for communities to find qualified people willing to act as moderators,” the filing says.

Losing third-party tools could hurt Reddit’s business

Much of the momentum for last year’s protests came from users, including long-time Redditors, mods, and people with accessibility needs, feeling that third-party apps were necessary to enjoyably and properly access and/or moderate Reddit. Reddit’s own technology has disappointed users in the past (leading some to cling to Old Reddit, which uses an older interface, for example). In its SEC filing, Reddit pointed to the value of third-party “tools” despite its API pricing killing off many of the most popular examples.

Reddit’s filing discusses losing moderators as a business risk and notes how important third-party tools are in maintaining mods:

While we provide tools to our communities to manage their subreddits, our moderators also rely on their own and third-party tools. Any disruption to, or lack of availability of, these third-party tools could harm our moderators’ ability to review content and enforce community rules. Further, if we are unable to provide effective support for third-party moderation tools, or develop our own such tools, our moderators could decide to leave our platform and may encourage their communities to follow them to a new platform, which would adversely affect our business, results of operations, financial condition, and prospects.

Since Reddit’s API policy changes, a small number of third-party Reddit apps remain available. But some of the remaining third-party Reddit app developers have previously told Ars Technica that they’re unsure of their apps’ viability under Reddit’s terms. Nondisclosure agreement requirements and the lack of a finalized developer platform also drive uncertainty around the longevity of the third-party Reddit app ecosystem, according to devs Ars spoke with this year.

ISPs keep giving false broadband coverage data to the FCC, groups say

Illustration of a US map with crisscrossing lines representing a broadband network.

Getty Images | Andrey Denisyuk

Internet service providers are still submitting false coverage information to the Federal Communications Commission, and the FCC’s process for challenging errors isn’t good enough to handle all the false claims, several groups told the agency this week.

The latest complaints focus on fixed wireless providers that offer home Internet service via signals sent to antennas. ISPs that compete against these wireless providers say that exaggerated coverage data prevents them from obtaining government funding designed to subsidize the building of networks in areas with limited coverage.

The wireless company LTD Broadband (which has been renamed GigFire) came under particular scrutiny in an FCC filing submitted by the Accurate Broadband Data Alliance, a group of about 50 ISPs in the Midwest.

“A number of carriers, including LTD Broadband/GigFire LLC and others, continue to overreport Internet service availability, particularly in relation to fixed wireless network capabilities and reach,” the group said. “These errors and irregularities in the Map will hinder and, in many cases, prevent deployment of essential broadband services by redirecting funds away from areas truly lacking sufficient broadband.”

ISPs are required to submit coverage data for the FCC’s broadband map, and there is a challenge process in which false claims can be contested. The FCC recently sought comment on how well the challenge process is working.

CEO blasts “100-year-old telcos”

The Accurate Broadband Data Alliance accused GigFire of behaving badly in the challenge process, saying “LTD Broadband/GigFire LLC often continues to assert unrealistic broadband claims without evidence and even accuses the challenger of falsifying information during the challenge process.”

GigFire CEO Corey Hauer disputed the Accurate Broadband Data Alliance’s accusations. Hauer told Ars today that “GigFire evaluated over 5 million locations and established that 339,598 are eligible to get service and that is accurately reflected in our BDC [Broadband Data Collection] filings.”

Hauer said GigFire offers service in Illinois, Iowa, Minnesota, Missouri, Nebraska, North Dakota, South Dakota, Tennessee, and Wisconsin. The company’s service area is mostly wireless but includes about 20,000 homes passed by fiber lines, he said.

“GigFire wants to service as many customers as we can, but we have no interest in falsely telling customers that they qualify for service,” Hauer told us.

Hauer also said that “GigFire uses widely accepted wireless propagation models to compute our coverage. It’s just math, there is no way to game the system.” He said that telcos “feel they should get an additional wheelbarrow full of ratepayer money, and because of our coverage, they will not.”

“Many of these 100-year-old telcos were so used to being monopolies, that it appears they struggle with consumers that live in their legacy telco boundaries having competitive choices,” Hauer said.

Wireless claims hard to verify, groups say

Wireline providers have also exaggerated coverage, as we’ve reported. Comcast admitted to mistakes last year after previously insisting that false data it gave the FCC was correct. In another case, a small Ohio ISP called Jefferson County Cable admitted lying to the FCC about the size of its network in order to block funding to rivals.

But it can be especially hard to verify the claims made by fixed wireless providers, several groups representing wireline providers told the FCC. The FCC says that fixed wireless providers can submit either lists of locations or polygon coverage maps based on propagation modeling. GigFire submitted a list of locations.

Both the list and polygon approaches drew criticism from telco groups. The Minnesota Telecom Alliance told the FCC this week that “the highly generalized nature of the polygon coverage maps has tempted some competitive fixed wireless providers to exaggerate the extent of their service areas and the speeds of their services.”

The Minnesota group said that “polygon coverage maps are able to show only an alleged unsubsidized fixed wireless competitor’s theoretical potential signal coverage over a general area,” and don’t account for problems like “line-of-sight obstructions, terrain, foliage, weather conditions, and busy hour congestion” that can restrict coverage at specific locations.
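
The gap the Minnesota group describes is the difference between a link-budget calculation and conditions on the ground. A propagation model predicts received signal strength from transmit power, antenna gains, and free-space path loss; obstructions like foliage and terrain show up only as extra loss terms that a generalized coverage polygon may not include. Here is a minimal sketch with made-up radio parameters, chosen only to show how a location can look serviceable on paper and fail once real-world losses are added.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Hypothetical fixed wireless link; every parameter below is assumed.
eirp_dbm = 36             # tower transmit power plus antenna gain
rx_gain_dbi = 20          # subscriber dish antenna gain
rx_sensitivity_dbm = -80  # weakest usable signal for the advertised speed
distance_km, freq_mhz = 8, 5800

clear_path = eirp_dbm + rx_gain_dbi - fspl_db(distance_km, freq_mhz)
obstructed = clear_path - 15  # assumed extra loss from foliage/terrain at this site

for label, rx in (("clear line of sight", clear_path), ("with obstructions", obstructed)):
    verdict = "serviceable" if rx > rx_sensitivity_dbm else "unserviceable"
    print(f"{label}: {rx:.0f} dBm -> {verdict}")
```

With these assumed numbers, the unobstructed model clears the sensitivity threshold while the obstructed path does not, which is roughly the scenario the MTA says its members only discover when a technician visits the site.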

“MTA members are aware that many fixed wireless broadband service providers are unable to determine whether they actually can serve a specific location and what level of service they can provide to that location unless and until they send a technician to the site to attempt to install service,” the group said.

The Minnesota telco group complained that inaccurate filings reduce the number of locations at which a telco can receive Universal Service Fund (USF) money. It is often virtually impossible to successfully “challenge the accuracy of fixed wireless service availability claims that can adversely impact USF support,” the group said.

Snapchat isn’t liable for connecting 12-year-old to convicted sex offenders

A judge has dismissed a complaint from a parent and guardian of a girl, now 15, who was sexually assaulted when she was 12 years old after Snapchat recommended that she connect with convicted sex offenders.

According to the court filing, the abuse that the girl, C.O., experienced on Snapchat happened soon after she signed up for the app in 2019. Through its “Quick Add” feature, Snapchat “directed her” to connect with “a registered sex offender using the profile name JASONMORGAN5660.” After a little more than a week on the app, C.O. was bombarded with inappropriate images and subjected to sextortion and threats before the adult user pressured her to meet up, then raped her. Cops arrested the adult user the next day, resulting in his incarceration, but his Snapchat account remained active for three years despite reports of harassment, the complaint alleged.

Two years later, at 14, C.O. connected with another convicted sex offender on Snapchat, a former police officer who offered to give C.O. a ride to school and then sexually assaulted her. The second offender is also currently incarcerated, the judge’s opinion noted.

The lawsuit painted a picture of Snapchat’s ongoing neglect of minors it knows are being targeted by sexual predators. Prior to C.O.’s attacks, both adult users sent and requested sexually explicit photos, seemingly without the app detecting any child sexual abuse materials exchanged on the platform. C.O. had previously reported other adult accounts sending her photos of male genitals, but Snapchat allegedly “did nothing to block these individuals from sending her inappropriate photographs.”

Among other complaints, C.O.’s lawsuit alleged that Snapchat’s algorithm for its “Quick Add” feature was the problem. The feature allegedly detects when adult accounts are seeking to connect with young girls and, by design, recklessly sends more young girls their way—continually directing sexual predators toward vulnerable targets. Snapchat is allegedly aware of these abuses and, therefore, should be held liable for harm caused to C.O., the lawsuit argued.

Although C.O.’s case raised difficult questions, Judge Barbara Bellis ultimately agreed with Snapchat that Section 230 of the Communications Decency Act barred all claims and shielded Snap because “the allegations of this case fall squarely within the ambit of the immunity afforded to” platforms publishing third-party content.

According to Bellis, C.O.’s family had “clearly alleged” that Snap had failed to design its recommendation systems to block young girls from receiving messages from sexual predators. Even so, Section 230 immunity shields Snap from liability in this case because Bellis considered the messages exchanged to be third-party content. Snapchat designing its recommendation systems to deliver content is a protected activity, Bellis ruled.

Internet law professor Eric Goldman wrote in his blog that Bellis’ “well-drafted and no-nonsense opinion” is “grounded” in precedent. Pointing to an “extremely similar” 2008 case against MySpace—”which reached the same outcome that Section 230 applies to offline sexual abuse following online messaging”—Goldman suggested that “the law has been quite consistent for a long time.”

However, as this case was being decided, a seemingly conflicting ruling in a Los Angeles court found that “Section 230 didn’t protect Snapchat from liability for allegedly connecting teens with drug dealers,” MediaPost noted. Bellis acknowledged this outlier opinion but did not appear to consider it persuasive.

Yet, at the end of her opinion, Bellis seemed to take aim at Section 230 as perhaps being too broad.

She quoted a ruling from the First Circuit Court of Appeals, which noted that some Section 230 cases, presumably like C.O.’s, are “hard” for courts not because “the legal issues defy resolution,” but because Section 230 requires that the court “deny relief to plaintiffs whose circumstances evoke outrage.” She then went on to quote an appellate court ruling on a similarly “difficult” Section 230 case that warned “without further legislative action,” there is “little” that courts can do “but join with other courts and commentators in expressing concern” with Section 230’s “broad scope.”

Ars could not immediately reach Snapchat or lawyers representing C.O.’s family for comment.

Does Fubo’s antitrust lawsuit against ESPN, Fox, and WBD stand a chance?

Collaborating conglomerates —

Fubo: Media giants’ anticompetitive tactics already killed PS Vue, other streamers.

In this photo illustration, the FuboTV Inc. logo is displayed on a smartphone screen and ESPN, Warner Bros. Discovery and FOX logos in the background.

Fubo is suing Fox Corporation, The Walt Disney Company, and Warner Bros. Discovery (WBD) over their plans to launch a unified sports streaming app. Fubo, a live sports streaming service that has business relationships with the three companies, claims the firms have engaged in anticompetitive practices for years, leading to higher prices for consumers.

In an attempt to understand how much potential the allegations have to derail the app’s launch, Ars Technica read the 73-page sealed complaint and sought opinions from some antitrust experts. While some of Fubo’s allegations could be hard to prove, Fubo isn’t the only one concerned about the joint app’s potential to make it hard for streaming services to compete fairly.

Fubo wants to kill ESPN, Fox, and WBD’s joint sports app

Earlier this month, Disney, which owns ESPN, WBD (whose sports channels include TBS and TNT), and Fox, which owns Fox broadcast stations and Fox Sports channels like FS1, announced plans to launch an equally owned live sports streaming app this fall. Pricing hasn’t been confirmed but is expected to be in the $30-to-$50-per-month range. Fubo, for comparison, starts at $80 per month for English-language channels.

Via a lawsuit filed on Tuesday in US District Court for the Southern District of New York, Fubo is seeking an injunction against the app and joint venture (JV), a jury trial, and damages for an unspecified figure. There have been reports that Fubo was suing the three companies for $1 billion, but a Fubo spokesperson confirmed to Ars that this figure is incorrect.

“Insurmountable barriers”

Fubo, which was founded in 2015, is arguing that the three companies’ proposed app will result in higher prices for live sports streaming customers.

The New York City-headquartered company claims the collaboration would preclude other distributors of live sports content, like Fubo, from competing fairly. The lawsuit also claims that distributors like Fubo would see higher prices and worse agreements associated with licensing sports content due to the JV, which could even stop licensing critical sports content to companies like Fubo. Fubo’s lawsuit says that “once they have combined forces, Defendants’ incentive to exclude Fubo and other rivals will only increase.”

Disney, Fox, and WBD haven’t disclosed specifics about how their JV will impact how they license the rights to sports events to companies outside of their JV; however, they have claimed that they will license their respective entities to the JV on a non-exclusive basis.

That statement doesn’t specify, though, whether the companies will try to forcibly bundle content together.

“If the three firms get together and say, ‘We’re no longer going to provide to you these streams for resale separately. You must buy a bundle as a condition of getting any of them,’ that would … be an anti-competitive bundle that can be challenged under antitrust law,” Hal Singer, an economics professor at The University of Utah and managing director at Econ One, told Ars.

Lee Hepner, counsel at the American Economic Liberties Project, shared similar concerns about the JV with Ars:

Joint ventures raise the same concerns as mergers when the effect is to shut out competitors and gain power to raise prices and reduce quality. Sports streaming is an extremely lucrative market, and a joint venture between these three powerhouses will foreclose the ability of rivals like Fubo to compete on fair terms.

Fubo’s lawsuit cites research from Citi, finding that, combined, ESPN (26.8 percent), Fox (17.3 percent), and WBD (9.9 percent) own 54 percent of the US sports rights market.

In a statement, Fubo co-founder and CEO David Gandler said the three companies “are erecting insurmountable barriers that will effectively block any new competitors” and will leave sports streamers without options.

The US Department of Justice is reportedly eyeing the JV for an antitrust review and plans to look at the finalized terms, according to a February 15 Bloomberg report citing two anonymous “people familiar with the process.”

Twitter security staff kept firm in compliance by disobeying Musk, FTC says

Close call —

Lina Khan: Musk demanded “actions that would have violated the FTC’s Order.”

Elon Musk at the New York Times DealBook Summit on November 29, 2023, in New York City.

Getty Images | Michael Santiago

Twitter employees prevented Elon Musk from violating the company’s privacy settlement with the US government, according to Federal Trade Commission Chair Lina Khan.

After Musk bought Twitter in late 2022, he gave Bari Weiss and other journalists access to company documents in the so-called “Twitter Files” incident. The access given to outside individuals raised concerns that Twitter (which is currently named X) violated a 2022 settlement with the FTC, which has requirements designed to prevent repeats of previous security failures.

Some of Twitter’s top privacy and security executives also resigned shortly after Musk’s purchase, citing concerns that Musk’s rapid changes could cause violations of the settlement.

FTC staff deposed former Twitter employees and “learned that the access provided to the third-party individuals turned out to be more limited than the individuals’ tweets and other public reporting had indicated,” Khan wrote in a letter sent today to US Rep. Jim Jordan (R-Ohio). Khan’s letter said the access was limited because employees refused to comply with Musk’s demands:

The deposition testimony revealed that in early December 2022, Elon Musk had reportedly directed staff to grant an outside third-party individual “full access to everything at Twitter… No limits at all.” Consistent with Musk’s direction, the individual was initially assigned a company laptop and internal account, with the intent that the third-party individual be given “elevated privileges” beyond what an average company employee might have.

However, based on a concern that such an arrangement would risk exposing nonpublic user information in potential violation of the FTC’s Order, longtime information security employees at Twitter intervened and implemented safeguards to mitigate the risks. Ultimately the third-party individuals did not receive direct access to Twitter’s systems, but instead worked with other company employees who accessed the systems on the individuals’ behalf.

Khan: FTC “was right to be concerned”

Jordan is chair of the House Judiciary Committee and has criticized the investigation, claiming that “the FTC harassed Twitter in the wake of Mr. Musk’s acquisition.” Khan’s letter to Jordan today argues that the FTC investigation was justified.

“The FTC’s investigation confirmed that staff was right to be concerned, given that Twitter’s new CEO had directed employees to take actions that would have violated the FTC’s Order,” Khan wrote. “Once staff learned that the FTC’s Order had worked to ensure that Twitter employees took appropriate measures to protect consumers’ private information, compliance staff made no further inquiries to Twitter or anyone else concerning this issue.”

Khan also wrote that deep staff cuts following the Musk acquisition, and resignations of Twitter’s top privacy and compliance officials, meant that “there was no one left at the company responsible for interpreting and modifying data policies and practices to ensure Twitter was complying with the FTC’s Order to safeguard Americans’ personal data.” The letter continued:

During staff’s evaluation of the workforce reductions, one of the company’s recently departed lead privacy and security experts testified that Twitter Blue was being implemented too quickly so that the proper “security and privacy review was not conducted in accordance with the company’s process for software development.” Another expert testified that he had concerns about Mr. Musk’s “commitment to overall security and privacy of the organization.” Twitter, meanwhile, filed a motion seeking to eliminate the FTC Order that protected the privacy and security of Americans’ data. Fortunately for Twitter’s millions of users, that effort failed in court.

FTC still trying to depose Musk

While no violation was found in this case, the FTC isn’t done investigating. When contacted by Ars, an FTC spokesperson said the agency cannot rule out bringing lawsuits against Musk’s social network for violations of the settlement or US law.

“When we heard credible public reports of potential violations of protections for Twitter users’ data, we moved swiftly to investigate,” the FTC said in a statement today. “The order remains in place and the FTC continues to deploy the order’s tools to protect Twitter users’ data and ensure the company remains in compliance.”

The FTC also said it is continuing attempts to depose Musk. In July 2023, Musk’s X Corp. asked a federal court for an order that would terminate the settlement and prevent the FTC from deposing Musk. The court denied both requests in November. In a filing, US government lawyers said the FTC investigation had “revealed a chaotic environment at the company that raised serious questions about whether and how Musk and other leaders were ensuring X Corp.’s compliance with the 2022 Administrative Order.”

We contacted X today, but an auto-reply informed us that the company was busy and asked that we check back later.
