Policy

X filing “thermonuclear lawsuit” in Texas should be “fatal,” Media Matters says

Ever since Elon Musk’s X Corp sued Media Matters for America (MMFA) over a pair of reports that X (formerly Twitter) claims caused an advertiser exodus in 2023, one big question has remained for onlookers: Why is this fight happening in Texas?

In a motion to dismiss filed in Texas’ northern district last month, MMFA argued that X’s lawsuit suffers from a “fatal jurisdictional defect” and that “dismissal is also required for lack of venue.”

Notably, MMFA is based in Washington, DC, while “X is organized under Nevada law and maintains its principal place of business in San Francisco, California, where its own terms of service require users of its platform to litigate any disputes.”

“Texas is not a fair or reasonable forum for this lawsuit,” MMFA argued, suggesting that “the case must be dismissed or transferred” because “neither the parties nor the cause of action has any connection to Texas.”

Last Friday, X responded to the motion to dismiss, claiming that the lawsuit—which Musk has described as “thermonuclear”—was appropriately filed in Texas because MMFA “intentionally” targeted readers and at least two X advertisers located in Texas, Oracle and AT&T. According to X, because MMFA “identified Oracle, a Texas-based corporation, by name in its coverage,” MMFA “cannot claim surprise at being held to answer for its conduct in Texas.” X also claimed that Texas has jurisdiction because Musk resides in Texas and “makes numerous critical business decisions about X while in Texas.”

This so-called targeting of Texans caused a “substantial part” of alleged financial harms that X attributes to MMFA’s reporting, X alleged.

According to X, MMFA specifically targeted X in Texas by sending newsletters sharing its reports with “hundreds or thousands” of Texas readers and by allegedly soliciting donations from Texans to support MMFA’s reporting.

But MMFA pushed back, saying that “Texas subscribers comprise a disproportionately small percentage of Media Matters’ newsletter recipients” and that MMFA did “not solicit Texas donors to fund Media Matters’s journalism concerning X.” Because of this, X’s “efforts to concoct claim-related Texas contacts amount to a series of shots in the dark, uninformed guesses, and irrelevant tangents,” MMFA argued.

On top of that, MMFA argued that X could not attribute any financial harms allegedly caused by MMFA’s reports to either of the two Texas-based advertisers that X named in its court filings. Oracle, MMFA said, “by X’s own admission,… did not withdraw its ads” from X. And AT&T was not named in MMFA’s reporting, so “any investigation AT&T did into its ad placement on X was of its own volition and is not plausibly connected to Media Matters.” MMFA has argued that advertisers, particularly sophisticated Fortune 500 companies, made their own decisions to stop advertising on X, perhaps due to widely reported increases in hate speech on X or even Musk’s own seemingly antisemitic posting.

Ars could not immediately reach X, Oracle, or AT&T for comment.

X’s suit allegedly designed to break MMFA

MMFA President Angelo Carusone, who is a defendant in X’s lawsuit, told Ars that X’s recent filing has continued to “expose” the lawsuit as a “meritless and vexatious effort to inflict maximum damage on critical research and reporting about the platform.”

“It’s solely designed to basically break us or stop us from doing the work that we were doing originally,” Carusone said, confirming that the lawsuit has negatively impacted MMFA’s hate speech research on X.

MMFA argued that Musk could have sued in other jurisdictions, such as Maryland, DC, or California, and MMFA would not have disputed the venue, but Carusone suggested that Musk sued in Texas in hopes that it would be “a more friendly jurisdiction.”

Apple wouldn’t let Jon Stewart interview FTC Chair Lina Khan, TV host claims

The Problem with Jon Stewart —

Tech company also didn’t want a segment on Stewart’s show criticizing AI.

The Daily Show host Jon Stewart’s interview with FTC Chair Lina Khan. The conversation about Apple begins around 16:30 in the video.

Before the cancellation of The Problem with Jon Stewart on Apple TV+, Apple forbade the inclusion of Federal Trade Commission Chair Lina Khan as a guest and steered the show away from confronting issues related to artificial intelligence, according to Jon Stewart.

This isn’t the first we’ve heard of this rift between Apple and Stewart. When the Apple TV+ show was canceled last October, reports circulated that he told his staff that creative differences over guests and topics were a factor in the decision.

The New York Times reported that both China and AI were sticking points between Apple and Stewart. Stewart confirmed the broad strokes of that narrative in a CBS Morning Show interview after it was announced that he would return to The Daily Show.

“They decided that they felt that they didn’t want me to say things that might get me into trouble,” he explained.

Stewart’s comments during his interview with Khan yesterday marked the first time he’s gotten more specific publicly.

“I’ve got to tell you, I wanted to have you on a podcast, and Apple asked us not to do it—to have you. They literally said, ‘Please don’t talk to her,'” Stewart said while interviewing Khan on the April 1, 2024, episode of The Daily Show.

Khan appeared on the show to explain and evangelize the FTC’s efforts to battle corporate monopolies both in and outside the tech industry in the US and to explain the challenges the organization faces.

She became the FTC chair in 2021 and has since garnered a reputation for an aggressive and critical stance against monopolistic tendencies or practices among Big Tech companies like Amazon and Meta.

Stewart also confirmed previous reports that AI was a sensitive topic for Apple. “They wouldn’t let us do that dumb thing we did in the first act on AI,” he said, referring to the desk monologue segment that preceded the Khan interview in the episode.

The segment on AI in the first act of the episode mocked various tech executives for their utopian framing of AI and interspersed those claims with acknowledgments from many of the same leaders that AI would replace many people’s jobs. (It did not mention Apple or its leadership, though.)

Stewart and The Daily Show’s staff also included clips of current tech leaders suggesting that workers be retrained to work with or on AI when their current roles are disrupted by it. That was followed by a montage of US political leaders promising to retrain workers after various technological and economic disruptions over the years, with the implication that those retraining efforts were rarely as successful as promised.

The segment effectively lampooned some of the doublespeak about AI, though Stewart stopped short of venturing any solutions or alternatives to the current path, so it mostly just prompted outrage and laughs.

The Daily Show host Jon Stewart’s segment criticizing tech and political leaders on the topic of AI.

Apple currently uses AI-related technologies in its software, services, and devices, but so far it has not launched anything tapping into generative AI, which is the new frontier in AI that has attracted worry, optimism, and criticism from various parties.

However, the company is expected to roll out its first generative AI features as part of iOS 18, a new operating system update for iPhones. iOS 18 will likely be detailed during Apple’s annual developer conference in June and will reach users’ devices sometime in the fall.

Google agrees to delete Incognito data despite prior claim that’s “impossible”

Deleting files —

What a lawyer calls “a historic step,” Google considers not that “significant.”

To settle a class-action dispute over Chrome’s “Incognito” mode, Google has agreed to delete billions of data records reflecting users’ private browsing activities.

In a statement provided to Ars, users’ lawyer, David Boies, described the settlement as “a historic step in requiring honesty and accountability from dominant technology companies.” Based on Google’s insights, users’ lawyers valued the settlement between $4.75 billion and $7.8 billion, the Monday court filing said.

Under the settlement, Google agreed to delete class-action members’ private browsing data collected in the past, as well as to “maintain a change to Incognito mode that enables Incognito users to block third-party cookies by default.” This, plaintiffs’ lawyers noted, “ensures additional privacy for Incognito users going forward, while limiting the amount of data Google collects from them” over the next five years. Plaintiffs’ lawyers said that this means that “Google will collect less data from users’ private browsing sessions” and “Google will make less money from the data.”

“The settlement stops Google from surreptitiously collecting user data worth, by Google’s own estimates, billions of dollars,” Boies said. “Moreover, the settlement requires Google to delete and remediate, in unprecedented scope and scale, the data it improperly collected in the past.”

Google had already updated disclosures to users, changing the splash screen displayed “at the beginning of every Incognito session” to inform users that Google was still collecting private browsing data. Under the settlement, those disclosures to all users must be completed by March 31, after which the disclosures must remain. Google also agreed to “no longer track people’s choice to browse privately,” and the court filing said that “Google cannot roll back any of these important changes.”

Notably, the settlement does not award monetary damages to class members. Instead, Google agreed that class members retain “rights to sue Google individually for damages” through arbitration, which, users’ lawyers wrote, “is important given the significant statutory damages available under the federal and state wiretap statutes.”

“These claims remain available for every single class member, and a very large number of class members recently filed and are continuing to file complaints in California state court individually asserting those damages claims in their individual capacities,” the court filing said.

While “Google supports final approval of the settlement,” the company “disagrees with the legal and factual characterizations contained in the motion,” the court filing said. Google spokesperson José Castañeda told Ars that the tech giant thinks that the “data being deleted isn’t as significant” as Boies represents, confirming that Google was “pleased to settle this lawsuit, which we always believed was meritless.”

“The plaintiffs originally wanted $5 billion and are receiving zero,” Castañeda said. “We never associate data with users when they use Incognito mode. We are happy to delete old technical data that was never associated with an individual and was never used for any form of personalization.”

While Castañeda said that Google was happy to delete the data, a footnote in the court filing noted that initially, “Google claimed in the litigation that it was impossible to identify (and therefore delete) private browsing data because of how it stored data.” Now, under the settlement, however, Google has agreed “to remediate 100 percent of the data set at issue.”

Mitigation efforts include deleting fields Google used to detect users in Incognito mode, “partially redacting IP addresses,” and deleting “detailed URLs, which will prevent Google from knowing the specific pages on a website a user visited when in private browsing mode.” Keeping “only the domain-level portion of the URL (i.e., only the name of the website) will vastly improve user privacy by preventing Google (or anyone who gets their hands on the data) from knowing precisely what users were browsing,” the court filing said.
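
The filing does not spell out the exact redaction scheme, but the two transformations it names are straightforward to picture. Here is a minimal sketch, assuming IPv4 addresses and final-octet redaction (the function name and the specific approach are illustrative, not Google’s actual pipeline):

```python
from urllib.parse import urlsplit

def redact_record(url: str, ip: str) -> tuple[str, str]:
    """Sketch of the two named transformations: domain-only URLs, partial IPs."""
    # Keep only the hostname, dropping the path and query string that reveal
    # which specific pages a user visited.
    domain = urlsplit(url).hostname or ""
    # Partially redact an IPv4 address by zeroing the final octet (the filing
    # does not specify the scheme; this is one common approach).
    octets = ip.split(".")
    redacted_ip = ".".join(octets[:3] + ["0"]) if len(octets) == 4 else ip
    return domain, redacted_ip

print(redact_record("https://example.com/private/page?q=1", "203.0.113.42"))
# -> ('example.com', '203.0.113.0')
```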

Because Google did not oppose the motion for final approval, US District Judge Yvonne Gonzalez Rogers is expected to issue an order approving the settlement on July 30.

AT&T acknowledges data leak that hit 73 million current and former users

A lot of leaked data —

Data leak hit 7.6 million current AT&T users, 65.4 million former subscribers.

AT&T reset passcodes for millions of customers after acknowledging a massive leak involving the data of 73 million current and former subscribers.

“Based on our preliminary analysis, the data set appears to be from 2019 or earlier, impacting approximately 7.6 million current AT&T account holders and approximately 65.4 million former account holders,” AT&T said in an update posted to its website on Saturday.

An AT&T support article said the carrier is “reaching out to all 7.6 million impacted customers” and has “reset their passcodes,” adding: “In addition, we will be communicating with current and former account holders with compromised sensitive personal information.” AT&T said the leaked information varied by customer but included full names, email addresses, mailing addresses, phone numbers, Social Security numbers, dates of birth, AT&T account numbers, and passcodes.

AT&T’s acknowledgement of the leak described it as “AT&T data-specific fields [that] were contained in a data set released on the dark web.” But the same data appears to be on the open web as well. As security researcher Troy Hunt wrote, the data is “out there in plain sight on a public forum easily accessed by a normal web browser.”

The hacking forum has a public version accessible with any browser and a hidden service that requires a Tor network connection. Based on forum posts we viewed today, the leak seems to have appeared on both the public and Tor versions of the hacking forum on March 17 of this year. Viewing the AT&T data requires a hacking forum account and site “credits” that can be purchased or earned by posting on the forum.

Hunt told Ars today that the term “dark web” is “incorrect and misleading” in this case. The forum where the AT&T data appeared “does not meet the definition of dark web,” he wrote in an email. “No special software, no special network, just a plain old browser. It’s easily discoverable via a Google search and immediately shows many PII [Personal Identifiable Information] records from the AT&T breach. Registration is then free for anyone with the only remaining barrier being obtaining credits.”

We contacted AT&T today and will update this article if we get a response.

49 million email addresses

Hunt’s post on March 19 said the leaked information included a file with 73,481,539 lines of data that contained 49,102,176 unique email addresses. Another file with decrypted Social Security numbers had 43,989,217 lines, he wrote.
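
Figures like these come from a routine streaming pass over the dump. A minimal sketch of how such counts are computed (the filename and the assumption that the address is the first comma-separated field are hypothetical):

```python
# Streaming pass over a breach dump: count lines and unique email addresses
# without loading the whole file into memory.
unique_emails: set[str] = set()
total_lines = 0

with open("dump.csv", encoding="utf-8", errors="replace") as f:
    for line in f:
        total_lines += 1
        candidate = line.split(",")[0].strip().lower()
        if "@" in candidate:
            unique_emails.add(candidate)

# Note: ~49M strings still means a multi-gigabyte set; at that scale an
# external sort (e.g., `sort -u`) or an on-disk index is the safer tool.
print(f"{total_lines:,} lines, {len(unique_emails):,} unique email addresses")
```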

Hunt, who runs the “Have I Been Pwned” database that lets you check if your email was in a data breach, says the 49 million email addresses in the AT&T leak have been added to his database.

BleepingComputer covered the leak two weeks ago, writing that it is the same data involved in a 2021 incident in which a hacker shared samples of the data and attempted to sell the entire data set for $1 million. In 2021, AT&T told BleepingComputer that “the information that appeared in an Internet chat room does not appear to have come from our systems.”

AT&T maintained that position last month. “AT&T continues to tell BleepingComputer today that they still see no evidence of a breach in their systems and still believe that this data did not originate from them,” the news site’s March 17, 2024, article said.

AT&T says data may have come from itself or vendor

AT&T’s update on March 30 acknowledged that the data may have come from AT&T itself, but said it also may have come from an AT&T vendor:

AT&T has determined that AT&T data-specific fields were contained in a data set released on the dark web approximately two weeks ago. While AT&T has made this determination, it is not yet known whether the data in those fields originated from AT&T or one of its vendors. With respect to the balance of the data set, which includes personal information such as Social Security numbers, the source of the data is still being assessed.

“Currently, AT&T does not have evidence of unauthorized access to its systems resulting in exfiltration of the data set,” the company update also said. AT&T said it “is communicating proactively with those impacted and will be offering credit monitoring at our expense where applicable.”

AT&T said the passcodes that it reset are generally four digits and are different from AT&T account passwords. The passcodes are used when calling customer support, when managing an account at a retail store, and when signing in to the AT&T website “if you’ve chosen extra security.”

After overreaching TOS angers users, cloud provider Vultr backs off

“Clearly causing confusion” —

Terms seemed to grant an “irrevocable” right to commercialize any user content.

After backlash, the cloud provider Vultr has updated its terms to remove a clause that a Reddit user feared required customers to “fork over rights” to “anything” hosted on its platform.

The alarming clause seemed to grant Vultr a “non-exclusive, perpetual, irrevocable” license to “use and commercialize” any user content uploaded, posted, hosted, or stored on Vultr “in any way that Vultr deems appropriate, without any further consent” or compensation to users or third parties.

Here’s the full clause that was removed:

You hereby grant to Vultr a non-exclusive, perpetual, irrevocable, royalty-free, fully paid-up, worldwide license (including the right to sublicense through multiple tiers) to use, reproduce, process, adapt, publicly perform, publicly display, modify, prepare derivative works, publish, transmit and distribute each of your User Content, or any portion thereof, in any form, medium or distribution method now known or hereafter existing, known or developed, and otherwise use and commercialize the User Content in any way that Vultr deems appropriate, without any further consent, notice and/or compensation to you or to any third parties, for purposes of providing the Services to you.

In a statement provided to Ars, Vultr CEO J.J. Kardwell said that the terms were revised to “simplify and clarify” language causing confusion for some users.

“A Reddit post incorrectly took portions of our Terms of Service out of context, which only pertain to content provided to Vultr on our public mediums (community-related content on public forums, as an example) for purposes of rendering the needed services—e.g., publishing comments, posts, or ratings,” Kardwell said. “This is separate from a user’s own, private content that is deployed on Vultr services.”

It’s easy to see why the Reddit user was confused, as the previous terms did not clearly differentiate between a user’s public and “private content” in the paragraph where it was included. Kardwell told The Register that the old terms, which were drafted in 2021, were “clearly causing confusion for some portion of users” and were updated because Vultr recognized “that the average user doesn’t have a law degree.”

According to Kardwell, the part of the removed clause that “ends with ‘for purposes of providing the Services to you'” was “intended to make it clear that any rights referenced are solely for the purposes of providing the Services to you.” Kevin Cochrane, Vultr’s chief marketing officer, told Ars that users were intended to scroll down to understand that the line only applied to community content described in a section labeled “content that you make publicly available.” He said that the removed clause was necessary in 2021 when Vultr provided forums and collected ratings, but that the clause could be stripped now because “we don’t actually use” that kind of community content “any longer.”

“We’re very focused on being responsive to the community and the concerns people have, and we believe the strongest thing we can do to demonstrate that there is no bad intent here is to remove it,” Kardwell told The Register.

A plain read of the terms without scrolling seemed to substantiate the Reddit user’s worst fears that “it’s possible Vultr may want the expansive license grant to do AI/Machine Learning based on the data they host. Or maybe they could mine database contents to resell [personally identifiable information]. Given the (perpetual!) license, there’s not really any limit to what they might do. They could even clone someone’s app and sell their own rebranded version, and they’d be legally in the clear.”

The user claimed to have been locked out of their Vultr account for five days after refusing to agree to the terms, with Vultr’s support team seemingly providing little recourse to migrate data to a new cloud provider.

“Migrating all my servers and DNS without being able to log in to my account is going to be both a headache and error prone,” the Reddit user wrote. “I feel like they’re holding my business hostage and extorting me into accepting a license I would never consent to under duress.”

Ars was not able to reach the Reddit user to see if Vultr removing the line from the terms has resolved the issue. Other users on the thread claimed that they had terminated their Vultr accounts over the controversy. Cochrane told Ars that Vultr had been contacted by many customers over the past two days and had no way to identify the Reddit user to confirm whether they had terminated their account. Cochrane said the support team was actively reaching out to users to verify whether their complaints stemmed from discomfort with the previous terms.

In his statement, Kardwell reiterated that Vultr “customers own 100 percent of their content,” clarifying that Vultr “has never claimed any rights to, used, accessed, nor allowed access to or shared” user content, “other than as may be required by law or for security purposes.”

He also confirmed that Vultr would be conducting a “full review” of its terms and publishing another update “soon.” Kardwell told The Register that the most recent update to its terms that led the Reddit user to call out the company was “actually spurred by unrelated Microsoft licensing changes,” promising that Vultr has no plans to use or commercialize user data.

“We do not use user data,” Kardwell told The Register. “We never have, and we never will. We take privacy and security very seriously. It’s at the core of what we do globally.”

NYC’s government chatbot is lying about city laws and regulations

Close enough for government work? —

You can be evicted for not paying rent, despite what the “MyCity” chatbot says.

Has a government employee checked all those zeroes and ones floating above the skyline?

If you follow generative AI news at all, you’re probably familiar with LLM chatbots’ tendency to “confabulate” incorrect information while presenting that information as authoritatively true. That tendency seems poised to cause some serious problems now that a chatbot run by the New York City government is making up incorrect answers to some important questions of local law and municipal policy.

NYC’s “MyCity” ChatBot launched as a “pilot” program last October. The announcement touted the ChatBot as a way for business owners to “save … time and money by instantly providing them with actionable and trusted information from more than 2,000 NYC Business webpages and articles on topics such as compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines.”

But a new report from The Markup and local nonprofit news site The City found the MyCity chatbot giving dangerously wrong information about some pretty basic city policies. To cite just one example, the bot said that NYC buildings “are not required to accept Section 8 vouchers,” when an NYC government info page says clearly that Section 8 housing subsidies are one of many lawful sources of income that landlords are required to accept without discrimination. The Markup also received incorrect information in response to chatbot queries regarding worker pay and work hour regulations, as well as industry-specific information like funeral home pricing.

Welcome news for people who think the rent is too damn high, courtesy of the MyCity chatbot.

Further testing from Bluesky user Kathryn Tewson shows the MyCity chatbot giving some dangerously wrong answers regarding the treatment of workplace whistleblowers, as well as some hilariously bad answers regarding the need to pay rent.

This is going to keep happening

The result isn’t too surprising if you dig into the token-based predictive models that power these kinds of chatbots. MyCity’s Microsoft Azure-powered chatbot uses a complex process of statistical associations across millions of tokens to essentially guess at the most likely next word in any given sequence, without any real understanding of the underlying information being conveyed.
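
To make that failure mode concrete, here is a toy sketch of the next-word guessing step. The probability table is invented for illustration; a real LLM computes a distribution like it, over tens of thousands of tokens, from statistical associations learned in training:

```python
import random

# Hypothetical model output for a context like:
# "Buildings in NYC are ___ to accept Section 8 vouchers"
next_token_probs = {
    "required": 0.40,
    "allowed": 0.30,
    "not": 0.30,  # fluent, but leads to a wrong answer about the law
}

def sample_next_token(probs: dict[str, float]) -> str:
    # Pick the next token in proportion to its assigned probability.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Nothing in this procedure consults the actual law: a wrong continuation is
# emitted whenever the learned distribution happens to favor it.
print(sample_next_token(next_token_probs))
```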

That can cause problems when a single factual answer to a question might not be reflected precisely in the training data. In fact, The Markup said that at least one of its tests resulted in the correct answer on the same query about accepting Section 8 housing vouchers (even as “ten separate Markup staffers” got the incorrect answer when repeating the same question).

The MyCity Chatbot—which is prominently labeled as a “Beta” product—tells users who bother to read the warnings that it “may occasionally produce incorrect, harmful or biased content” and that users should “not rely on its responses as a substitute for professional advice.” But the page also states front and center that it is “trained to provide you official NYC Business information” and is being sold as a way “to help business owners navigate government.”

Andrew Rigie, executive director of the NYC Hospitality Alliance, told The Markup that he had encountered inaccuracies from the bot himself and had received reports of the same from at least one local business owner. But NYC Office of Technology and Innovation Spokesperson Leslie Brown told The Markup that the bot “has already provided thousands of people with timely, accurate answers” and that “we will continue to focus on upgrading this tool so that we can better support small businesses across the city.”

NYC Mayor Eric Adams touts the MyCity chatbot in an October announcement event.

The Markup’s report highlights the danger of governments and corporations rolling out chatbots to the public before their accuracy and reliability have been fully vetted. Last month, a court forced Air Canada to honor a fraudulent refund policy invented by a chatbot available on its website. A recent Washington Post report found that chatbots integrated into major tax preparation software provide “random, misleading, or inaccurate … answers” to many tax queries. And some crafty prompt engineers have reportedly been able to trick car dealership chatbots into accepting a “legally binding offer – no take backsies” for a $1 car.

These kinds of issues are already leading some companies away from more generalized LLM-powered chatbots and toward Retrieval-Augmented Generation (RAG) systems, which draw their answers only from a curated set of relevant documents. That kind of focus could become that much more important if the FTC is successful in its efforts to make chatbots liable for “false, misleading, or disparaging” information.
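
A bare-bones illustration of the RAG pattern: retrieve vetted passages first, then have the model answer only from them. The documents, the word-overlap scoring, and the `llm` stub are all hypothetical stand-ins; production systems use dense vector embeddings and a real model API:

```python
DOCUMENTS = [
    "Landlords must accept Section 8 vouchers as a lawful source of income.",
    "Tenants can be evicted for nonpayment of rent after due process.",
]

def score(doc: str, query: str) -> int:
    # Toy relevance score: shared lowercase words (real systems use cosine
    # similarity over embedding vectors).
    return len(set(doc.lower().split()) & set(query.lower().split()))

def retrieve(query: str, k: int = 1) -> list[str]:
    return sorted(DOCUMENTS, key=lambda d: score(d, query), reverse=True)[:k]

def llm(prompt: str) -> str:
    # Stand-in for the generation call; a real LLM API would go here.
    return f"[model answers using only this prompt]\n{prompt}"

def answer(query: str) -> str:
    # Grounding the prompt in retrieved text constrains what the model can say.
    context = "\n".join(retrieve(query))
    return llm(f"Answer ONLY from the context below.\nContext: {context}\nQ: {query}")

print(answer("Are buildings required to accept Section 8 vouchers?"))
```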

Jails banned visits in “quid pro quo” with prison phone companies, lawsuits say

Two lawsuits filed by a civil rights group allege that county jails in Michigan banned in-person visits in order to maximize revenue from voice and video calls as part of a “quid pro quo kickback scheme” with prison phone companies.

Civil Rights Corps filed the lawsuits on March 15 against the county governments, two county sheriffs, and two prison phone companies. The suits filed in county courts seek class-action status on behalf of people unable to visit family members detained in the local jails, including children who have been unable to visit their parents.

Defendants in one lawsuit include St. Clair County Sheriff Mat King, prison phone company Securus Technologies, and Securus owner Platinum Equity. In the other lawsuit, defendants include Genesee County Sheriff Christopher Swanson and prison phone company ViaPath Technologies. ViaPath was formerly called Global Tel*Link Corporation (GTL), and the lawsuit primarily refers to the company as GTL.

Each year, thousands of people spend months in the county jails, the lawsuit said. Many of the detainees have not been convicted of any crime and are awaiting trial; if they are convicted and receive long sentences, they are transferred to the Michigan Department of Corrections.

The named plaintiffs in both cases include family members, including children identified by their initials.

“Hundreds of jails” eliminated visits

The Michigan counties are far from alone in implementing visitation bans, Civil Rights Corps said in a lawsuit announcement. “Across the United States, hundreds of jails have eliminated in-person family visits over the last decade,” the group said, adding:

Why has this happened? The answer highlights a profound flaw in how decisions too often get made in our legal system: for-profit jail telecom companies realized that they could earn more profit from phone and video calls if jails eliminated free in-person visits for families. So the companies offered sheriffs and county jails across the country a deal: if you eliminate family visits, we’ll give you a cut of the increased profits from the larger number of calls. This led to a wave across the country, as local jails sought to supplement their budgets with hundreds of millions of dollars in cash from some of the poorest families in our society.

St. Clair County implemented its family visitation ban in September 2017, “prohibiting people from visiting their family members detained inside the county jail,” Civil Rights Corps alleged. This “decision was part of a quid pro quo kickback scheme with Securus Technologies, a for-profit company that contracts with jails to charge the families of incarcerated persons exorbitant rates to communicate with one another through ‘services’ such as low-quality phone and video calls,” the lawsuit said.

Under the contract, “Securus pays the County 50 percent of the $12.99 price tag for every 20-minute video call and 78 percent of the $0.21 per minute cost of every phone call,” the lawsuit said. The contract has “a guarantee that Securus would pay the County at least $190,000 each year,” the St. Clair County lawsuit said.
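
For a sense of the incentive the complaint describes, the arithmetic on those quoted terms works out as follows (a quick sketch; only the dollar figures and percentages come from the lawsuit):

```python
# Worked example using the contract figures quoted in the St. Clair County suit.
VIDEO_PRICE = 12.99        # per 20-minute video call
VIDEO_COUNTY_SHARE = 0.50  # county's cut of each video call
PHONE_RATE = 0.21          # per minute of phone call
PHONE_COUNTY_SHARE = 0.78  # county's cut of phone revenue

county_per_video = VIDEO_PRICE * VIDEO_COUNTY_SHARE     # $6.495 per video call
county_per_phone_min = PHONE_RATE * PHONE_COUNTY_SHARE  # ~$0.164 per minute

print(f"County take per 20-minute video call: ${county_per_video:.2f}")
print(f"County take per minute of phone call: ${county_per_phone_min:.3f}")
# Video calls needed to cover the $190,000 annual guarantee on their own:
print(f"{190_000 / county_per_video:,.0f} video calls per year")
```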

Biden orders every US agency to appoint a chief AI officer

Mission control —

Federal agencies rush to appoint chief AI officers with “significant expertise.”

The White House has announced the “first government-wide policy to mitigate risks of artificial intelligence (AI) and harness its benefits.” To coordinate these efforts, every federal agency must appoint a chief AI officer with “significant expertise in AI.”

Some agencies have already appointed chief AI officers, but any agency that has not must appoint a senior official over the next 60 days. If an official already appointed as a chief AI officer does not have the necessary authority to coordinate AI use in the agency, they must be granted additional authority or else a new chief AI officer must be named.

Ideal candidates might include chief information officers, chief data officers, or chief technology officers, the Office of Management and Budget (OMB) policy said.

As chief AI officers, appointees will serve as senior advisers on AI initiatives, monitoring and inventorying all agency uses of AI. They must conduct risk assessments to consider whether any AI uses are impacting “safety, security, civil rights, civil liberties, privacy, democratic values, human rights, equal opportunities, worker well-being, access to critical resources and services, agency trust and credibility, and market competition,” OMB said.

Perhaps most urgently, by December 1, the officers must correct all non-compliant AI uses in government, unless an extension of up to one year is granted.

The chief AI officers will seemingly enjoy a lot of power and oversight over how the government uses AI. It’s up to the chief AI officers to develop a plan to comply with minimum safety standards and to work with chief financial and human resource officers to develop the necessary budgets and workforces to use AI to further each agency’s mission and ensure “equitable outcomes,” OMB said. Here’s a brief summary of OMB’s ideals:

Agencies are encouraged to prioritize AI development and adoption for the public good and where the technology can be helpful in understanding and tackling large societal challenges, such as using AI to improve the accessibility of government services, reduce food insecurity, address the climate crisis, improve public health, advance equitable outcomes, protect democracy and human rights, and grow economic competitiveness in a way that benefits people across the United States.

Among the chief AI officer’s primary responsibilities is determining what AI uses might impact the safety or rights of US citizens. They’ll do this by assessing AI impacts, conducting real-world tests, independently evaluating AI, regularly evaluating risks, properly training staff, providing additional human oversight where necessary, and giving public notice of any AI use that could have a “significant impact on rights or safety,” OMB said.

OMB breaks down several AI uses that could impact safety, including controlling “safety-critical functions” within everything from emergency services to food-safety mechanisms to systems controlling nuclear reactors. Using AI to maintain election integrity could be safety-impacting, too, as could using AI to move industrial waste, control health insurance costs, or detect the “presence of dangerous weapons.”

Uses of AI presumed to be rights-impacting include censoring protected speech and a wide range of law enforcement efforts, such as predicting crimes, sketching faces, or using license plate readers to track personal vehicles in public spaces. Other rights-impacting AI uses include “risk assessments related to immigration,” “replicating a person’s likeness or voice without express consent,” or detecting students cheating.

Chief AI officers will ultimately decide if any AI use is safety- or rights-impacting and must adhere to OMB’s minimum standards for responsible AI use. Once a determination is made, the officers will “centrally track” the determinations, informing OMB of any major changes to “conditions or context in which the AI is used.” The officers will also regularly convene “a new Chief AI Officer Council to coordinate” efforts and share innovations government-wide.

As agencies advance AI uses—which the White House says is critical to “strengthen AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more”—chief AI officers will become the public-facing figures accountable for decisions made. In that role, the officer must consult with the public and incorporate “feedback from affected communities,” notify “negatively affected individuals” of new AI uses, and maintain options to opt-out of “AI-enabled decisions,” OMB said.

However, OMB noted that chief AI officers also have the power to waive opt-out options “if they can demonstrate that a human alternative would result in a service that is less fair (e.g., produces a disparate impact on protected classes) or if an opt-out would impose undue hardship on the agency.”

Starlink mobile plans hit snag as FCC dismisses SpaceX spectrum application

A Starlink user terminal during winter.

Starlink’s mobile ambitions were dealt at least a temporary blow yesterday when the Federal Communications Commission dismissed SpaceX’s application to use several spectrum bands for mobile service.

SpaceX is seeking approval to use up to 7,500 second-generation Starlink satellites with spectrum in the 1.6 GHz, 2 GHz, and 2.4 GHz bands. SpaceX could still end up getting what it wants but will have to go through new rulemaking processes in which the FCC will evaluate whether the spectrum bands can handle the system without affecting existing users.

The FCC Space Bureau’s ruling dismissed the SpaceX application yesterday as “unacceptable for filing.” The application was filed over a year ago.

The FCC said the SpaceX requests “do not substantially comply with Commission requirements established in rulemaking proceedings which determined that the 1.6/2.4 GHz and 2 GHz bands are not available for additional MSS [mobile-satellite service] applications.”

But the FCC yesterday also issued two public notices seeking comment on SpaceX petitions to revise the commission’s spectrum-sharing rules for the bands. Dish Network and Globalstar oppose the SpaceX requests, and SpaceX will have to prove to the FCC that its plan won’t cause harmful interference to other systems.

T-Mobile deal still on, but SpaceX wants more capacity

The FCC order won’t stop SpaceX’s partnership with T-Mobile, which uses T-Mobile’s licensed spectrum in the 1.9 GHz band. In January, Starlink demonstrated the first text messages sent between T-Mobile phones via one of Starlink’s low-Earth orbit satellites. Texting service for T-Mobile users is expected sometime during 2024 with voice and data service beginning later.

But SpaceX wants to use more spectrum bands to increase capacity in the US and elsewhere. SpaceX has Starlink partnerships with several carriers outside the US.

SpaceX filed its application in February 2023. “Granting this application will enable SpaceX to augment its MSS capabilities and leverage its next-generation satellite constellation to provide increased capacity, reduced latency, and broader service coverage for mobile users across the United States and the world, including those users underserved or unserved by existing networks,” the application said.

Dish Network owner EchoStar is angry that the FCC is still entertaining SpaceX’s request for the 2 GHz band. “The FCC should immediately dismiss SpaceX’s petition for rulemaking without seeking comment, because the mere action of seeking comment would provide it with undeserved credibility and threaten the certainty that has allowed EchoStar to innovate in this band leading to significant public interest benefits,” the company told the FCC yesterday.

Facebook secretly spied on Snapchat usage to confuse advertisers, court docs say

“I can’t think of a good argument for why this is okay” —

Zuckerberg told execs to “figure out” how to spy on encrypted Snapchat traffic.

Unsealed court documents have revealed more details about a secret Facebook project initially called “Ghostbusters,” designed to sneakily access encrypted Snapchat usage data to give Facebook a leg up on its rival, just when Snapchat was experiencing rapid growth in 2016.

The documents were filed in a class-action lawsuit from consumers and advertisers, accusing Meta of anticompetitive behavior that blocks rivals from competing in the social media ads market.

“Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted, we have no analytics about them,” Facebook CEO Mark Zuckerberg (who has since rebranded his company as Meta) wrote in a 2016 email to Javier Olivan.

“Given how quickly they’re growing, it seems important to figure out a new way to get reliable analytics about them,” Zuckerberg continued. “Perhaps we need to do panels or write custom software. You should figure out how to do this.”

At the time, Olivan was Facebook’s head of growth, but now he’s Meta’s chief operating officer. He responded to Zuckerberg’s email saying that he would have the team from Onavo—a controversial traffic-analysis app acquired by Facebook in 2013—look into it.

Olivan told the Onavo team that he needed “out of the box thinking” to satisfy Zuckerberg’s request. He “suggested potentially paying users to ‘let us install a really heavy piece of software'” to intercept users’ Snapchat data, a court document shows.

What the Onavo team eventually came up with was a project internally known as “Ghostbusters,” an obvious reference to Snapchat’s logo featuring a white ghost. Later, as the project grew to include other Facebook rivals, including YouTube and Amazon, the project was called the “In-App Action Panel” (IAAP).

The IAAP program’s purpose was to gather granular insights into users’ engagement with rival apps to help Facebook develop products as needed to stay ahead of competitors. For example, two months after Zuckerberg’s 2016 email, Meta launched Stories, a Snapchat copycat feature, on Instagram, which the Motley Fool noted rapidly became a key ad revenue source for Meta.

In an email to Olivan, the Onavo team described the “technical solution” devised to help Zuckerberg figure out how to get reliable analytics about Snapchat users. It worked by “develop[ing] ‘kits’ that can be installed on iOS and Android that intercept traffic for specific sub-domains, allowing us to read what would otherwise be encrypted traffic so we can measure in-app usage,” the Onavo team said.

Olivan was told that these so-called “kits” used a “man-in-the-middle” attack typically employed by hackers to secretly intercept data passed between two parties. Users were recruited by third parties who distributed the kits “under their own branding” so that they wouldn’t connect the kits to Onavo unless they used a specialized tool like Wireshark to analyze the kits. TechCrunch reported in 2019 that sometimes teens were paid to install these kits. After that report, Facebook promptly shut down the project.
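
The filings describe the interception technique only at a high level. Its legitimate, well-documented form is a debugging proxy such as the open-source mitmproxy: once a device trusts the proxy’s root certificate, the proxy terminates TLS itself and can read requests to chosen sub-domains in plaintext. A rough sketch of such an addon, for inspecting one’s own traffic (the watched host is hypothetical):

```python
# Run against your own device's traffic with: mitmproxy -s intercept_log.py
# (requires the mitmproxy root certificate to be installed on that device).
from mitmproxy import http

WATCHED_HOSTS = ("analytics.example-app.com",)  # hypothetical sub-domain

def request(flow: http.HTTPFlow) -> None:
    # Called for every decrypted HTTP(S) request passing through the proxy.
    if flow.request.pretty_host.endswith(WATCHED_HOSTS):
        print(f"{flow.request.method} {flow.request.pretty_url}")
```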

This “man-in-the-middle” tactic, consumers and advertisers suing Meta have alleged, “was not merely anticompetitive, but criminal,” seemingly violating the Wiretap Act. It was used to snoop on Snapchat starting in 2016, on YouTube from 2017 to 2018, and on Amazon in 2018, relying on creating “fake digital certificates to impersonate trusted Snapchat, YouTube, and Amazon analytics servers to redirect and decrypt secure traffic from those apps for Facebook’s strategic analysis.”

Ars could not reach Snapchat, Google, or Amazon for comment.

Facebook allegedly sought to confuse advertisers

Not everyone at Facebook supported the IAAP program. “The company’s highest-level engineering executives thought the IAAP Program was a legal, technical, and security nightmare,” another court document said.

Pedro Canahuati, then-head of security engineering, warned that incentivizing users to install the kits did not necessarily mean that users understood what they were consenting to.

“I can’t think of a good argument for why this is okay,” Canahuati said. “No security person is ever comfortable with this, no matter what consent we get from the general public. The general public just doesn’t know how this stuff works.”

Mike Schroepfer, then-chief technology officer, argued that Facebook wouldn’t want rivals to employ a similar program analyzing their encrypted user data.

“If we ever found out that someone had figured out a way to break encryption on [WhatsApp] we would be really upset,” Schroepfer said.

While the unsealed emails detailing the project have recently raised eyebrows, Meta’s spokesperson told Ars that “there is nothing new here—this issue was reported on years ago. The plaintiffs’ claims are baseless and completely irrelevant to the case.”

According to Business Insider, advertisers suing said that Meta never disclosed its use of Onavo “kits” to “intercept rivals’ analytics traffic.” This is seemingly relevant to their case alleging anticompetitive behavior in the social media ads market, because Facebook’s conduct, allegedly breaking wiretapping laws, afforded Facebook an opportunity to raise its ad rates “beyond what it could have charged in a competitive market.”

Since the documents were unsealed, Meta has responded with a court filing that said: “Snapchat’s own witness on advertising confirmed that Snap cannot ‘identify a single ad sale that [it] lost from Meta’s use of user research products,’ does not know whether other competitors collected similar information, and does not know whether any of Meta’s research provided Meta with a competitive advantage.”

This conflicts with testimony from a Snapchat executive, who alleged that the project “hamper[ed] Snap’s ability to sell ads” by causing “advertisers to not have a clear narrative differentiating Snapchat from Facebook and Instagram.” Both internally and externally, “the intelligence Meta gleaned from this project was described” as “devastating to Snapchat’s ads business,” a court filing said.

SCOTUS mifepristone case: Justices focus on anti-abortion groups’ legal standing

Demonstrators participate in an abortion-rights rally outside the Supreme Court as the justices hear oral arguments in US Food and Drug Administration v. Alliance for Hippocratic Medicine on March 26, 2024, in Washington, DC.

The US Supreme Court on Tuesday heard arguments in a case seeking to limit access to the abortion and miscarriage drug mifepristone, with a majority of justices expressing skepticism that the anti-abortion groups that brought the case have the legal standing to do so.

The case threatens to dramatically alter access to a drug that has been safely used for decades and, according to the Guttmacher Institute, was used in 63 percent of abortions documented in the health care system in 2023. But, it also has sweeping implications for the Food and Drug Administration’s authority over drugs, marking the first time that courts have second-guessed the agency’s expert scientific analysis and moved to restrict access to an FDA-approved drug.

As such, the case has rattled health experts, reproductive health care advocates, the FDA, and the pharmaceutical industry alike. But, based on the line of questioning in today’s oral arguments, they have reason to breathe a sigh of relief.

Standing

The case was initially filed in 2022 by a group of anti-abortion organizations led by the Alliance for Hippocratic Medicine. They collectively claimed that the FDA’s approval of mifepristone in 2000 was unlawful, as were FDA actions in 2016 and 2021 that eased access to the drug, allowing for it to be prescribed via telemedicine and dispensed through the mail. The anti-abortion groups justified bringing the lawsuit by claiming that the doctors in their ranks are harmed by the FDA’s actions because they are forced to treat girls and women seeking emergency medical care after taking mifepristone and experiencing complications.

The FDA and numerous medical organizations have emphatically noted that mifepristone is extremely safe and the complications the lawsuit references are exceedingly rare. Serious side effects occur in less than 1 percent of patients, and major adverse events, including infection, blood loss, or hospitalization, occur in less than 0.3 percent, according to the American College of Obstetricians and Gynecologists. Deaths are almost non-existent.

Still, a conservative federal judge in Texas sided with the anti-abortion groups last year, revoking the FDA’s 2000 approval. A conservative panel of the Court of Appeals for the 5th Circuit in New Orleans then partially overturned that decision, allowing the FDA’s 2000 approval to stand but still finding the agency’s 2016 and 2021 actions unlawful. The ruling was frozen until the Supreme Court weighed in.

Today, many of the Supreme Court justices went back to the very beginning: the claimed scenario that the plaintiff doctors have been or will imminently be harmed by the FDA’s actions. At the outset of the hearing, Solicitor General Elizabeth Prelogar argued that the plaintiffs had not been harmed and that, even if they were, they already had federal protections and recourse. Any doctor who conscientiously objects to caring for a patient who has had an abortion already has federal protections that prevent them from being forced to provide that care, Prelogar argued. As such, hospitals have legal obligations and have set up contingency and staffing plans to avoid violating those doctors’ federal conscience protections.

Missouri AG sues Media Matters over its X research, demands donor names

Missouri Attorney General Andrew Bailey yesterday sued Media Matters in an attempt to protect Elon Musk and X from the nonprofit watchdog group’s investigations into hate speech on the social network. Bailey’s lawsuit claims that “Media Matters has used fraud to solicit donations from Missourians in order to trick advertisers into removing their advertisements from X, formerly Twitter, one of the last platforms dedicated to free speech in America.”

Bailey didn’t provide much detail on the alleged fraud but claimed that Media Matters is guilty of “fraudulent manipulation of data on X.com.” That’s apparently a reference to Media Matters reporting that X placed ads for major brands next to posts touting Hitler and Nazis. X has accused Media Matters of manipulating the site’s algorithm by endlessly scrolling and refreshing.

Bailey yesterday issued an investigative demand seeking names and addresses of all Media Matters donors who live in Missouri and a range of internal communications and documents regarding the group’s research on Musk and X. Bailey anticipates that Media Matters won’t provide the requested materials, so he filed the lawsuit asking Cole County Circuit Court for an order to enforce the investigative demand.

“Because Media Matters has refused such efforts in other states and made clear that it will refuse any such efforts, the Attorney General seeks an order… compelling Media Matters to comply with the CID [Civil Investigative Demand] within 20 days,” the lawsuit said.

Media Matters slams Musk and Missouri AG

Media Matters, which is separately fighting similar demands made by Texas, responded to Missouri’s legal action in a statement provided to Ars today.

“Far from the free speech advocate he claims to be, Elon Musk has actually intensified his efforts to undermine free speech by enlisting Republican attorneys general across the country to initiate meritless, expensive, and harassing investigations against Media Matters in an attempt to punish critics,” Media Matters President Angelo Carusone said. “This Missouri investigation is the latest in a transparent endeavor to squelch the First Amendment rights of researchers and reporters; it will have a chilling effect on news reporters.”

Musk thanked Bailey for filing the lawsuit in a post that said, “Media Matters is doing everything it can to undermine the First Amendment. Truly an evil organization.”

Bailey is seeking the names and addresses of all Media Matters donors from Missouri since January 1, 2023, and the amounts of each donation. He wants all promotional or marketing material sent to potential donors and documents showing how the donations were used.

Ads next to pro-Nazi content

Several of Bailey’s demands relate to the Media Matters article titled, “As Musk endorses antisemitic conspiracy theory, X has been placing ads for Apple, Bravo, IBM, Oracle, and Xfinity next to pro-Nazi content.” Bailey wants all “documents related to the article, or to the events described in the article.”

The Media Matters article displayed images of advertisements next to pro-Nazi posts. Musk previously sued Media Matters over the article, claiming the group “manipulated the algorithms governing the user experience on X to bypass safeguards and create images of X’s largest advertisers’ paid posts adjacent to racist, incendiary content.”

X said Media Matters did this by “endlessly scrolling and refreshing its unrepresentative, hand-selected feed, generating between 13 and 15 times more advertisements per hour than viewed by the average X user repeating this inauthentic activity until it finally received pages containing the result it wanted: controversial content next to X’s largest advertisers’ paid posts.”

X also sued the Center for Countering Digital Hate, but the lawsuit was thrown out by a federal judge yesterday.
