
Facebook let Netflix see user DMs, quit streaming to keep Netflix happy: Lawsuit

A promotional image for Sorry for Your Loss, which was a Facebook Watch original scripted series.

Last April, Meta revealed that it would no longer support original shows, like Jada Pinkett Smith’s Red Table Talk, on Facebook Watch. Meta’s streaming business, once viewed as competition for the likes of YouTube and Netflix, is effectively dead now; Facebook doesn’t produce original series, and Facebook Watch is no longer available as a video-streaming app.

The streaming business’s demise had seemed tied to broader cost cuts at Meta, which also included layoffs. However, recently unsealed court documents in an antitrust suit against Meta [PDF] claim that Meta squashed its streaming ambitions to appease one of its biggest ad customers: Netflix.

Facebook allegedly gave Netflix creepy privileges

As spotted by Gizmodo, a letter was filed on April 14 in a class-action antitrust suit brought by Meta customers accusing the company of anticompetitive practices that harm social media competition and consumers. The letter, made public Saturday, asks the court to compel Reed Hastings, Netflix’s founder and former CEO, to respond to a subpoena for documents that plaintiffs claim are relevant to the case. The original complaint, filed in December 2020 [PDF], doesn’t mention Netflix beyond stating that Facebook “secretly signed Whitelist and Data sharing agreements” with Netflix and “dozens” of other third-party app developers. The case is still ongoing.

The letter alleges that Netflix’s relationship with Facebook was remarkably strong due to the former’s ad spend with the latter and that Hastings directed “negotiations to end competition in streaming video” from Facebook.

One of the first questions that may come to mind is why a company like Facebook would allow Netflix to influence such a major business decision. The litigation claims the companies formed a lucrative business relationship that included Facebook allegedly giving Netflix access to Facebook users’ private messages:

By 2013, Netflix had begun entering into a series of “Facebook Extended API” agreements, including a so-called “Inbox API” agreement that allowed Netflix programmatic access to Facebook’s users’ private message inboxes, in exchange for which Netflix would “provide to FB a written report every two weeks that shows daily counts of recommendation sends and recipient clicks by interface, initiation surface, and/or implementation variant (e.g., Facebook vs. non-Facebook recommendation recipients).” … In August 2013, Facebook provided Netflix with access to its so-called “Titan API,” a private API that allowed a whitelisted partner to access, among other things, Facebook users’ “messaging app and non-app friends.”

Meta said it rolled out end-to-end encryption “for all personal chats and calls on Messenger and Facebook” in December. And in 2018, Facebook told Vox that it doesn’t use private messages for ad targeting. But a few months later, The New York Times, citing “hundreds of pages of Facebook documents,” reported that Facebook “gave Netflix and Spotify the ability to read Facebook users’ private messages.”

Meta didn’t respond to Ars Technica’s request for comment. The company told Gizmodo that it has standard agreements with Netflix currently but didn’t answer the publication’s specific questions.


Facebook secretly spied on Snapchat usage to confuse advertisers, court docs say

“I can’t think of a good argument for why this is okay” —

Zuckerberg told execs to “figure out” how to spy on encrypted Snapchat traffic.


Unsealed court documents have revealed more details about a secret Facebook project initially called “Ghostbusters,” designed to sneakily access encrypted Snapchat usage data to give Facebook a leg up on its rival, just when Snapchat was experiencing rapid growth in 2016.

The documents were filed in a class-action lawsuit from consumers and advertisers, accusing Meta of anticompetitive behavior that blocks rivals from competing in the social media ads market.

“Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted, we have no analytics about them,” Facebook CEO Mark Zuckerberg (who has since rebranded his company as Meta) wrote in a 2016 email to Javier Olivan.

“Given how quickly they’re growing, it seems important to figure out a new way to get reliable analytics about them,” Zuckerberg continued. “Perhaps we need to do panels or write custom software. You should figure out how to do this.”

At the time, Olivan was Facebook’s head of growth, but now he’s Meta’s chief operating officer. He responded to Zuckerberg’s email saying that he would have the team from Onavo—a controversial traffic-analysis app acquired by Facebook in 2013—look into it.

Olivan told the Onavo team that he needed “out of the box thinking” to satisfy Zuckerberg’s request. He “suggested potentially paying users to ‘let us install a really heavy piece of software'” to intercept users’ Snapchat data, a court document shows.

What the Onavo team eventually came up with was a project internally known as “Ghostbusters,” an obvious reference to Snapchat’s logo featuring a white ghost. Later, as the project grew to include other Facebook rivals, including YouTube and Amazon, the project was called the “In-App Action Panel” (IAAP).

The IAAP program’s purpose was to gather granular insights into users’ engagement with rival apps to help Facebook develop products as needed to stay ahead of competitors. For example, two months after Zuckerberg’s 2016 email, Meta launched Stories, a Snapchat copycat feature, on Instagram, which the Motley Fool noted rapidly became a key ad revenue source for Meta.

In an email to Olivan, the Onavo team described the “technical solution” devised to help Zuckerberg figure out how to get reliable analytics about Snapchat users. It worked by “develop[ing] ‘kits’ that can be installed on iOS and Android that intercept traffic for specific sub-domains, allowing us to read what would otherwise be encrypted traffic so we can measure in-app usage,” the Onavo team said.

Olivan was told that these so-called “kits” used a “man-in-the-middle” attack, a technique typically employed by hackers to secretly intercept data passed between two parties. Users were recruited by third parties who distributed the kits “under their own branding” so that users wouldn’t connect the kits to Onavo unless they analyzed the traffic with a specialized tool like Wireshark. TechCrunch reported in 2019 that sometimes teens were paid to install these kits. After that report, Facebook promptly shut down the project.

This “man-in-the-middle” tactic, consumers and advertisers suing Meta have alleged, “was not merely anticompetitive, but criminal,” seemingly violating the Wiretap Act. It was used to snoop on Snapchat starting in 2016, on YouTube from 2017 to 2018, and on Amazon in 2018, relying on creating “fake digital certificates to impersonate trusted Snapchat, YouTube, and Amazon analytics servers to redirect and decrypt secure traffic from those apps for Facebook’s strategic analysis.”
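The general shape of a man-in-the-middle interception can be sketched with a toy relay over plain TCP (a hypothetical illustration, not the Onavo code; the real kits additionally installed a trusted root certificate on the device so they could decrypt TLS traffic). The client believes it is talking to the server directly, while the relay in the middle records everything it forwards in both directions:

```python
import socket
import threading

captured = []  # traffic the relay ("man in the middle") gets to read

# Pre-bind both listeners in the main thread to avoid startup races.
origin_lsock = socket.socket()
origin_lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
origin_lsock.bind(("127.0.0.1", 0))
origin_lsock.listen(1)
origin_port = origin_lsock.getsockname()[1]

relay_lsock = socket.socket()
relay_lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
relay_lsock.bind(("127.0.0.1", 0))
relay_lsock.listen(1)
relay_port = relay_lsock.getsockname()[1]

def origin_server():
    # The real service the client believes it is talking to directly.
    conn, _ = origin_lsock.accept()
    conn.sendall(b"ack:" + conn.recv(1024))
    conn.close()

def mitm_relay():
    # Sits between client and server, logging everything it forwards.
    conn, _ = relay_lsock.accept()
    upstream = socket.create_connection(("127.0.0.1", origin_port))
    request = conn.recv(1024)
    captured.append(request)       # interception of the client's traffic
    upstream.sendall(request)
    response = upstream.recv(1024)
    captured.append(response)      # interception of the server's reply
    conn.sendall(response)
    conn.close()
    upstream.close()

threads = [threading.Thread(target=origin_server), threading.Thread(target=mitm_relay)]
for t in threads:
    t.start()

# The client connects to what it thinks is the server (really the relay).
client = socket.create_connection(("127.0.0.1", relay_port))
client.sendall(b"GET /usage-stats")
reply = client.recv(1024)
client.close()
for t in threads:
    t.join()
```

In this sketch the relay sees the request and response in the clear; with encrypted apps like Snapchat, the same position in the network is useless unless the attacker can also terminate TLS, which is why the alleged scheme relied on fake certificates trusted by the device.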

Ars could not reach Snapchat, Google, or Amazon for comment.

Facebook allegedly sought to confuse advertisers

Not everyone at Facebook supported the IAAP program. “The company’s highest-level engineering executives thought the IAAP Program was a legal, technical, and security nightmare,” another court document said.

Pedro Canahuati, then-head of security engineering, warned that incentivizing users to install the kits did not necessarily mean that users understood what they were consenting to.

“I can’t think of a good argument for why this is okay,” Canahuati said. “No security person is ever comfortable with this, no matter what consent we get from the general public. The general public just doesn’t know how this stuff works.”

Mike Schroepfer, then-chief technology officer, argued that Facebook wouldn’t want rivals to employ a similar program analyzing their encrypted user data.

“If we ever found out that someone had figured out a way to break encryption on [WhatsApp] we would be really upset,” Schroepfer said.

While the unsealed emails detailing the project have recently raised eyebrows, Meta’s spokesperson told Ars that “there is nothing new here—this issue was reported on years ago. The plaintiffs’ claims are baseless and completely irrelevant to the case.”

According to Business Insider, advertisers suing said that Meta never disclosed its use of Onavo “kits” to “intercept rivals’ analytics traffic.” This is seemingly relevant to their case alleging anticompetitive behavior in the social media ads market, because Facebook’s conduct, allegedly breaking wiretapping laws, afforded Facebook an opportunity to raise its ad rates “beyond what it could have charged in a competitive market.”

Since the documents were unsealed, Meta has responded with a court filing that said: “Snapchat’s own witness on advertising confirmed that Snap cannot ‘identify a single ad sale that [it] lost from Meta’s use of user research products,’ does not know whether other competitors collected similar information, and does not know whether any of Meta’s research provided Meta with a competitive advantage.”

This conflicts with testimony from a Snapchat executive, who alleged that the project “hamper[ed] Snap’s ability to sell ads” by causing “advertisers to not have a clear narrative differentiating Snapchat from Facebook and Instagram.” Both internally and externally, “the intelligence Meta gleaned from this project was described” as “devastating to Snapchat’s ads business,” a court filing said.


Apple, Google, and Meta are failing DMA compliance, EU suspects

EU Commissioner for Internal Market Thierry Breton talks to media about non-compliance investigations against Google, Apple, and Meta under the Digital Markets Act (DMA).

Not even three weeks after the European Union’s Digital Markets Act (DMA) took effect, the European Commission (EC) announced Monday that it is already probing three out of six gatekeepers—Apple, Google, and Meta—for suspected non-compliance.

Apple will need to prove that its app store changes, and the options users already have to easily swap out default settings, are sufficient to comply with the DMA.

Similarly, Google’s app store rules will be probed, as well as any potentially shady practices unfairly preferencing its own services—like Google Shopping and Hotels—in search results.

Finally, Meta’s “Subscription for No Ads” option—allowing Facebook and Instagram users to opt out of personalized ad targeting for a monthly fee—may not fly under the DMA. Even if Meta follows through on its recent offer to slash these fees by nearly 50 percent, the model could be deemed non-compliant.

“The DMA is very clear: gatekeepers must obtain users’ consent to use their personal data across different services,” the EC’s commissioner for internal market, Thierry Breton, said Monday. “And this consent must be free!”

In total, the EC announced five investigations: two against Apple, two against Google, and one against Meta.

“We suspect that the suggested solutions put forward by the three companies do not fully comply with the DMA,” antitrust chief Margrethe Vestager said, ordering companies to “retain certain documents” viewed as critical to assessing evidence in the probe.

The EC’s investigations are expected to conclude within one year. If tech companies are found non-compliant, they risk fines of up to 10 percent of total worldwide turnover. Any repeat violations could spike fines to 20 percent.

“Moreover, in case of systematic infringements, the Commission may also adopt additional remedies, such as obliging a gatekeeper to sell a business or parts of it or banning the gatekeeper from acquisitions of additional services related to the systemic non-compliance,” the EC’s announcement said.

In addition to probes into Apple, Google, and Meta, the EC will scrutinize Apple’s fee structure for app store alternatives and send retention orders to Amazon and Microsoft. That makes ByteDance the only gatekeeper so far to escape “investigatory steps” as the EU fights to enforce the DMA’s strict standards. (ByteDance continues to contest its gatekeeper status.)

“These are the cases where we already have concrete evidence of possible non-compliance,” Breton said. “And this in less than 20 days of DMA implementation. But our monitoring and investigative work of course doesn’t stop here. We may have to open other non-compliance cases soon.”

Google and Apple have both issued statements defending their current plans for DMA compliance.

“To comply with the Digital Markets Act, we have made significant changes to the way our services operate in Europe,” Google’s competition director Oliver Bethell told Ars, promising to “continue to defend our approach in the coming months.”

“We’re confident our plan complies with the DMA, and we’ll continue to constructively engage with the European Commission as they conduct their investigations,” Apple’s spokesperson told Ars. “Teams across Apple have created a wide range of new developer capabilities, features, and tools to comply with the regulation. At the same time, we’ve introduced protections to help reduce new risks to the privacy, quality, and security of our EU users’ experience. Throughout, we’ve demonstrated flexibility and responsiveness to the European Commission and developers, listening and incorporating their feedback.”

A Meta spokesperson told Ars that Meta “designed Subscription for No Ads to address several overlapping regulatory obligations, including the DMA,” promising to comply with the DMA while arguing that “subscriptions as an alternative to advertising are a well-established business model across many industries.”

The EC’s announcement came after all designated gatekeepers were required to submit DMA compliance reports and to participate in scheduled public workshops discussing DMA compliance. Those workshops conclude tomorrow with Microsoft and appear to be partly driving the EC’s decision to probe Apple, Google, and Meta.

“Stakeholders provided feedback on the compliance solutions offered,” Vestager said. “Their feedback tells us that certain compliance measures fail to achieve their objectives and fall short of expectations.”

Apple and Google app stores probed

Under the DMA, “gatekeepers can no longer prevent their business users from informing their users within the app about cheaper options outside the gatekeeper’s ecosystem,” Vestager said. “That is called anti-steering and is now forbidden by law.”

Stakeholders told the EC that Apple’s and Google’s fee structures appear to “go against” the DMA’s “free of charge” requirement, Vestager said, because companies “still charge various recurring fees and still limit steering.”

This feedback pushed the EC to launch its first two probes under the DMA against Apple and Google.

“We will investigate to what extent these fees and limitations defeat the purpose of the anti-steering provision and by that, limit consumer choice,” Vestager said.

These probes aren’t the end of Apple’s potential app store woes in the EU, either. Breton said that the EC has “many questions on Apple’s new business model” for the app store. These include “questions on the process that Apple used for granting and terminating membership of” its developer program, following a scandal where Epic Games’ account was briefly terminated.

“We also have questions on the fee structure and several other aspects of the business model,” Breton said, vowing to “check if they allow for real opportunities for app developers in line with the letter and the spirit of the DMA.”


Users shocked to find Instagram limits political content by default

“I had no idea” —

Instagram never directly told users it was limiting political content by default.


Instagram users have started complaining on X (formerly Twitter) after discovering that Meta has begun limiting recommended political content by default.

“Did [y’all] know Instagram was actively limiting the reach of political content like this?!” an X user named Olayemi Olurin wrote in an X post with more than 150,000 views as of this writing. “I had no idea ’til I saw this comment and I checked my settings and sho nuff political content was limited.”

“Instagram quietly introducing a ‘political’ content preference and turning on ‘limit’ by default is insane?” wrote another X user named Matt in a post with nearly 40,000 views.

Instagram apparently did not notify users directly on the platform when this change happened.

Instead, Instagram rolled out the change in February, announcing in a blog that the platform doesn’t “want to proactively recommend political content from accounts you don’t follow.” That post confirmed that Meta “won’t proactively recommend content about politics on recommendation surfaces across Instagram and Threads,” so that those platforms can remain “a great experience for everyone.”

“This change does not impact posts from accounts people choose to follow; it impacts what the system recommends, and people can control if they want more,” Meta’s spokesperson Dani Lever told Ars. “We have been working for years to show people less political content based on what they told us they want, and what posts they told us are political.”

To change the setting, users can navigate to Instagram’s menu for “settings and activity” in their profiles, where they can update their “content preferences.” On this menu, “political content” is the last item under a list of “suggested content” controls that allow users to set preferences for what content is recommended in their feeds.

There are currently two options for controlling what political content users see. Choosing “don’t limit” means “you might see more political or social topics in your suggested content,” the app says. By default, all users are set to “limit,” which means “you might see less political or social topics.”

“This affects suggestions in Explore, Reels, Feed, Recommendations, and Suggested Users,” Instagram’s settings menu explains. “It does not affect content from accounts you follow. This setting also applies to Threads.”

For general Instagram and Threads users, this change primarily limits what posted content can be recommended, but for influencers using professional accounts, the stakes can be higher. The Washington Post reported that news creators were angered by the update, insisting that it diminished the value of the platform for reaching users not actively seeking political content.

“The whole value-add for social media, for political people, is that you can reach normal people who might not otherwise hear a message that they need to hear, like, abortion is on the ballot in Florida, or voting is happening today,” Keith Edwards, a Democratic political strategist and content creator, told The Post.

Meta’s blog noted that “professional accounts on Instagram will be able to use Account Status to check their eligibility to be recommended based on whether they recently posted political content. From Account Status, they can edit or remove recent posts, request a review if they disagree with our decision, or stop posting this type of content for a period of time, in order to be eligible to be recommended again.”

Ahead of a major election year, Meta’s change could impact political outreach attempting to inform voters. The change also came amid speculation that Meta was “shadowbanning” users posting pro-Palestine content since the start of the Israel-Hamas war, The Markup reported.

“Our investigation found that Instagram heavily demoted nongraphic images of war, deleted captions and hid comments without notification, suppressed hashtags, and limited users’ ability to appeal moderation decisions,” The Markup reported.

Meta appears to be interested in shifting away from its reputation as a platform where users expect political content—and misinformation—to thrive. Last year, The Wall Street Journal reported that Meta wanted out of politics and planned to “scale back how much political content it showed users,” after criticism over how the platform handled content related to the January 6 Capitol riot.

The decision to limit recommended political content on Instagram and Threads, Meta’s blog said, extends Meta’s “existing approach to how we treat political content.”

“People have told us they want to see less political content, so we have spent the last few years refining our approach on Facebook to reduce the amount of political content—including from politicians’ accounts—you see in Feed, Reels, Watch, Groups You Should Join, and Pages You May Like,” Meta wrote in a February blog update.

“As part of this, we aim to avoid making recommendations that could be about politics or political issues, in line with our approach of not recommending certain types of content to those who don’t wish to see it,” Meta’s blog continued, while at the same time, “preserving your ability to find and interact with political content that’s meaningful to you if that’s what you’re interested in.”

While platforms typically update users directly on the platform when terms of services change, that wasn’t the case for this update, which simply added new controls for users. That’s why many users who prefer to be recommended political content—and apparently missed Meta’s announcement and subsequent media coverage—expressed shock to discover that Meta was limiting what they see.

On X, even Instagram users who don’t love seeing political content are currently rallying to raise awareness and share tips on how to update the setting.

“This is actually kinda wild that Instagram defaults everyone to this,” one user named Laura wrote. “Obviously political content is toxic but during an election season it’s a little weird to just hide it from everyone?”


Facebook, Instagram may cut fees by nearly 50% in scramble for DMA compliance


Meta is considering cutting monthly subscription fees for Facebook and Instagram users in the European Union nearly in half to comply with the Digital Markets Act (DMA), Reuters reported.

During a day-long public workshop on Meta’s DMA compliance, Meta’s competition and regulatory director, Tim Lamb, told the European Commission (EC) that individual subscriber fees could be slashed from 9.99 euros to 5.99 euros per month. If Meta’s offer is accepted, each additional account would then cost 4 euros instead of 6 euros. Meta is hoping that reducing fees will help speed up the EC’s process for resolving Meta’s compliance issues.

Lamb said that these prices are “by far the lowest end of the range that any reasonable person should be paying for services of these quality,” calling it a “serious offer.”

The DMA requires that Meta’s users of Facebook, Instagram, Facebook Messenger, and Facebook Marketplace “freely” give consent to share data used for ad targeting without losing access to the platform if they’d prefer not to share data. That means services must provide an acceptable alternative for users who don’t consent to data sharing.

“Gatekeepers should enable end users to freely choose to opt-in to such data processing and sign-in practices by offering a less personalized but equivalent alternative, and without making the use of the core platform service or certain functionalities thereof conditional upon the end user’s consent,” the DMA says.

Designated gatekeepers like Meta have debated what it means for a user to “freely” give consent, suggesting that offering a paid subscription for users who decline to share data would be one route for Meta to continue offering high-quality services without routinely hoovering up data on all its users.

But EU privacy advocates like NOYB have protested Meta’s plan to offer a paid subscription as the alternative to consenting to data sharing, calling it a “pay or OK” model that forces Meta users who cannot pay the fee to consent to invasive data sharing they would otherwise decline. In a statement shared with Ars, NOYB chair Max Schrems said that even if Meta reduced its fees to 1.99 euros, it would be forcing consent from 99.9 percent of users.

“We know from all research that even a fee of just 1.99 euros or less leads to a shift in consent from 3–10 percent that genuinely want advertisement to 99.9 percent that still click yes,” Schrems said.

In the EU, the General Data Protection Regulation (GDPR) “requires that consent must be ‘freely’ given,” Schrems said. “In reality, it is not about the amount of money—it is about the ‘pay or OK’ approach as a whole. The entire purpose of ‘pay or OK’, is to get users to click on OK, even if this is not their free and genuine choice. We do not think the mere change of the amount makes this approach legal.”

Where EU stands on subscription models

Meta expects that a subscription model is a legal alternative under the DMA. The tech giant said it was launching EU subscriptions last November after the Court of Justice of the European Union (CJEU) “endorsed the subscriptions model as a way for people to consent to data processing for personalized advertising.”

It’s unclear how popular the subscriptions have been at the current higher cost. Right now in the EU, monthly Facebook and Instagram subscriptions cost 9.99 euros per month on the web or 12.99 euros per month on iOS and Android, with additional fees of 6 euros per month on the web and 8 euros per month on iOS and Android for each additional account. Meta declined to comment on how many EU users have subscribed, noting to Ars that it has no obligation to do so.

In the CJEU case, the court was reviewing Meta’s GDPR compliance, which Schrems noted is less strict than the DMA. The CJEU specifically said that under the GDPR, “users must be free to refuse individually”—”in the context of” signing up for services— “to give their consent to particular data processing operations not necessary” for Meta to provide such services “without being obliged to refrain entirely from using the service.”


Meta sues “brazenly disloyal” former exec over stolen confidential docs


A recently unsealed court filing has revealed that Meta has sued a former senior employee for “brazenly disloyal and dishonest conduct” while leaving Meta for an AI data startup called Omniva that The Information has described as “mysterious.”

According to Meta, its former vice president of infrastructure, Dipinder Singh Khurana (also known as T.S.), allegedly used his access to “confidential, non-public, and highly sensitive” information to steal more than 100 internal documents in a rushed scheme to poach Meta employees and borrow Meta’s business plans to speed up Omniva’s negotiations with key Meta suppliers.

Meta believes that Omniva—which Data Center Dynamics (DCD) reported recently “pivoted from crypto to AI cloud”—is “seeking to provide AI cloud computing services at scale, including by designing and constructing data centers.” But it was held back by a “lack of data center expertise at the top,” DCD reported.

The Information reported that Omniva began hiring Meta employees to fill the gaps in this expertise, including wooing Khurana away from Meta.

Last year, Khurana notified Meta that he was leaving on May 15, and that’s when Meta first observed Khurana’s allegedly “utter disregard for his contractual and legal obligations to Meta—including his confidentiality obligations to Meta set forth in the Confidential Information and Invention Assignment Agreement that Khurana signed when joining Meta.”

A Meta investigation found that during Khurana’s last two weeks at the company, he allegedly uploaded confidential Meta documents—including “information about Meta’s ‘Top Talent,’ performance information for hundreds of Meta employees, and detailed employee compensation information”—on Meta’s network to a Dropbox folder labeled with his new employer’s name.

“Khurana also uploaded several of Meta’s proprietary, highly sensitive, confidential, and non-public contracts with business partners who supply Meta with crucial components for its data centers,” Meta alleged. “And other documents followed.”

In addition to pulling documents, Khurana also allegedly sent “urgent” requests to subordinates for confidential information on a key supplier, including Meta’s pricing agreement “for certain computing hardware.”

“Unaware of Khurana’s plans, the employee provided Khurana with, among other things, Meta’s pricing-form agreement with that supplier for the computing hardware and the supplier’s Meta-specific preliminary pricing for a particular chip,” Meta alleged.

Some of these documents were “expressly marked confidential,” Meta alleged. Those include a three-year business plan and PowerPoints regarding “Meta’s future ‘roadmap’ with a key supplier” and “Meta’s 2022 redesign of its global-supply-chain group” that Meta alleged “would directly aid Khurana in building his own efficient and effective supply-chain organization” and afford a path for Omniva to bypass “years of investment.” Khurana also allegedly “uploaded a PowerPoint discussing Meta’s use of GPUs for artificial intelligence.”

Meta was apparently tipped off to this alleged betrayal when Khurana used his Meta email and network access to complete a writing assignment for Omniva as part of his hiring process. For this writing assignment, Khurana “disclosed non-public information about Meta’s relationship with certain suppliers that it uses for its data centers” when asked to “explain how he would help his potential new employer develop the supply chain for a company building data centers using specific technologies.”

In a seeming attempt to cover up the alleged theft of Meta documents, Khurana apparently “attempted to scrub” one document “of its references to Meta,” as well as removing a label marking it “CONFIDENTIAL—FOR INTERNAL USE ONLY.” But when replacing “Meta” with “X,” Khurana allegedly missed the term “Meta” in “at least five locations.”

“Khurana took such action to try and benefit himself or his new employer, including to help ensure that Khurana would continue to work at his new employer, continue to receive significant compensation from his new employer, and/or to enable Khurana to take shortcuts in building his supply-chain team at his new employer and/or helping to build his new employer’s business,” Meta alleged.

Ars could not immediately reach Khurana for comment. Meta noted that he has repeatedly denied breaching his contract or initiating contact with Meta employees who later joined Omniva. He also allegedly refused to sign a termination agreement that reiterates his confidentiality obligations.


Law enforcement doesn’t want to be “customer service” reps for Meta any more

No help —

“Dramatic and persistent spike” in account takeovers is “substantial drain” on resources.

Meta has a verified program for users of Facebook and Instagram. (Getty Images | Chesnot)

Forty-one state attorneys general penned a letter to Meta’s top attorney on Wednesday saying complaints are skyrocketing across the United States about Facebook and Instagram user accounts being stolen and declaring “immediate action” necessary to mitigate the rolling threat.

The coalition of top law enforcement officials, spearheaded by New York Attorney General Letitia James, says the “dramatic and persistent spike” in complaints concerning account takeovers amounts to a “substantial drain” on governmental resources, as many stolen accounts are also tied to financial crimes—some of which allegedly profit Meta directly.

“We have received a number of complaints of threat actors fraudulently charging thousands of dollars to stored credit cards,” says the letter addressed to Meta’s chief legal officer, Jennifer Newstead. “Furthermore, we have received reports of threat actors buying advertisements to run on Meta.”

“We refuse to operate as the customer service representatives of your company,” the officials add. “Proper investment in response and mitigation is mandatory.”

In addition to New York, the letter is signed by attorneys general from Alabama, Alaska, Arizona, California, Colorado, Connecticut, Delaware, Florida, Georgia, Hawaii, Illinois, Iowa, Kentucky, Louisiana, Maryland, Massachusetts, Michigan, Minnesota, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, North Carolina, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Utah, Vermont, Virginia, Washington, West Virginia, Wisconsin, Wyoming, and the District of Columbia.

“Scammers use every platform available to them and constantly adapt to evade enforcement. We invest heavily in our trained enforcement and review teams and have specialized detection tools to identify compromised accounts and other fraudulent activity,” Meta says in a statement provided by spokesperson Erin McPike. “We regularly share tips and tools people can use to protect themselves, provide a means to report potential violations, work with law enforcement, and take legal action.”

Account takeovers can occur as a result of phishing as well as other more sophisticated and targeted techniques. Once an attacker gains access to an account, the owner can be easily locked out by changing passwords and contact information. Private messages and personal information are left up for grabs for a variety of nefarious purposes, from impersonation and fraud to pushing misinformation.

“It’s basically a case of identity theft and Facebook is doing nothing about it,” said one user whose complaint was cited in the letter to Meta’s Newstead.

The state officials said the accounts that were stolen to run ads on Facebook often run afoul of its rules while doing so, leading them to be permanently suspended, punishing the victims—often small business owners—twice over.

“Having your social media account taken over by a scammer can feel like having someone sneak into your home and change all of the locks,” New York’s James said in a statement. “Social media is how millions of Americans connect with family, friends, and people throughout their communities and the world. To have Meta fail to properly protect users from scammers trying to hijack accounts and lock rightful owners out is unacceptable.”

Other complaints forwarded to Newstead show hacking victims expressing frustration over Meta’s lack of response. In many cases, users report no action being taken by the company. Some say the company encourages users to report such problems but never responds, leaving them unable to salvage their accounts or the businesses they built around them.

After being hacked and defrauded of $500, one user complained that their ability to communicate with their own customer base had been “completely disrupted,” and that Meta had never responded to the report they filed, though the user had followed the instructions the company provided them to obtain help.

“I can’t get any help from Meta. There is no one to talk to and meanwhile all my personal pictures are being used. My contacts are receiving false information from the hacker,” one user wrote.

Wrote another: “This is my business account, which is important to me and my life. I have invested my life, time, money and soul in this account. All attempts to contact and get a response from the Meta company, including Instagram and Facebook, were crowned with complete failure, since the company categorically does not respond to letters.”

Figures provided by James’ office in New York show a tenfold increase in complaints between 2019 and 2023—from 73 complaints to more than 780 last year. In January alone, more than 128 complaints were received, James’ office says. Other states saw similar spikes in complaints during that period, according to the letter, with Pennsylvania recording a 270 percent increase, North Carolina a 330 percent jump, and Vermont a 740 percent surge.

The letter notes that, while the officials cannot be “certain of any connection,” the drastic increase in complaints occurred “around the same time” as layoffs at Meta affecting roughly 11,000 employees in November 2022, around 13 percent of its staff at the time.

This story originally appeared on wired.com.


WhatsApp finally forces Pegasus spyware maker to share its secret code

In on the secret —

Israeli spyware maker loses fight to only share information on installation.


WhatsApp will soon be granted access to explore the “full functionality” of the NSO Group’s Pegasus spyware—sophisticated malware the Israeli Ministry of Defense has long guarded as a “highly sought” state secret, The Guardian reported.

Since 2019, WhatsApp has pushed for access to the NSO’s spyware code after alleging that Pegasus was used to spy on 1,400 WhatsApp users over a two-week period, gaining unauthorized access to their sensitive data, including encrypted messages. WhatsApp suing the NSO, Ars noted at the time, was “an unprecedented legal action” that took “aim at the unregulated industry that sells sophisticated malware services to governments around the world.”

Initially, the NSO sought to block all discovery in the lawsuit “due to various US and Israeli restrictions,” but that blanket request was denied. Then, last week, the NSO lost another fight to keep WhatsApp away from its secret code.

As the court considered each side’s motions to compel discovery, a US district judge, Phyllis Hamilton, rejected the NSO’s argument that it should only be required to hand over information about Pegasus’ installation layer.

Hamilton sided with WhatsApp, granting the Meta-owned app’s request for “information concerning the full functionality of the relevant spyware,” writing that “information showing the functionality of only the installation layer of the relevant spyware would not allow plaintiffs to understand how the relevant spyware performs the functions of accessing and extracting data.”

WhatsApp has alleged that Pegasus can “intercept communications sent to and from a device, including communications over iMessage, Skype, Telegram, WeChat, Facebook Messenger, WhatsApp, and others” and that it could also be “customized for different purposes, including to intercept communications, capture screenshots, and exfiltrate browser history.”

To prove this, WhatsApp needs access to “all relevant spyware”—specifically “any NSO spyware targeting or directed at WhatsApp servers, or using WhatsApp in any way to access Target Devices”—for “a period of one year before the alleged attack to one year after the alleged attack,” Hamilton concluded.

The NSO has so far not commented on the order, but WhatsApp was pleased with this outcome.

“The recent court ruling is an important milestone in our long running goal of protecting WhatsApp users against unlawful attacks,” WhatsApp’s spokesperson told The Guardian. “Spyware companies and other malicious actors need to understand they can be caught and will not be able to ignore the law.”

But Hamilton did not grant all of WhatsApp’s requests for discovery, sparing the NSO from sharing specific information regarding its server architecture because WhatsApp “would be able to glean the same information from the full functionality of the alleged spyware.”

Perhaps more significantly, the NSO also won’t be compelled to identify its clients. While the NSO does not publicly name the governments that purchase its spyware, reports indicate that Poland, Saudi Arabia, Rwanda, India, Hungary, and the United Arab Emirates have used it to target dissidents, The Guardian reported. In 2021, the US blacklisted the NSO for allegedly spreading “digital tools used for repression.”

In the same order, Hamilton also denied the NSO’s request to compel WhatsApp to share its post-complaint communications with the Citizen Lab, which served as a third-party witness in the case to support WhatsApp’s argument that “Pegasus is misused by NSO’s customers against ‘civil society.’”

It appeared that the NSO sought WhatsApp’s post-complaint communications with Citizen Lab as a way to potentially pressure WhatsApp into dropping Citizen Lab’s statement from the record. Hamilton quoted a court filing from the NSO that curiously noted: “If plaintiffs would agree to withdraw from their case Citizen Lab’s contention that Pegasus was used against members of ‘civil society’ rather than to investigate terrorism and serious crime, there would be much less need for this discovery.”

Ultimately, Hamilton denied the NSO’s request because “the court fails to see the relevance of the requested discovery.”

As discovery in the case proceeds, the court expects to receive expert disclosures from each side on August 30 before the trial, which is expected to start on March 3, 2025.


Google goes “open AI” with Gemma, a free, open-weights chatbot family

Free hallucinations for all —

Gemma chatbots can run locally, and they reportedly outperform Meta’s Llama 2.


On Wednesday, Google announced a new family of AI language models called Gemma, which are free, open-weights models built on technology similar to the more powerful but closed Gemini models. Unlike Gemini, Gemma models can run locally on a desktop or laptop computer. It’s Google’s first significant open large language model (LLM) release since OpenAI’s ChatGPT started a frenzy for AI chatbots in 2022.

Gemma models come in two sizes: Gemma 2B (2 billion parameters) and Gemma 7B (7 billion parameters), each available in pre-trained and instruction-tuned variants. In AI, parameters (commonly called weights) are the numerical values in a neural network, learned during training, that determine the model’s behavior; an “open-weights” release makes those values available as downloadable files.
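Those “billions of parameters” figures translate directly into download size and memory requirements, which is what makes running a model locally practical or not. A rough, illustrative sketch (the function and byte counts are our own back-of-the-envelope assumptions, not Google’s published figures):

```python
# Back-of-the-envelope estimate: a weight file is roughly
# parameter_count * bytes_per_parameter in size.
def checkpoint_size_gb(num_params: int, bytes_per_param: int = 2) -> float:
    """Rough weight-file size in GB; 2 bytes/param for 16-bit formats."""
    return num_params * bytes_per_param / 1e9

print(checkpoint_size_gb(2_000_000_000))  # Gemma 2B at 16 bits: 4.0 GB
print(checkpoint_size_gb(7_000_000_000))  # Gemma 7B at 16 bits: 14.0 GB
```

By this estimate, a 7B model pushes the memory limits of a typical laptop, while a 2B model fits comfortably—one reason small open-weights models are pitched for local use.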

Developed by Google DeepMind and other Google AI teams, Gemma pulls from techniques learned during the development of Gemini, which is the family name for Google’s most capable (public-facing) commercial LLMs, including the ones that power its Gemini AI assistant. Google says the name comes from the Latin gemma, which means “precious stone.”

While Gemma is Google’s first major open LLM since the launch of ChatGPT (it has released smaller research models such as FLAN-T5 in the past), it’s not Google’s first contribution to open AI research. The company cites the development of the Transformer architecture, as well as releases like TensorFlow, BERT, T5, and JAX as key contributions, and it would not be controversial to say that those have been important to the field.

A chart of Gemma performance provided by Google. Google says that Gemma outperforms Meta’s Llama 2 on several benchmarks.

Owing to lesser capability and high confabulation rates, smaller open-weights LLMs have been more like tech demos until recently, as some larger ones have begun to match GPT-3.5 performance levels. Still, experts see source-available and open-weights AI models as essential steps in ensuring transparency and privacy in chatbots. Gemma is not “open source,” however, since that term usually refers to a specific type of software license with few restrictions attached.

In reality, Gemma feels like a conspicuous play to match Meta, which has made a big deal out of releasing open-weights models (such as LLaMA and Llama 2) since February of last year. That technique stands in opposition to AI models like OpenAI’s GPT-4 Turbo, which is only available through the ChatGPT application and a cloud API and cannot be run locally. A Reuters report on Gemma focuses on the Meta angle and surmises that Google hopes to attract more developers to its Vertex AI cloud platform.

We have not used Gemma yet; however, Google claims the 7B model outperforms Meta’s Llama 2 7B and 13B models on several benchmarks for math, Python code generation, general knowledge, and commonsense reasoning tasks. It’s available today through Kaggle, a machine-learning community platform, and Hugging Face.

In other news, Google paired the Gemma release with a “Responsible Generative AI Toolkit,” which Google hopes will offer guidance and tools for developing what the company calls “safe and responsible” AI applications.


Elon Musk’s X allows China-based propaganda banned on other platforms

Rinse, wash, repeat —

X accused of overlooking propaganda flagged by Meta and criminal prosecutors.


Lax content moderation on X (aka Twitter) has disrupted coordinated efforts between social media companies and law enforcement to tamp down on “propaganda accounts controlled by foreign entities aiming to influence US politics,” The Washington Post reported.

Now propaganda is “flourishing” on X, The Post said, while other social media companies are stuck in endless cycles, watching some of the propaganda that they block proliferate on X, then inevitably spread back to their platforms.

Meta, Google, and then-Twitter began coordinating takedown efforts with law enforcement and disinformation researchers after Russian-backed influence campaigns manipulated their platforms in hopes of swaying the 2016 US presidential election.

The next year, all three companies promised Congress to work tirelessly to stop Russian-backed propaganda from spreading on their platforms. The companies created explicit election misinformation policies and began meeting biweekly to compare notes on propaganda networks each platform uncovered, according to The Post’s interviews with anonymous sources who participated in these meetings.

However, after Elon Musk purchased Twitter and rebranded the company as X, his company withdrew from the alliance in May 2023.

Sources told The Post that the last X meeting attendee was Irish intelligence expert Aaron Rodericks—who was allegedly disciplined for liking an X post calling Musk “a dipshit.” Rodericks was subsequently laid off when Musk dismissed the entire election integrity team last September, and after that, X apparently ditched the biweekly meeting entirely and “just kind of disappeared,” a source told The Post.

In 2023, for example, Meta flagged 150 “artificial influence accounts” identified on its platform, of which “136 were still present on X as of Thursday evening,” according to The Post’s analysis. X’s seeming oversight extends to all but eight of the 123 “deceptive China-based campaigns” connected to accounts that Meta flagged last May, August, and December, The Post reported.

The Post’s report also provided an exclusive analysis from the Stanford Internet Observatory (SIO), which found that 86 propaganda accounts that Meta flagged last November “are still active on X.”

The majority of these accounts—81—were China-based accounts posing as Americans, SIO reported. These accounts frequently ripped photos from Americans’ LinkedIn profiles, then changed the real Americans’ names while posting about both China and US politics, as well as people often trending on X, such as Musk and Joe Biden.

Meta has warned that China-based influence campaigns are “multiplying,” The Post noted, while X’s standards remain seemingly too relaxed. Even accounts linked to criminal investigations remain active on X. One “account that is accused of being run by the Chinese Ministry of Public Security,” The Post reported, remains on X despite its posts being cited by US prosecutors in a criminal complaint.

Prosecutors connected that account to “dozens” of X accounts attempting to “shape public perceptions” about the Chinese Communist Party, the Chinese government, and other world leaders. The accounts also comment on hot-button topics like the fentanyl problem or police brutality, seemingly to convey “a sense of dismay over the state of America without any clear partisan bent,” Elise Thomas, an analyst for a London nonprofit called the Institute for Strategic Dialogue, told The Post.

Some X accounts flagged by The Post had more than 1 million followers. Five have paid X for verification, suggesting that their disinformation campaigns—targeting hashtags to confound discourse on US politics—are seemingly being boosted by X.

SIO technical research manager Renée DiResta criticized X’s decision to stop coordinating with other platforms.

“The presence of these accounts reinforces the fact that state actors continue to try to influence US politics by masquerading as media and fellow Americans,” DiResta told The Post. “Ahead of the 2022 midterms, researchers and platform integrity teams were collaborating to disrupt foreign influence efforts. That collaboration seems to have ground to a halt, Twitter does not seem to be addressing even networks identified by its peers, and that’s not great.”

Musk shut down X’s election integrity team because he claimed that the team was actually “undermining” election integrity. But analysts are bracing for floods of misinformation to sway 2024 elections, as some major platforms have removed election misinformation policies just as rapid advances in AI technologies have made misinformation spread via text, images, audio, and video harder for the average person to detect.

In one prominent example, a fake robocall used AI voice technology to pose as Biden and tell Democrats not to vote. That incident seemingly pushed the Federal Trade Commission on Thursday to propose penalizing AI impersonation.

It seems apparent that propaganda accounts from foreign entities on X will use every tool available to get eyes on their content, perhaps expecting Musk’s platform to be the slowest to police them. According to The Post, some of the X accounts spreading propaganda are using what appears to be AI-generated images of Biden and Donald Trump to garner tens of thousands of views on posts.

It’s possible that X will start tightening up on content moderation as elections draw closer. Yesterday, X joined Amazon, Google, Meta, OpenAI, TikTok, and other Big Tech companies in signing an agreement to fight “deceptive use of AI” during 2024 elections. Among the top goals identified in the “AI Elections accord” are identifying where propaganda originates, detecting how propaganda spreads across platforms, and “undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing” with propaganda.


AMC to pay $8M for allegedly violating 1988 law with use of Meta Pixel

Stream like no one is watching —

Proposed settlement impacts millions using AMC apps like Shudder and AMC+.


On Thursday, AMC notified subscribers of a proposed $8.3 million settlement that provides awards to an estimated 6 million subscribers of its six streaming services: AMC+, Shudder, Acorn TV, ALLBLK, SundanceNow, and HIDIVE.

The settlement comes in response to allegations that AMC illegally shared subscribers’ viewing history with tech companies like Google, Facebook, and X (aka Twitter) in violation of the Video Privacy Protection Act (VPPA).

Passed in 1988, the VPPA prohibits AMC and other video service providers from sharing “information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” It was originally passed to protect individuals’ right to private viewing habits, after a journalist published the mostly unrevealing video rental history of a judge, Robert Bork, who had been nominated to the Supreme Court by Ronald Reagan.

The so-called “Bork Tapes” revealed little—other than that the judge frequently rented spy thrillers and British costume dramas—but lawmakers recognized that speech could be chilled by monitoring anyone’s viewing habits. While the law was born in the era of Blockbuster Video, subscribers suing AMC wrote in their amended complaint that “the importance of legislation like the VPPA in the modern era of datamining is more pronounced than ever before.”

According to subscribers suing, AMC allegedly installed tracking technologies—including the Meta Pixel, the X Tracking Pixel, and Google Tracking Technology—on its website, allowing their personally identifying information to be connected with their viewing history.

Some trackers, like the Meta Pixel, required AMC to choose what kinds of activity could be tracked, and subscribers claimed that AMC had willingly opted into sharing video names and URLs with Meta, along with a Facebook ID. “Anyone” could use the Facebook ID, subscribers said, to identify the AMC subscribers “simply by entering https://www.facebook.com/[unencrypted FID]/” into a browser.
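The de-anonymization step the complaint describes requires no special tooling: an unencrypted Facebook ID drops straight into a public profile URL. A minimal sketch of that lookup (the FID below is invented for illustration):

```python
# Build the public profile URL from an unencrypted Facebook ID (FID),
# as described in the subscribers' complaint. This FID is made up.
def profile_url(fid: str) -> str:
    return f"https://www.facebook.com/{fid}/"

print(profile_url("100001234567890"))
# https://www.facebook.com/100001234567890/
```

Opening that URL in a browser resolves the numeric ID to a profile, which is how an FID shared alongside viewing data could tie a named person to specific videos.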

X’s ID could similarly be de-anonymized, subscribers alleged, by using tweeterid.com.

AMC “could easily program its AMC Services websites so that this information is not disclosed” to tech companies, subscribers alleged.

Denying wrongdoing, AMC has defended its use of tracking technologies but is proposing to settle with subscribers to avoid uncertain outcomes from litigation, the proposed settlement said.

A hearing to approve the proposed settlement has been scheduled for May 16.

If it’s approved, AMC has agreed to “suspend, remove, or modify operation of the Meta Pixel and other Third-Party Tracking Technologies so that use of such technologies on AMC Services will not result in AMC’s disclosure to the third-party technology companies of the specific video content requested or obtained by a specific individual.”

Google and X did not immediately respond to Ars’ request to comment. Meta declined to comment.

All registered users of AMC services who “requested or obtained video content on at least one of the six AMC services” between January 18, 2021, and January 10, 2024, are currently eligible to submit claims under the proposed settlement. The deadline to submit is April 9.

In addition to distributing the $8.3 million settlement fund among class members, subscribers will receive a free one-week digital subscription.

According to AMC’s notice to subscribers (full disclosure, I am one), AMC’s agreement to avoid sharing subscribers’ viewing histories may change if the VPPA is amended, repealed, or invalidated. If the law changes to permit sharing viewing data at the core of subscribers’ claim, AMC may resume sharing that information with tech companies.

That day could come soon if Patreon has its way. Recently, Patreon asked a federal judge to rule that the VPPA is unconstitutional.

The lawsuit against Patreon similarly centers on the Meta Pixel, which allegedly violated the VPPA by sharing users’ video views on the platform with Meta.

Patreon has argued that the VPPA is unconstitutional because it chills speech. Patreon said that the law was enacted “for the express purpose of silencing disclosures about political figures and their video-watching, an issue of undisputed continuing public interest and concern.”

According to Patreon, the VPPA narrowly prohibits video service providers from sharing video titles, but not from sharing information that people may wish to keep private, such as “the genres, performers, directors, political views, sexual content, and every other detail of pre-recorded video that those consumers watch.”

Therefore, Patreon argued, the VPPA “restrains speech” while “doing little if anything to protect privacy” and never protecting privacy “by the least restrictive means.”

That lawsuit remains ongoing, but Patreon’s position is likely to be met with opposition from experts who typically also defend freedom of speech. Experts at the Electronic Privacy Information Center, like AMC subscribers suing, consider the VPPA one of America’s “strongest protections of consumer privacy against a specific form of data collection.” And the Electronic Frontier Foundation (EFF) has already moved to convince the court to reject Patreon’s claim, describing the VPPA in a blog as an “essential” privacy protection.

“EFF is second to none in fighting for everyone’s First Amendment rights in court,” EFF’s blog said. “But Patreon’s First Amendment argument is wrong and misguided. The company seeks to elevate its speech interests over those of Internet users who benefit from the VPPA’s protections.”


Facebook rules allowing fake Biden “pedophile” video deemed “incoherent”

Not to be misled —

Meta may revise AI policies that experts say overlook “more misleading” content.


A fake video manipulated to falsely depict President Joe Biden inappropriately touching his granddaughter has revealed flaws in Facebook’s “deepfake” policies, Meta’s Oversight Board concluded Monday.

Last year when the Biden video went viral, Facebook repeatedly ruled that it did not violate policies on hate speech, manipulated media, or bullying and harassment. Since the Biden video is not AI-generated content and does not manipulate the president’s speech—making him appear to say things he’s never said—the video was deemed OK to remain on the platform. Meta also noted that the video was “unlikely to mislead” the “average viewer.”

“The video does not depict President Biden saying something he did not say, and the video is not the product of artificial intelligence or machine learning in a way that merges, combines, replaces, or superimposes content onto the video (the video was merely edited to remove certain portions),” Meta’s blog said.

The Oversight Board—an independent panel of experts—reviewed the case and ultimately upheld Meta’s decision despite being “skeptical” that current policies work to reduce harms.

“The board sees little sense in the choice to limit the Manipulated Media policy to cover only people saying things they did not say, while excluding content showing people doing things they did not do,” the board said, noting that Meta claimed this distinction was made because “videos involving speech were considered the most misleading and easiest to reliably detect.”

The board called upon Meta to revise its “incoherent” policies that it said appear to be more concerned with regulating how content is created, rather than with preventing harms. For example, the Biden video’s caption described the president as a “sick pedophile” and called out anyone who would vote for him as “mentally unwell,” which could affect “electoral processes” that Meta could choose to protect, the board suggested.

“Meta should reconsider this policy quickly, given the number of elections in 2024,” the Oversight Board said.

One problem, the Oversight Board suggested, is that in its rush to combat AI technologies that make generating deepfakes a fast, cheap, and easy business, Meta policies currently overlook less technical ways of manipulating content.

Instead of using AI, the Biden video relied on basic video-editing technology to edit out the president placing an “I Voted” sticker on his adult granddaughter’s chest. The crude edit looped a 7-second clip altered to make the president appear to be, as Meta described in its blog, “inappropriately touching a young woman’s chest and kissing her on the cheek.”

Meta making this distinction is confusing, the board said, partly because videos altered using non-AI technologies are not considered less misleading or less prevalent on Facebook.

The board recommended that Meta update its policies to cover not just AI-generated videos but other forms of manipulated media, including all forms of manipulated video and audio. Audio fakes, which the policy does not currently cover, offer fewer cues to alert listeners to a recording’s inauthenticity, the board warned, and may even be considered “more misleading than video content.”

Notably, earlier this year, a fake Biden robocall attempted to mislead Democratic voters in New Hampshire by encouraging them not to vote. The Federal Communications Commission promptly responded by declaring AI-generated robocalls illegal, but the Federal Election Commission was not able to act as swiftly to regulate AI-generated misleading campaign ads easily spread on social media, AP reported. In a statement, Oversight Board Co-Chair Michael McConnell said that manipulated audio is “one of the most potent forms of electoral disinformation.”

To better combat known harms, the board suggested that Meta revise its Manipulated Media policy to “clearly specify the harms it is seeking to prevent.”

Rather than pushing Meta to remove more content, however, the board urged Meta to use “less restrictive” methods of coping with fake content, such as relying on fact-checkers applying labels noting that content is “significantly altered.” In public comments, some Facebook users agreed that labels would be most effective. Others urged Meta to “start cracking down” and remove all fake videos, with one suggesting that removing the Biden video should have been a “deeply easy call.” Another commenter suggested that the Biden video should be considered acceptable speech, as harmless as a funny meme.

While the board wants Meta to also expand its policies to cover all forms of manipulated audio and video, it cautioned that including manipulated photos in the policy could “significantly expand” the policy’s scope and make it harder to enforce.

“If Meta sought to label videos, audio, and photographs but only captured a small portion, this could create a false impression that non-labeled content is inherently trustworthy,” the board warned.

Meta should therefore stop short of adding manipulated images to the policy, the board said. Instead, Meta should conduct research into the effects of manipulated photos and then consider updates when the company is prepared to enforce a ban on manipulated photos at scale, the board recommended. In the meantime, Meta should move quickly to update policies ahead of a busy election year where experts and politicians globally are bracing for waves of misinformation online.

“The volume of misleading content is rising, and the quality of tools to create it is rapidly increasing,” McConnell said. “Platforms must keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public.”

Meta’s spokesperson told Ars that Meta is “reviewing the Oversight Board’s guidance and will respond publicly to their recommendations within 60 days.”
