Malicious hackers are exploiting a critical vulnerability in a widely used security camera to spread Mirai, a family of malware that wrangles infected Internet of Things devices into large networks for use in attacks that take down websites and other Internet-connected devices.
The attacks target the AVM1203, a surveillance device from Taiwan-based manufacturer AVTECH, network security provider Akamai said Wednesday. Unknown attackers have been exploiting the 5-year-old vulnerability since March. The vulnerability, tracked as CVE-2024-7029, is easy to exploit and allows attackers to execute malicious code. The AVM1203 is no longer sold or supported, so no update is available to fix the critical flaw.
That time a ragtag army shook the Internet
Akamai said that the attackers are exploiting the vulnerability so they can install a variant of Mirai, which arrived in September 2016 when a botnet of infected devices took down cybersecurity news site Krebs on Security. Mirai contained functionality that allowed a ragtag army of compromised webcams, routers, and other types of IoT devices to wage distributed denial-of-service attacks of record-setting sizes. In the weeks that followed, the Mirai botnet delivered similar attacks on Internet service providers and other targets. One such attack, against dynamic domain name provider Dyn, paralyzed vast swaths of the Internet.
Complicating attempts to contain Mirai, its creators released the malware to the public, a move that allowed virtually anyone to create their own botnets that delivered DDoSes of once-unimaginable size.
Kyle Lefton, a security researcher with Akamai’s Security Intelligence and Response Team, said in an email that it has observed the threat actor behind the attacks perform DDoS attacks against “various organizations,” which he didn’t name or describe further. So far, the team hasn’t seen any indication the threat actors are monitoring video feeds or using the infected cameras for other purposes.
Akamai detected the activity using a “honeypot” of devices that mimic the cameras on the open Internet to observe any attacks that target them. The technique doesn’t allow the researchers to measure the botnet’s size. The US Cybersecurity and Infrastructure Security Agency warned of the vulnerability earlier this month.
The technique, however, has allowed Akamai to capture the code used to compromise the devices. It targets a vulnerability that has been known since at least 2019, when exploit code became public. The flaw resides in the “brightness argument in the ‘action=’ parameter” and allows for command injection, researchers wrote. The vulnerability, discovered by Akamai researcher Aline Eliovich, wasn’t formally recognized until this month, with the publication of CVE-2024-7029.
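For defenders who want to check their own logs, the exploit pattern Akamai describes is simple enough to grep for. Below is a minimal, illustrative detector in Python based solely on the public description of CVE-2024-7029 (a command injected through the brightness argument of Factory.cgi’s “action=” parameter); the regex and log format are our assumptions, not Akamai’s detection logic, and it expects URL-decoded log lines.

```python
import re
import sys

# Requests to the vulnerable AVTECH endpoint whose brightness argument
# carries shell metacharacters: the command-injection shape described
# for CVE-2024-7029. The pattern is an assumption based on the public
# write-up, not a vendor signature, and expects URL-decoded log lines.
EXPLOIT_PATTERN = re.compile(
    r"/cgi-bin/supervisor/Factory\.cgi\?action=.*brightness[^&\s]*[;`$|(]",
    re.IGNORECASE,
)

def scan_log(path):
    """Print log lines that look like CVE-2024-7029 exploit attempts."""
    with open(path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if EXPLOIT_PATTERN.search(line):
                print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    for log_file in sys.argv[1:]:
        scan_log(log_file)
```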
Wednesday’s post went on to say:
How does it work?
This vulnerability was originally discovered by examining our honeypot logs. Figure 1 shows the decoded URL for clarity.
Fig. 1: Decoded payload body of the exploit attempts. (credit: Akamai)
The vulnerability lies in the brightness function within the file /cgi-bin/supervisor/Factory.cgi (Figure 2).
In the exploit examples we observed, exploiting this vulnerability allows an attacker to execute remote code on a target system.
Figure 3 is an example of a threat actor exploiting this flaw to download and run a JavaScript file to fetch and load their main malware payload. Similar to many other botnets, this one is also spreading a variant of Mirai malware to its targets.
Fig. 3: Strings from the JavaScript downloader. (credit: Akamai)
In this instance, the botnet is likely using the Corona Mirai variant, which has been referenced by other vendors as early as 2020 in relation to COVID-19.
Upon execution, the malware connects to a large number of hosts through Telnet on ports 23, 2323, and 37215. It also prints the string “Corona” to the console on an infected host (Figure 4).
Fig. 4: Execution of malware showing output to console. (credit: Akamai)
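That spreading behavior is also observable from the host side. As a purely illustrative sketch (not Akamai’s tooling), the Python snippet below uses the third-party psutil library to flag outbound TCP connections to the three Telnet ports named above; real Mirai detection happens at network scale, but the idea is the same.

```python
import psutil  # third-party: pip install psutil

# Ports the Corona Mirai variant reportedly scans over Telnet.
SUSPECT_PORTS = {23, 2323, 37215}

def flag_telnet_connections():
    """Print processes with outbound connections to Mirai-style Telnet ports."""
    for conn in psutil.net_connections(kind="tcp"):
        if conn.raddr and conn.raddr.port in SUSPECT_PORTS:
            try:
                name = psutil.Process(conn.pid).name() if conn.pid else "?"
            except psutil.NoSuchProcess:
                name = "?"
            print(f"pid={conn.pid} ({name}) -> "
                  f"{conn.raddr.ip}:{conn.raddr.port} [{conn.status}]")

if __name__ == "__main__":
    flag_telnet_connections()
```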
Static analysis of the strings in the malware samples shows targeting of the path /ctrlt/DeviceUpgrade_1 in an attempt to exploit Huawei devices affected by CVE-2017-17215. The samples have two hard-coded command and control IP addresses, one of which is part of the CVE-2017-17215 exploit code:
The botnet also targeted several other vulnerabilities including a Hadoop YARN RCE, CVE-2014-8361, and CVE-2017-17215. We have observed these vulnerabilities exploited in the wild several times, and they continue to be successful.
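Static analysis of the kind quoted above can be approximated in a few lines. Below is a minimal sketch of a strings-style pass that pulls printable runs out of a sample and flags anything resembling an IPv4 address or the Huawei exploit path; the thresholds and patterns are illustrative choices, not the researchers’ methodology.

```python
import re
import sys

# Printable ASCII runs of 6+ bytes, roughly what the `strings` tool emits.
PRINTABLE_RUN = re.compile(rb"[\x20-\x7e]{6,}")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def extract_iocs(path):
    """Yield strings from a binary that resemble C2 IPs or known exploit paths."""
    with open(path, "rb") as fh:
        data = fh.read()
    for match in PRINTABLE_RUN.finditer(data):
        text = match.group().decode("ascii", errors="replace")
        if IPV4.search(text) or "/ctrlt/DeviceUpgrade_1" in text:
            yield text

if __name__ == "__main__":
    for sample in sys.argv[1:]:
        for ioc in extract_iocs(sample):
            print(f"{sample}: {ioc}")
```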
Given that this camera model is no longer supported, the best course of action for anyone using one is to replace it. As with all Internet-connected devices, IoT devices should never be accessible using the default credentials that shipped with them.
Pavel Durov, Telegram founder and former CEO of VKontakte, in happier (and younger) days.
Late this afternoon at a Parisian airport, French authorities detained Pavel Durov, the founder of the Telegram messaging/publication service. They are allegedly planning to hit him tomorrow with serious charges related to abetting terrorism, fraud, money laundering, and crimes against children, all of it apparently stemming from a near-total lack of moderation on Telegram. According to French authorities, thanks to its encryption and support for crypto, Telegram has become the new top tool for organized crime.
The French outlet TF1 had the news first from sources within the investigation. (Reuters and CNN have since run stories as well.) Their source said, “Pavel Durov will definitely end up in pretrial detention. On his platform, he allowed an incalculable number of offenses and crimes to be committed, which he does nothing to moderate nor does he cooperate.”
Durov is a 39-year-old who gained a fortune by building VKontakte, a Russian version of Facebook, before being forced out of his company by the Kremlin. He left Russia and went on to start Telegram, which became widely popular, especially in Europe. He was arrested today when his private plane flew from Azerbaijan to Paris’s Le Bourget Airport.
Telegram has become a crucial news outlet for Russians, as it is one of the few uncensored ways to hear non-Kremlin propaganda from within Russia. It has also become the top outlet for nationalistic Russian “milbloggers” writing about the Ukraine war. Durov’s arrest has already led to outright panic among many of them, in part due to secrets it might reveal—but also because it is commonly used by Russian forces to communicate.
As Rob Lee, a senior fellow at the Foreign Policy Research Institute, noted tonight, “A popular Russian channel says that Telegram is also used by Russian forces to communicate, and that if Western intelligence services gain access to it, they could obtain sensitive information about the Russian military.”
Right-wing and crypto influencers are likewise angry over the arrest, writing things like, “This is a serious attack on freedom. Today, they target an app that promotes liberty; tomorrow, they will go after DeFi. If you claim to support crypto, you must show your support. #FreeDurov. It’s time for digital resistance.”
Durov appears to be an old-school cyber-libertarian who believes in privacy and encryption. His arrest will certainly resonate in America, which has seen a similar debate over how much online services should cooperate with law enforcement. The FBI, for instance, has occasionally warned that end-to-end encryption will result in a “going dark” problem in which crime simply disappears from their view, and the US has seen repeated attempts to legislate backdoors into encryption systems. Those have all been defeated, however, and civil liberties advocates and techies generally note that creating backdoors makes such systems fundamentally insecure. The global debate over crime, encryption, civil liberties, and messaging apps is sure to heat up with Durov’s arrest.
Microsoft is stepping up its plans to make Windows more resilient to buggy software after a botched CrowdStrike update took down millions of PCs and servers in a global IT outage.
The tech giant has in the past month intensified talks with partners about adapting the security procedures around its operating system to better withstand the kind of software error that crashed 8.5 million Windows devices on July 19.
Critics say that any changes by Microsoft would amount to a concession of shortcomings in Windows’ handling of third-party security software that could have been addressed sooner.
Yet they would also prove controversial among security vendors that would have to make radical changes to their products, and force many Microsoft customers to adapt their software.
Last month’s outages—which are estimated to have caused billions of dollars in damages after grounding thousands of flights and disrupting hospital appointments worldwide—heightened scrutiny from regulators and business leaders over the extent of access that third-party software vendors have to the core, or kernel, of Windows operating systems.
Microsoft will host a summit next month for government representatives and cyber security companies, including CrowdStrike, to “discuss concrete steps we will all take to improve security and resiliency for our joint customers,” Microsoft said on Friday.
The gathering will take place on September 10 at Microsoft’s headquarters near Seattle, it said in a blog post.
Bugs in the kernel can quickly crash an entire operating system, triggering the millions of “blue screens of death” that appeared around the globe after CrowdStrike’s faulty software update was sent out to clients’ devices.
Microsoft told the Financial Times it was considering several options to make its systems more stable and had not ruled out completely blocking access to the Windows kernel—an option some rivals fear would put their software at a disadvantage to the company’s internal security product, Microsoft Defender.
“All of the competitors are concerned that [Microsoft] will use this to prefer their own products over third-party alternatives,” said Ryan Kalember, head of cyber security strategy at Proofpoint.
Microsoft may also demand new testing procedures from cyber security vendors rather than adapting the Windows system itself.
Apple, which was not hit by the outages, blocks all third-party providers from accessing the kernel of its macOS operating system, forcing them to operate in the more limited “user mode.”
Microsoft has previously said it could not do the same, after coming to an understanding with the European Commission in 2009 that it would give third parties the same access to its systems as that for Microsoft Defender.
Some experts said, however, that this voluntary commitment to the EU had not tied Microsoft’s hands in the way it claimed, arguing that the company had always been free to make the changes now under consideration.
“These are technical decisions of Microsoft that were not part of [the arrangement],” said Thomas Graf, a partner at Cleary Gottlieb in Brussels who was involved in the case.
“The text [of the understanding] does not require them to give access to the kernel,” added AJ Grotto, a former senior director for cyber security policy at the White House.
Grotto said Microsoft shared some of the blame for the July disruption since the outages would not have been possible without its decision to allow access to the kernel.
Nevertheless, while it might boost a system’s resilience, blocking kernel access could also bring “real trade-offs” for the compatibility with other software that had made Windows so popular among business customers, Forrester analyst Allie Mellen said.
“That would be a fundamental shift for Microsoft’s philosophy and business model,” she added.
Operating exclusively outside the kernel may lower the risk of triggering mass outages but it was also “very limiting” for security vendors and could make their products “less effective” against hackers, Mellen added.
Operating within the kernel gave security companies more information about potential threats and enabled their defensive tools to activate before malware could take hold, she added.
An alternative option could be to replicate the model used by the open-source operating system Linux, which uses a filtering mechanism that creates a segregated environment within the kernel in which software, including cyber defense tools, can run.
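The filtering mechanism being referenced is eBPF, which verifies a small sandboxed program before the kernel will run it, so a buggy monitor fails to load rather than crashing the machine. As a rough illustration, here is a minimal sketch using the BCC toolkit (assuming bcc is installed and the script runs as root on Linux); it logs process executions, the kind of telemetry an endpoint agent collects, without shipping a kernel driver.

```python
from bcc import BPF  # third-party BCC eBPF toolkit; requires root on Linux

# A tiny eBPF program. The kernel's verifier checks it before it runs,
# so a bug here is rejected at load time rather than crashing the host.
PROG = r"""
int trace_exec(struct pt_regs *ctx) {
    bpf_trace_printk("exec observed\n");
    return 0;
}
"""

b = BPF(text=PROG)
# Attach to the execve syscall entry; BCC resolves the per-kernel symbol name.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")

print("Tracing execve calls... Ctrl-C to quit")
b.trace_print()  # stream the in-kernel trace output to the console
```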
But the complexity of overhauling how other security software works with Windows means that any changes will be hard for regulators to police and Microsoft will have strong incentives to favor its own products, rivals said.
It “sounds good on paper, but the devil is in the details,” said Matthew Prince, chief executive of digital services group Cloudflare.
Dr. Emmanouil “Manos” Antonakakis runs a Georgia Tech cybersecurity lab and has attracted millions of dollars in the last few years from the US government for Department of Defense research projects like “Rhamnousia: Attributing Cyber Actors Through Tensor Decomposition and Novel Data Acquisition.”
The government yesterday sued Georgia Tech in federal court, singling out Antonakakis and claiming that neither he nor Georgia Tech followed basic (and required) security protocols for years, knew they were not in compliance with such protocols, and then submitted invoices for their DoD projects anyway. (Read the complaint.) The government claims this is fraud:
At bottom, DoD paid for military technology that Defendants stored in an environment that was not secure from unauthorized disclosure, and Defendants failed to even monitor for breaches so that they and DoD could be alerted if information was compromised. What DoD received for its funds was of diminished or no value, not the benefit of its bargain.
AV hate
Given the nature of his work for DoD, Antonakakis and his lab are required to abide by many sets of security rules, including those outlined in NIST Special Publication 800-171, “Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations.”
One of the rules says that machines storing or accessing such “controlled unclassified information” need to have endpoint antivirus software installed. But according to the US government, Antonakakis really, really doesn’t like putting AV detection software on his lab’s machines.
Georgia Tech admins asked him to comply with the requirement, but according to an internal 2019 email, Antonakakis “wasn’t receptive to such a suggestion.” In a follow-up email, Antonakakis himself said that “endpoint [antivirus] agent is a nonstarter.”
According to the government, “Other than Dr. Antonakakis’s opposition, there was nothing preventing the lab from running antivirus protection. Dr. Antonakakis simply did not want to run it.”
The IT director for Antonakakis’ lab was allowed to use other “mitigating measures” instead, such as relying on the school’s firewall for additional security. The IT director said that he thought Georgia Tech ran antivirus scans from its network. However, this “assumption” turned out to be completely wrong; the school’s network “has never provided” antivirus protection and, even if it had, the lab used laptops that were regularly taken outside the network perimeter.
The school realized after some time that the lab was not in compliance with the DoD contract rules, so an administrator decided to “suspend invoicing” on the lab’s contracts so that the school would not be charged with filing false claims.
According to the government, “Within a few days of the invoicing for his contracts being suspended, Dr. Antonakakis relented on his years-long opposition to the installation of antivirus software in the Astrolavos Lab. Georgia Tech’s standard antivirus software was installed throughout the lab.”
But, says the government, the school never acknowledged that it had been out of compliance for some time and that it had filed numerous invoices while noncompliant. In the government’s telling, this is fraud.
Newly discovered Android malware steals payment card data using an infected device’s NFC reader and relays it to attackers, a novel technique that effectively clones the card so it can be used at ATMs or point-of-sale terminals, security firm ESET said.
ESET researchers have named the malware NGate because it incorporates NFCGate, an open source tool for capturing, analyzing, or altering NFC traffic. Short for Near-Field Communication, NFC is a protocol that allows two devices to wirelessly communicate over short distances.
New Android attack scenario
“This is a new Android attack scenario, and it is the first time we have seen Android malware with this capability being used in the wild,” ESET researcher Lukas Stefanko said in a video demonstrating the discovery. “NGate malware can relay NFC data from a victim’s card through a compromised device to an attacker’s smartphone, which is then able to emulate the card and withdraw money from an ATM.”
Lukas Stefanko—Unmasking NGate.
The malware was installed through traditional phishing scenarios, such as the attacker messaging targets and tricking them into installing NGate from short-lived domains that impersonated the banks or official mobile banking apps available on Google Play. Masquerading as a legitimate app for a target’s bank, NGate prompts the user to enter the banking client ID, date of birth, and the PIN code corresponding to the card. The app goes on to ask the user to turn on NFC and to scan the card.
ESET said it discovered NGate being used against three Czech banks starting in November and identified six separate NGate apps circulating between then and March of this year. Some of the apps used in later months of the campaign came in the form of PWAs, short for Progressive Web Apps, which as reported Thursday can be installed on both Android and iOS devices even when settings (mandatory on iOS) prevent the installation of apps available from non-official sources.
The most likely reason the NGate campaign ended in March, ESET said, was the arrest by Czech police of a 22-year-old they said they caught wearing a mask while withdrawing money from ATMs in Prague. Investigators said the suspect had “devised a new way to con people out of money” using a scheme that sounds identical to the one involving NGate.
Stefanko and fellow ESET researcher Jakub Osmani explained how the attack worked:
The announcement by the Czech police revealed the attack scenario started with the attackers sending SMS messages to potential victims about a tax return, including a link to a phishing website impersonating banks. These links most likely led to malicious PWAs. Once the victim installed the app and inserted their credentials, the attacker gained access to the victim’s account. Then the attacker called the victim, pretending to be a bank employee. The victim was informed that their account had been compromised, likely due to the earlier text message. The attacker was actually telling the truth – the victim’s account was compromised, but this truth then led to another lie.
To “protect” their funds, the victim was requested to change their PIN and verify their banking card using a mobile app – NGate malware. A link to download NGate was sent via SMS. We suspect that within the NGate app, the victims would enter their old PIN to create a new one and place their card at the back of their smartphone to verify or apply the change.
Since the attacker already had access to the compromised account, they could change the withdrawal limits. If the NFC relay method didn’t work, they could simply transfer the funds to another account. However, using NGate makes it easier for the attacker to access the victim’s funds without leaving traces back to the attacker’s own bank account. A diagram of the attack sequence is shown in Figure 6.
The researchers said NGate or apps similar to it could be used in other scenarios, such as cloning some smart cards used for other purposes. The attack would work by copying the unique ID of the NFC tag, abbreviated as UID.
“During our testing, we successfully relayed the UID from a MIFARE Classic 1K tag, which is typically used for public transport tickets, ID badges, membership or student cards, and similar use cases,” the researchers wrote. “Using NFCGate, it’s possible to perform an NFC relay attack to read an NFC token in one location and, in real time, access premises in a different location by emulating its UID, as shown in Figure 7.”
Figure 7: Android smartphone (right) that read and relayed an external NFC token’s UID to another device (left). (credit: ESET)
The cloning could occur in situations where the attacker has physical access to a card or is able to briefly read a card in unattended purses, wallets, backpacks, or smartphone cases holding cards. Performing and emulating such attacks requires the attacker to have a rooted and customized Android device. Phones that were infected by NGate didn’t have this requirement.
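Reading a tag’s UID, the first step in the relay ESET describes, takes only a few lines with an NFC library and a USB reader. The sketch below, using the open source nfcpy library, is purely illustrative of how exposed that identifier is: it reads a UID locally and does no relaying or emulation.

```python
import nfc  # third-party: pip install nfcpy; needs a supported USB NFC reader

def on_connect(tag):
    """Called by nfcpy when a tag enters the reader's field; print its UID."""
    print("Tag UID:", tag.identifier.hex())
    return True  # keep holding the tag until it is removed

# Open the first available contactless reader and wait for a single tag.
with nfc.ContactlessFrontend("usb") as clf:
    clf.connect(rdwr={"on-connect": on_connect})
```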
Phishers are using a novel technique to trick iOS and Android users into installing malicious apps that bypass safety guardrails built by both Apple and Google to prevent unauthorized apps.
Both mobile operating systems employ mechanisms designed to help users steer clear of apps that steal their personal information, passwords, or other sensitive data. iOS bars the installation of all apps other than those available in its App Store, an approach widely known as the Walled Garden. Android, meanwhile, is set by default to allow only apps available in Google Play. Sideloading—or the installation of apps from other markets—must be manually allowed, something Google warns against.
When native apps aren’t
Phishing campaigns making the rounds over the past nine months are using previously unseen ways to work around these protections. The objective is to trick targets into installing a malicious app that masquerades as an official one from the targets’ bank. Once installed, the malicious app steals account credentials and sends them to the attacker in real time over Telegram.
“This technique is noteworthy because it installs a phishing application from a third-party website without the user having to allow third-party app installation,” Jakub Osmani, an analyst with security firm ESET, wrote Tuesday. “For iOS users, such an action might break any ‘walled garden’ assumptions about security. On Android, this could result in the silent installation of a special kind of APK, which on further inspection even appears to be installed from the Google Play store.”
The novel method involves enticing targets to install a special type of app known as a Progressive Web App. These apps rely solely on Web standards to render functionalities that have the feel and behavior of a native app, without the restrictions that come with them. The reliance on Web standards means PWAs, as they’re abbreviated, will in theory work on any platform running a standards-compliant browser, making them work equally well on iOS and Android. Once installed, users can add PWAs to their home screen, giving them a striking similarity to native apps.
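To make the mechanics concrete, here is a minimal sketch, unrelated to the campaign, of the handful of Web-standard pieces that make a site installable: a manifest, a service worker, and a page that links both. Served from a secure context, something like this is what a browser will offer to “install”; all names and values here are illustrative.

```python
import http.server
import json

# A minimal installable PWA: manifest + service worker + page linking both.
# (A production PWA also needs HTTPS and icons; values here are illustrative.)
MANIFEST = json.dumps({
    "name": "Demo PWA",
    "start_url": "/",
    "display": "standalone",  # hides browser chrome so it feels native
})

INDEX = b"""<!doctype html>
<link rel="manifest" href="/manifest.json">
<script>navigator.serviceWorker.register('/sw.js');</script>
<h1>Minimal PWA</h1>"""

SERVICE_WORKER = b"self.addEventListener('fetch', () => {});"

class Handler(http.server.BaseHTTPRequestHandler):
    ROUTES = {
        "/": ("text/html", INDEX),
        "/manifest.json": ("application/manifest+json", MANIFEST.encode()),
        "/sw.js": ("text/javascript", SERVICE_WORKER),
    }

    def do_GET(self):
        ctype, body = self.ROUTES.get(self.path, (None, None))
        if body is None:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    http.server.HTTPServer(("localhost", 8000), Handler).serve_forever()
```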
While PWAs can apply to both iOS and Android, Osmani’s post uses PWA to apply to iOS apps and WebAPK to Android apps.
Installed phishing PWA (left) and real banking app (right). (credit: ESET)
Comparison between an installed phishing WebAPK (left) and real banking app (right). (credit: ESET)
The attack begins with a message sent either by text message, automated call, or through a malicious ad on Facebook or Instagram. When targets click on the link in the scam message, they open a page that looks similar to the App Store or Google Play.
Example of a malicious advertisement used in these campaigns. (credit: ESET)
Phishing landing page imitating Google Play. (credit: ESET)
ESET’s Osmani continued:
From here victims are asked to install a “new version” of the banking application; an example of this can be seen in Figure 2. Depending on the campaign, clicking on the install/update button launches the installation of a malicious application from the website, directly on the victim’s phone, either in the form of a WebAPK (for Android users only), or as a PWA for iOS and Android users (if the campaign is not WebAPK based). This crucial installation step bypasses traditional browser warnings of “installing unknown apps”: this is the default behavior of Chrome’s WebAPK technology, which is abused by the attackers.
Example copycat installation page. (credit: ESET)
The process is a little different for iOS users, as an animated pop-up instructs victims how to add the phishing PWA to their home screen (see Figure 3). The pop-up copies the look of native iOS prompts. In the end, even iOS users are not warned about adding a potentially harmful app to their phone.
Figure 3: iOS pop-up instructions after clicking “Install.” (credit: Michal Bláha)
After installation, victims are prompted to submit their Internet banking credentials to access their account via the new mobile banking app. All submitted information is sent to the attackers’ C&C servers.
The technique is made all the more effective because application information associated with the WebAPKs will show they were installed from Google Play and have been assigned no system privileges.
WebAPK info menu—notice the “No Permissions” at the top and “App details in store” section at the bottom. (credit: ESET)
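You can check for this deceptive attribution yourself on a test device. The sketch below shells out to adb (assuming a connected Android device with USB debugging enabled) and lists WebAPK packages, which Chrome generates under the org.chromium.webapk prefix, along with the installer each one reports; the output parsing is a best-effort assumption about pm’s format.

```python
import subprocess

def list_webapks():
    """List WebAPK packages and the installer each reports to Android."""
    # `pm list packages -i` prints lines like:
    #   package:org.chromium.webapk.abc123  installer=com.android.vending
    out = subprocess.run(
        ["adb", "shell", "pm", "list", "packages", "-i"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "org.chromium.webapk" in line:
            print(line.strip())  # installer often reads as the Play Store

if __name__ == "__main__":
    list_webapks()
```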
So far, ESET is aware of the technique being used against customers of banks mostly in Czechia and less so in Hungary and Georgia. The attacks used two distinct command-and-control infrastructures, an indication that two different threat groups are using the technique.
“We expect more copycat applications to be created and distributed, since after installation it is difficult to separate the legitimate apps from the phishing ones,” Osmani said.
In business, it’s not uncommon to take a software-as-a-service (SaaS)-first approach. It makes sense—there’s no need to deal with the infrastructure, management, patching, and hardening. You just turn on the SaaS app and let it do its thing.
But there are some downsides to that approach.
The Problem with SaaS
While SaaS has many benefits, it also introduces a host of new challenges, many of which don’t get the coverage they warrant. At the top of the list of challenges is security. So, while there are some very real benefits of SaaS, it’s also important to recognize the security risk that comes with it. When we talk about SaaS security, we’re not usually talking about the security of the underlying platform, but rather how we use it.
Remember, it’s not you, it’s me!
The Shared Responsibility Model
In the terms and conditions of most SaaS platforms is the “shared responsibility model.” What it usually says is that the SaaS vendor is responsible for providing a platform that is robust, resilient, and reliable—but they don’t take responsibility for how you use and configure it. And it is in these configuration changes that the security challenge lives.
SaaS platforms often come with multiple configuration options, such as ways to share data, ways to invite external users, how users can access the platform, what parts of the platform they can use, and so on. And every configuration change, every nerd knob turned, is the potential to take the platform away from its optimum security configuration or introduce an unexpected capability. While some applications, like Microsoft 365, offer guidance on security settings, this is not true for all of them. Even if they do, how easy is that to manage when you get to 10, 20, or even 100 SaaS apps?
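The core of what posture tooling automates is easy to picture: compare each app’s live settings against a security baseline and flag the drift. Here’s a minimal, vendor-agnostic sketch; the setting names and the fetch function are placeholders, not any real SaaS API.

```python
# Hypothetical security baseline; real ones come from vendor hardening guides.
BASELINE = {
    "external_sharing": "disabled",
    "mfa_required": True,
    "guest_access": "invite_only",
}

def fetch_live_settings(app_name):
    """Placeholder: a real tool would call the vendor's admin API here."""
    return {
        "external_sharing": "anyone_with_link",  # a drifted setting
        "mfa_required": True,
        "guest_access": "invite_only",
    }

def find_drift(app_name):
    """Return {setting: (expected, actual)} for every out-of-baseline value."""
    live = fetch_live_settings(app_name)
    return {
        key: (expected, live.get(key))
        for key, expected in BASELINE.items()
        if live.get(key) != expected
    }

for app in ("crm", "chat", "file_share"):
    for setting, (want, got) in find_drift(app).items():
        print(f"{app}: {setting} should be {want!r} but is {got!r}")
```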
Too Many Apps
Do you know how many SaaS apps you have? It’s not the SaaS apps you know about that are the issue, it’s the ones you don’t. Because SaaS is so accessible, it can easily evade management. There are apps that people use but an organization may not be aware of—like the app the sales team signed up for, that thing that marketing uses, and of course, everyone wants a GenAI app to play with. But these aren’t the only ones; there are also the apps that are part of the SaaS platforms you sign up for. Yes, even the ones you know about can contain additional apps you don’t know about. This is how an average enterprise gets to more than 100 SaaS applications. How do you manage each of those? How do you ensure you know they exist and they are configured in a way that meets good security practices and protects your information? Therein lies the challenge.
Introducing SSPM
SSPM can be the answer. It is designed to integrate with your managed SaaS applications to provide visibility into how they are configured, where configurations present risks, and how to address them. It continually monitors them for new threats and for configuration changes that introduce risk. It also discovers unmanaged SaaS applications in use and evaluates their posture, presenting risk profiles of both the application and the SaaS vendor itself. In short, it centralizes the management and security of a SaaS estate and shows where its configuration presents risk.
Overlap with CASB and DLP
There is some overlap in the market, particularly with cloud access security broker (CASB) and data loss prevention (DLP) tools. But these tools are a bit like capturing the thief as he runs down the driveway, rather than making sure the doors and windows were secured in the first place.
SSPM is yet another security tool to manage and pay for. But is it a tool we need? Well, that is up to you; however, our use of SaaS, for all the benefits it brings, has brought a new complexity and a new set of risks. We have so many more apps than we have ever had, many of them we don’t manage centrally, and they have many configuration knobs to turn. Without oversight of them all, we do run security risks.
Next Steps
SaaS security posture management (SSPM) is another entry into the growing catalog of security posture management tools. They are often easy to try out, and many offer free assessments that can give you an idea of the scale of the challenge you face. SaaS security is tricky and often does not get the coverage it deserves, so getting an idea of where you stand could be helpful.
Before you find yourself on the wrong end of a security incident and your SaaS vendor tells you it’s you, not me, it may be worth seeing what an SSPM tool can do for you. To learn more, take a look at GigaOm’s SSPM Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
Roger Stone, former adviser to Donald Trump’s presidential campaign, center, during the Republican National Convention (RNC) in Milwaukee on July 17, 2024. (credit: Getty Images)
Google’s Threat Analysis Group confirmed Wednesday that it observed a threat actor backed by the Iranian government targeting Google accounts associated with US presidential campaigns, in addition to stepped-up attacks on Israeli targets.
APT42, associated with Iran’s Islamic Revolutionary Guard Corps, “consistently targets high-profile users in Israel and the US,” the Threat Analysis Group (TAG) writes. The Iranian group uses hosted malware, phishing pages, malicious redirects, and other tactics to gain access to Google, Dropbox, OneDrive, and other cloud-based accounts. Google’s TAG writes that it reset accounts, sent warnings to users, and blacklisted domains associated with APT42’s phishing attempts.
Among APT42’s tools were Google Sites pages that appeared to be a petition from legitimate Jewish activists, calling on Israel to mediate its ongoing conflict with Hamas. The page was fashioned from image files, not HTML, and an ngrok redirect sent users to phishing pages when they moved to sign the petition.
A petition purporting to be from The Jewish Agency for Israel, seeking support for mediation measures—but signatures quietly redirect to phishing sites, according to Google. (credit: Google)
In the US, Google’s TAG notes that, as with the 2020 elections, APT42 is actively targeting the personal emails of “roughly a dozen individuals affiliated with President Biden and former President Trump.” TAG confirms that APT42 “successfully gained access to the personal Gmail account of a high-profile political consultant,” which may be longtime Republican operative Roger Stone, as reported by The Guardian, CNN, and The Washington Post, among others. Microsoft separately noted last week that a “former senior advisor” to the Trump campaign had his Microsoft account compromised, which Stone also confirmed.
“Today, TAG continues to observe unsuccessful attempts from APT42 to compromise the personal accounts of individuals affiliated with President Biden, Vice President Harris and former President Trump, including current and former government officials and individuals associated with the campaigns,” Google’s TAG writes.
PDFs and phishing kits target both sides
Google’s post details the ways in which APT42 targets operatives in both parties. The broad strategy is to get the target off their email and into channels like Signal, Telegram, or WhatsApp, or possibly a personal email address that may not have two-factor authentication and threat monitoring set up. By establishing trust through sending legitimate PDFs, or luring them to video meetings, APT42 can then push links that use phishing kits with “a seamless flow” to harvest credentials from Google, Hotmail, and Yahoo.
After gaining a foothold, APT42 will often work to preserve its access by generating application-specific passwords inside the account, which typically bypass multifactor tools. Google notes that its Advanced Protection Program, intended for individuals at high risk of attack, disables such measures.
John Hultquist, with Google-owned cybersecurity firm Mandiant, told Wired’s Andy Greenberg that what looks initially like spying or political interference by Iran can easily escalate to sabotage and that both parties are equal targets. He also said that current thinking about threat vectors may need to expand.
“It’s not just a Russia problem anymore. It’s broader than that,” Hultquist said. “There are multiple teams in play. And we have to keep an eye out for all of them.”
Security flaws in your computer’s firmware, the deep-seated code that loads first when you turn the machine on and controls even how its operating system boots up, have long been a target for hackers looking for a stealthy foothold. But only rarely does that kind of vulnerability appear not in the firmware of any particular computer maker, but in the chips found across hundreds of millions of PCs and servers. Now security researchers have found one such flaw that has persisted in AMD processors for decades, and that would allow malware to burrow deep enough into a computer’s memory that, in many cases, it may be easier to discard a machine than to disinfect it.
At the Defcon hacker conference, Enrique Nissim and Krzysztof Okupski, researchers from the security firm IOActive, plan to present a vulnerability in AMD chips they’re calling Sinkclose. The flaw would allow hackers to run their own code in one of the most privileged modes of an AMD processor, known as System Management Mode, designed to be reserved only for a specific, protected portion of its firmware. IOActive’s researchers warn that it affects virtually all AMD chips dating back to 2006, or possibly even earlier.
Nissim and Okupski note that exploiting the bug would require hackers to already have obtained relatively deep access to an AMD-based PC or server, but that the Sinkclose flaw would then allow them to plant their malicious code far deeper still. In fact, for any machine with one of the vulnerable AMD chips, the IOActive researchers warn that an attacker could infect the computer with malware known as a “bootkit” that evades antivirus tools and is potentially invisible to the operating system, while offering a hacker full access to tamper with the machine and surveil its activity. For systems with certain faulty configurations in how a computer maker implemented AMD’s security feature known as Platform Secure Boot—which the researchers warn encompasses the large majority of the systems they tested—a malware infection installed via Sinkclose could be harder yet to detect or remediate, they say, surviving even a reinstallation of the operating system.
“Imagine nation-state hackers or whoever wants to persist on your system. Even if you wipe your drive clean, it’s still going to be there,” says Okupski. “It’s going to be nearly undetectable and nearly unpatchable.” Only opening a computer’s case, physically connecting directly to a certain portion of its memory chips with a hardware-based programming tool known as an SPI Flash programmer, and meticulously scouring the memory would allow the malware to be removed, Okupski says.
Nissim sums up that worst-case scenario in more practical terms: “You basically have to throw your computer away.”
In a statement shared with WIRED, AMD acknowledged IOActive’s findings, thanked the researchers for their work, and noted that it has “released mitigation options for its AMD EPYC datacenter products and AMD Ryzen PC products, with mitigations for AMD embedded products coming soon.” (The term “embedded,” in this case, refers to AMD chips found in systems such as industrial devices and cars.) For its EPYC processors designed for use in data-center servers, specifically, the company noted that it released patches earlier this year. AMD declined to answer questions in advance about how it intends to fix the Sinkclose vulnerability, or for exactly which devices and when, but it pointed to a full list of affected products that can be found on its website’s security bulletin page.
When Ryan Castellucci recently acquired solar panels and a battery storage system for their home just outside of London, they were drawn to the ability to use an open source dashboard to monitor and control the flow of electricity being generated. Instead, they gained much, much more—some 200 megawatts of programmable capacity to charge or discharge to the grid at will. That’s enough energy to power roughly 40,000 homes.
Castellucci, whose pronouns are they/them, acquired this remarkable control after gaining access to the administrative account for GivEnergy, the UK-based energy management provider that supplied the systems. In addition to the control over an estimated 60,000 installed systems, the admin account—which amounts to root control of the company’s cloud-connected products—also made it possible for them to enumerate names, email addresses, usernames, phone numbers, and addresses of all other GivEnergy customers (something the researcher didn’t actually do).
“My plan is to set up Home Assistant and integrate it with that, but in the meantime, I decided to let it talk to the cloud,” Castellucci wrote Thursday, referring to the recently installed gear. “I set up some scheduled charging, then started experimenting with the API. The next evening, I had control over a virtual power plant comprised of tens of thousands of grid connected batteries.”
Still broken after all these years
The cause of the authentication bypass Castellucci discovered was a programming interface protected by an RSA cryptographic key of just 512 bits. The key signs authentication tokens and is the rough equivalent of a master key. The small key size allowed Castellucci to factor the private key underpinning the entire API, a job that required about $70 in cloud computing costs and less than 24 hours. GivEnergy introduced a fix within 24 hours of Castellucci privately disclosing the weakness.
The first publicly known instance of 512-bit RSA being factored came in 1999, by an international team of more than a dozen researchers. The feat took a supercomputer and hundreds of other computers seven months to carry out. By 2009, hobbyists needed about three weeks to factor 13 512-bit keys protecting firmware in Texas Instruments calculators from being copied. In 2015, researchers demonstrated factoring as a service, a method that used Amazon cloud computing, cost $75, and took about four hours. As processing power has increased, the resources required to factor keys have become ever smaller.
It’s tempting to fault GivEnergy engineers for pinning the security of its infrastructure on a key that’s trivial to break. Castellucci, however, said the responsibility is better assigned to the makers of code libraries developers rely on to implement complex cryptographic processes.
“Expecting developers to know that 512 bit RSA is insecure clearly doesn’t work,” the security researcher wrote. “They’re not cryptographers. This is not their job. The failure wasn’t that someone used 512 bit RSA. It was that a library they were relying on let them.”
Castellucci noted that OpenSSL, the most widely used cryptographic code library, still offers the option of using 512-bit keys. So does the Go crypto library. Coincidentally, the Python cryptography library removed the option only a few weeks ago (the commit for the change was made in January).
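One practical takeaway for developers is to enforce a floor on key sizes in application code rather than trusting library defaults. Here is a minimal sketch using the Python cryptography package; the 2048-bit floor reflects current NIST guidance, and the function name is ours.

```python
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import load_pem_public_key

MIN_RSA_BITS = 2048  # NIST has long disallowed shorter moduli for signatures

def load_checked_rsa_key(pem_bytes: bytes) -> rsa.RSAPublicKey:
    """Load an RSA public key, refusing any modulus below MIN_RSA_BITS."""
    key = load_pem_public_key(pem_bytes)
    if not isinstance(key, rsa.RSAPublicKey):
        raise TypeError("expected an RSA public key")
    if key.key_size < MIN_RSA_BITS:
        raise ValueError(
            f"RSA key is only {key.key_size} bits; "
            f"refusing anything under {MIN_RSA_BITS}"
        )
    return key
```

A check like this, run before any token signature is trusted, would have rejected a 512-bit signing key like GivEnergy’s outright.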
In an email, a GivEnergy representative reinforced Castellucci’s assessment, writing:
In this case, the problematic encryption approach was picked up via a 3rd party library many years ago, when we were a tiny startup company with only 2, fairly junior software developers & limited experience. Their assumption at the time was that because this encryption was available within the library, it was safe to use. This approach was passed through the intervening years and this part of the codebase was not changed significantly since implementation (so hadn’t passed through the review of the more experienced team we now have in place).
Federal authorities have arrested a Nashville man on charges he hosted laptops at his residences in a scheme to deceive US companies into hiring foreign remote IT workers who funneled hundreds of thousands of dollars in income to fund North Korea’s weapons program.
The scheme, federal prosecutors said, worked by getting US companies to unwittingly hire North Korean nationals, who used the stolen identity of a Georgia man to appear to be a US citizen. Under sanctions issued by the federal government, US employers are strictly forbidden from hiring citizens of North Korea. Once the North Korean nationals were hired, the employers sent company-issued laptops to Matthew Isaac Knoot, 38, of Nashville, Tennessee, the prosecutors said in court papers filed in the US District Court of the Middle District of Tennessee. The court documents also said a foreign national with the alias Yang Di was involved in the conspiracy.
The prosecutors wrote:
As part of the conspiracy, Knoot received and hosted laptop computers issued by US companies to Andrew M. at Knoot’s Nashville, Tennessee residences for the purposes of deceiving the companies into believing that Andrew M. was located in the United States. Following receipt of the laptops and without authorization, Knoot logged on to the laptops, downloaded and installed remote desktop applications, and accessed without authorization the victim companies’ networks. The remote desktop applications enabled DI to work from locations outside the United states, in particular, China, while appearing to the victim companies that Andre M. was working from Knoot’s residences. In exchange, Knoot charged Di monthly fees for his services, including flat rates for each hosted laptop and a percentage of Di’s salary for IT work, enriching himself off the scheme.
The arrest comes two weeks after security-training company KnowBe4 said it unknowingly hired a North Korean national using a fake identity to appear as someone eligible to fill a position for a software engineer for an internal IT AI team. KnowBe4’s security team soon became suspicious of the new hire after detecting “anomalous activity,” including manipulating session history files, transferring potentially harmful files, and executing unauthorized software.
The North Korean national was hired even after KnowBe4 conducted background checks, verified references, and conducted four video interviews while he was an applicant. The fake applicant was able to stymie those checks by using a stolen identity and a photo that was altered with AI tools to create a fake profile picture and mimic the face during video conference calls.
In May federal prosecutors charged an Arizona woman for allegedly raising $6.8 million in a similar scheme to fund the weapons program. The defendant in that case, Christina Marie Chapman, 49, of Litchfield Park, Arizona, and co-conspirators compromised the identities of more than 60 people living in the US and used their personal information to get North Koreans IT jobs across more than 300 US companies.
The FBI and Departments of State and Treasury issued a May 2022 advisory alerting the international community, private sector, and public of a campaign underway to land North Korean nationals IT jobs in violation of many countries’ laws. US and South Korean officials issued updated guidance in October 2023 and again in May 2024. The advisories include signs that may indicate North Korea IT worker fraud and the use of US-based laptop farms.
The North Korean IT workers using Knoot’s laptop farm generated revenue of more than $250,000 each between July 2022 and August 2023. Much of the funds were then funneled to North Korea’s weapons program, which includes weapons of mass destruction, prosecutors said.
Knoot faces charges, including wire fraud, intentional damage to protected computers, aggravated identity theft, and conspiracy to cause the unlawful employment of aliens. If found guilty, he faces a maximum of 20 years in prison.
For a true representation of the people-search industry, a couple of these folks should have lanyards that connect them by the pockets. (credit: Getty Images)
If you’ve searched your name online in the last few years, you know what’s out there, and it’s bad. Alternatively, you’ve seen the lowest-common-denominator ads begging you to search out people from your past to see what crimes are on their record. People-search sites are a gross loophole in the public records system, and it doesn’t feel like there’s much you can do about it.
Not that some firms haven’t promised to try. Do they work? Not really, Consumer Reports (CR) suggests in a recent study.
“[O]ur study shows that many of these services fall short of providing the kind of help and performance you’d expect, especially at the price levels some of them are charging,” said Yael Grauer, program manager for CR, in a statement.
Consumer Reports’ study asked 32 volunteers for permission to try to delete their personal data from 13 people-search sites, using seven services over four months. The services, including DeleteMe, Reputation Defender from Norton, and Confidently, were also compared to “Manual opt-outs,” i.e. following the tucked-away links to pull down that data on each people-search site. CR took volunteers from California, in which the California Consumer Privacy Act should theoretically make it mandatory for brokers to respond to opt-out requests, and in New York, with no such law, to compare results.
Table from Consumer Reports’ study of people-search removal services, showing effective removal rates over time for each service.
Finding a total of 332 profiles containing volunteers’ identifying information across those sites, Consumer Reports found that only 117 profiles, or 35 percent, were removed within four months using all the services. The services varied in efficacy, with EasyOptOuts the standout at a 65 percent removal rate after four months, second only to manual opt-outs. But if your goal is to entirely remove others’ ability to find out about you, no service Consumer Reports tested truly gets you there.
Manual opt-outs were the most effective removal method, at 70 percent removed within one week, which is both a higher elimination rate and a quicker turnaround than all the automated services.
The study noted close ties between the people-search sites and the services that purport to clean them. Removing one volunteer’s data from ClustrMaps resulted in a page with a suggested “Next step”: signing up for privacy protection service OneRep. Firefox-maker Mozilla dropped OneRep as a service provider for its Mozilla Monitor Plus privacy bundle after reporting by Brian Krebs found that OneRep’s CEO had notable ties to the people-search industry.
In releasing this study, CR also advocates for laws at the federal and state level, like California’s Delete Act, that would make people-search removal far easier than manually scouring the web or paying for incomplete monitoring.
CR’s study cites CheckPeople, PublicDataUSA, and Intelius as the least responsive businesses in one of the least responsive industries, while noting that PeopleFinders, ClustrMaps, and ThatsThem deserve some very tiny, nearly inaudible recognition for complying with opt-out requests (our words, not theirs).