Inventor of NTP protocol that keeps time on billions of devices dies at age 85

A legend in his own time —

Dave Mills created NTP, the protocol that holds the temporal Internet together, in 1985.

A photo of David L. Mills taken by David Woolley on April 27, 2005.

David Woolley / Benj Edwards / Getty Images

On Thursday, Internet pioneer Vint Cerf announced that Dr. David L. Mills, the inventor of Network Time Protocol (NTP), died peacefully at age 85 on January 17, 2024. The announcement came in a post on the Internet Society mailing list after Cerf was informed of David’s death by Mills’ daughter, Leigh.

“He was such an iconic element of the early Internet,” wrote Cerf.

Dr. Mills created the Network Time Protocol (NTP) in 1985 to address a crucial challenge in the online world: the synchronization of time across different computer systems and networks. In a digital environment where computers and servers are located all over the world, each with its own internal clock, there’s a significant need for a standardized and accurate timekeeping system.

NTP provides the solution by allowing clocks of computers over a network to synchronize to a common time source. This synchronization is vital for everything from data integrity to network security. For example, NTP keeps network financial transaction timestamps accurate, and it ensures accurate and synchronized timestamps for logging and monitoring network activities.
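
For a sense of how simple the protocol's basic exchange is, here is a minimal SNTP-style client sketch in Python. It is a toy that queries the public pool.ntp.org servers and reads only the transmit timestamp; it is not Mills' reference implementation, which also handles clock discipline, jitter, and server selection.

import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"   # public server pool (assumed reachable)
NTP_TO_UNIX = 2208988800      # seconds between the 1900 NTP epoch and the 1970 Unix epoch

# 48-byte request: LI=0, version=3, mode=3 (client)
request = b"\x1b" + 47 * b"\x00"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(request, (NTP_SERVER, 123))
    response, _ = sock.recvfrom(48)

# The server's transmit timestamp (seconds since 1900) sits at bytes 40-43
seconds = struct.unpack("!I", response[40:44])[0] - NTP_TO_UNIX
print("NTP server time:", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(seconds)))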

In the 1970s, during his tenure at COMSAT and involvement with ARPANET (the precursor to the Internet), Mills first identified the need for synchronized time across computer networks. His solution aligned computers to within tens of milliseconds. NTP now operates on billions of devices worldwide, coordinating time across every continent, and has become a cornerstone of modern digital infrastructure.

As detailed in an excellent 2022 New Yorker profile by Nate Hopper, Mills faced significant challenges in maintaining and evolving the protocol, especially as the Internet grew in scale and complexity. His work highlighted the often under-appreciated role of key open source software developers (a topic explored quite well in a 2020 xkcd comic). Mills was born with glaucoma and lost his sight, eventually becoming completely blind. Due to difficulties with his sight, Mills turned over control of the protocol to Harlan Stenn in the 2000s.

A screenshot of Dr. David L. Mills' website at the University of Delaware captured on January 19, 2024.

Enlarge / A screenshot of Dr. David L. Mills’ website at the University of Delaware captured on January 19, 2024.

Aside from his work on NTP, Mills also invented the first “Fuzzball router” for NSFNET (one of the first modern routers, based on the DEC PDP-11 computer), created one of the first implementations of FTP, inspired the creation of “ping,” and played a key role in Internet architecture as the first chairman of the Internet Architecture Task Force.

Mills was widely recognized for his work, becoming a Fellow of the Association for Computing Machinery in 1999 and the Institute of Electrical and Electronics Engineers in 2002, as well as receiving the IEEE Internet Award in 2013 for contributions to network protocols and timekeeping in the development of the Internet.

Mills received his PhD in Computer and Communication Sciences from the University of Michigan in 1971. At the time of his death, Mills was an emeritus professor at the University of Delaware, having retired in 2008 after teaching there for 22 years.


OpenAI opens the door for military uses but maintains AI weapons ban

Skynet deferred —

Despite new Pentagon collab, OpenAI won’t allow customers to “develop or use weapons” with its tools.

The OpenAI logo over a camouflage background.

On Tuesday, ChatGPT developer OpenAI revealed that it is collaborating with the United States Defense Department on cybersecurity projects and exploring ways to prevent veteran suicide, reports Bloomberg. OpenAI revealed the collaboration during an interview with the news outlet at the World Economic Forum in Davos. The AI company recently modified its policies, allowing for certain military applications of its technology, while maintaining prohibitions against using it to develop weapons.

According to Anna Makanju, OpenAI’s vice president of global affairs, “many people thought that [a previous blanket prohibition on military applications] would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world.” OpenAI removed terms from its service agreement that previously blocked AI use in “military and warfare” situations, but the company still upholds a ban on its technology being used to develop weapons or to cause harm or property damage.

Under the “Universal Policies” section of OpenAI’s Usage Policies document, section 2 says, “Don’t use our service to harm yourself or others.” The prohibition includes using its AI products to “develop or use weapons.” Changes to the terms that removed the “military and warfare” prohibitions appear to have been made by OpenAI on January 10.

The shift in policy appears to align OpenAI more closely with the needs of various governmental departments, including the possibility of preventing veteran suicides. “We’ve been doing work with the Department of Defense on cybersecurity tools for open-source software that secures critical infrastructure,” Makanju said in the interview. “We’ve been exploring whether it can assist with (prevention of) veteran suicide.”

The efforts mark a significant change from OpenAI’s original stance on military partnerships, Bloomberg says. Meanwhile, Microsoft Corp., a large investor in OpenAI, already has an established relationship with the US military through various software contracts.


As 2024 election looms, OpenAI says it is taking steps to prevent AI abuse

Don’t Rock the vote —

ChatGPT maker plans transparency for gen AI content and improved access to voting info.

A pixelated photo of Donald Trump.

On Monday, ChatGPT maker OpenAI detailed its plans to prevent the misuse of its AI technologies during the upcoming elections in 2024, promising transparency in AI-generated content and enhancing access to reliable voting information. The AI developer says it is working on an approach that involves policy enforcement, collaboration with partners, and the development of new tools aimed at classifying AI-generated media.

“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” writes OpenAI in its blog post. “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process.”

Initiatives proposed by OpenAI include preventing abuse by means such as deepfakes or bots imitating candidates, refining usage policies, and launching a reporting system for the public to flag potential abuses. For example, OpenAI’s image generation tool, DALL-E 3, includes built-in filters that reject requests to create images of real people, including politicians. “For years, we’ve been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests,” the company stated.

OpenAI says it regularly updates its Usage Policies for ChatGPT and its API products to prevent misuse, especially in the context of elections. The organization has implemented restrictions on using its technologies for political campaigning and lobbying until it better understands the potential for personalized persuasion. Also, OpenAI prohibits creating chatbots that impersonate real individuals or institutions and disallows the development of applications that could deter people from “participation in democratic processes.” Users can report GPTs that may violate the rules.

OpenAI claims to be proactively engaged in detailed strategies to safeguard its technologies against misuse. According to its statements, this includes red-teaming new systems to anticipate challenges, engaging with users and partners for feedback, and implementing robust safety mitigations. OpenAI asserts that these efforts are integral to its mission of continually refining AI tools for improved accuracy, reduced biases, and responsible handling of sensitive requests.

Regarding transparency, OpenAI says it is advancing its efforts in classifying image provenance. The company plans to embed digital credentials, using cryptographic techniques, into images produced by DALL-E 3 as part of its adoption of standards by the Coalition for Content Provenance and Authenticity. Additionally, OpenAI says it is testing a tool designed to identify DALL-E-generated images.

In an effort to connect users with authoritative information, particularly concerning voting procedures, OpenAI says it has partnered with the National Association of Secretaries of State (NASS) in the United States. ChatGPT will direct users to CanIVote.org for verified US voting information.

“We want to make sure that our AI systems are built, deployed, and used safely,” writes OpenAI. “Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”


Famous xkcd comic comes full circle with AI bird-identifying binoculars

Who watches the bird watchers —

Swarovski AX Visio, billed as first “smart binoculars,” names species and tracks location.

The Swarovski Optik Visio binoculars, with an excerpt of a 2014 xkcd comic strip called “Tasks” in the corner.

xkcd / Swarovski

Last week, Austria-based Swarovski Optik introduced the AX Visio 10×32 binoculars, which the company says can identify over 9,000 species of birds and mammals using image recognition technology. The company is calling the product the world’s first “smart binoculars,” and they come with a hefty price tag—$4,799.

“The AX Visio are the world’s first AI-supported binoculars,” the company says in the product’s press release. “At the touch of a button, they assist with the identification of birds and other creatures, allow discoveries to be shared, and offer a wide range of practical extra functions.”

The binoculars, aimed mostly at bird watchers, gain their ability to identify birds from the Merlin Bird ID project, created by the Cornell Lab of Ornithology. As confirmed by a hands-on demo conducted by The Verge, the user looks at an animal through the binoculars and presses a button. A red progress circle fills in while the binoculars process the image, then the identified animal name pops up on the built-in binocular HUD screen within about five seconds.
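
Species identification of this kind is now a standard image-classification task. As a rough sketch of the capture-classify-display loop, here is what the pipeline might look like with a generic pretrained ImageNet model in Python; the AX Visio's actual Merlin-based model and on-device API are not public, so the model choice and file name here are illustrative assumptions.

import torch
from torchvision import models
from PIL import Image

# A generic pretrained classifier stands in for the binoculars' NPU model.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

# "frame.jpg" is a hypothetical still captured through the optics.
image = Image.open("frame.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probabilities = model(batch).softmax(dim=1)

# Show the top three candidate labels, much as a HUD overlay might.
top = probabilities.topk(3)
for prob, idx in zip(top.values[0], top.indices[0]):
    print(f"{weights.meta['categories'][idx]}: {prob:.1%}")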

In 2014, a famous xkcd comic strip titled Tasks depicted someone asking a developer to create an app that, when a user takes a photo, will check whether the user is in a national park (deemed easy due to GPS) and check whether the photo is of a bird (to which the developer says, “I’ll need a research team and five years”). The caption below reads, “In CS, it can be hard to explain the difference between the easy and the virtually impossible.”

The xkcd comic titled “Tasks” from September 24, 2014.

It’s been just over nine years since the comic was published, and while identifying the presence of a bird in a photo was solved some time ago, these binoculars arguably go further by identifying the species of the bird in the photo (they also keep track of location via GPS). While apps to identify bird species already exist, this feature is now packed into a handheld pair of binoculars.

According to Swarovski, the development of the AX Visio took approximately five years, involving around 390 “hardware parts.” The binoculars incorporate a neural processing unit (NPU) for object recognition processing. The company claims that the device will have a long product life cycle, with ongoing updates and improvements. The company also mentions “an open programming interface” in its press release, potentially allowing industrious users (or handy hackers) to expand the unit’s features over time.

  • The Swarovski Optik Visio binoculars.

    Swarovski Optik

The binoculars, which feature industrial design from Marc Newson, include a built-in digital camera, compass, GPS, and discovery-sharing features that can “immediately show your companion where you have seen an animal.” The Visio unit also wirelessly ties into the “SWAROVSKI OPTIK Outdoor App” that can run on a smartphone. The app manages sharing photos and videos captured through the binoculars. (As an aside, we’ve come a long way from computer-connected gadgets that required pesky serial cables in the late 1990s.)

Swarovski says the AX Visio will be available at select retailers and online starting February 1, 2024. While this tech is at a premium price right now, given the speed of tech progress and market competition, we may see similar image-recognizing features built into much cheaper models in the years ahead.


Apple AirDrop leaks user data like a sieve. Chinese authorities say they’re scooping it up.

Aurich Lawson | Getty Images

Chinese authorities recently said they’re using an advanced encryption attack to de-anonymize users of AirDrop in an effort to crack down on citizens who use the Apple file-sharing feature to mass-distribute content that’s outlawed in that country.

According to a 2022 report from The New York Times, activists have used AirDrop to distribute scathing critiques of the Communist Party of China to nearby iPhone users in subway trains and stations and other public venues. A document one protester sent in October of that year called General Secretary Xi Jinping a “despotic traitor.” A few months later, with the release of iOS 16.1.1, AirDrop users in China found that the “everyone” configuration, the setting that makes files available to all other users nearby, automatically reset to the more restrictive contacts-only setting. Apple has yet to acknowledge the move. Critics continue to see it as a concession Apple CEO Tim Cook made to Chinese authorities.

The rainbow connection

On Monday, eight months after the half-measure was put in place, officials with the local government in Beijing said some people have continued mass-sending illegal content. As a result, the officials said, they were now using an advanced technique publicly disclosed in 2021 to fight back.

“Some people reported that their iPhones received a video with inappropriate remarks in the Beijing subway,” the officials wrote, according to translations. “After preliminary investigation, the police found that the suspect used the AirDrop function of the iPhone to anonymously spread the inappropriate information in public places. Due to the anonymity and difficulty of tracking AirDrop, some netizens have begun to imitate this behavior.”

In response, the authorities said they’ve implemented the technical measures to identify the people mass-distributing the content.

  • Screenshot showing log files containing the hashes to be extracted

  • Screenshot showing a dedicated tool converting extracted AirDrop hashes.

The scant details and the quality of Internet-based translations don’t explicitly describe the technique. All the translations, however, have said it involves the use of what are known as rainbow tables to defeat the technical measures AirDrop uses to obfuscate users’ phone numbers and email addresses.

Rainbow tables were first proposed in 1980 as a means for vastly reducing what at the time was the astronomical amount of computing resources required to crack at-scale hashes, the one-way cryptographic representations used to conceal passwords and other types of sensitive data. Additional refinements made in 2003 made rainbow tables more useful still.

When AirDrop is configured to distribute files only between people who know each other, Apple says, it relies heavily on hashes to conceal the real-world identities of each party until the service determines there’s a match. Specifically, AirDrop broadcasts Bluetooth advertisements that contain a partial cryptographic hash of the sender’s phone number and/or email address.

If any of the truncated hashes match any phone number or email address in the address book of the other device, or if the devices are set to send or receive from everyone, the two devices will engage in a mutual authentication handshake. When the hashes match, the devices exchange the full SHA-256 hashes of the owners’ phone numbers and email addresses. This technique falls under an umbrella term known as private set intersection, often abbreviated as PSI.

In 2021, researchers at Germany’s Technical University of Darmstadt reported that they had devised practical ways to crack what Apple calls the identity hashes used to conceal identities while AirDrop determines if a nearby person is in the contacts of another. One of the researchers’ attack methods relies on rainbow tables.
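
The underlying weakness is that phone numbers occupy a tiny keyspace, so even a truncated hash can be reversed by exhaustive search, and a rainbow table simply precomputes that search for fast reuse. Below is a toy Python illustration with a hypothetical number block and truncation length; it is not the Darmstadt researchers' code, and the exact bytes AirDrop truncates to differ in practice.

import hashlib

def identity_hash(phone: str) -> bytes:
    """SHA-256 of a phone number, standing in for AirDrop's identity hash."""
    return hashlib.sha256(phone.encode()).digest()

# Pretend a sniffed advertisement leaked the first 3 bytes of the hash.
observed = identity_hash("+18005551234")[:3]

# Enumerate a hypothetical 10,000-number block; a real attacker precomputes
# entire national numbering plans once and reuses the table indefinitely.
for suffix in range(10_000):
    candidate = f"+1800555{suffix:04d}"
    if identity_hash(candidate)[:3] == observed:
        print("recovered phone number:", candidate)
        break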


At Senate AI hearing, news executives fight against “fair use” claims for AI training data

All’s fair in love and AI —

Media orgs want AI firms to license content for training, and Congress is sympathetic.

Danielle Coffey, president and CEO of News Media Alliance; Professor Jeff Jarvis, CUNY Graduate School of Journalism; Curtis LeGeyt, president and CEO of National Association of Broadcasters; and Roger Lynch, CEO of Condé Nast, are sworn in during a Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on “Artificial Intelligence and The Future Of Journalism.”

Getty Images

On Wednesday, news industry executives urged Congress to clarify that using journalism to train AI assistants like ChatGPT is not fair use, as companies such as OpenAI have claimed. Instead, they would prefer a licensing regime for AI training content that would force Big Tech companies to pay for content in a manner similar to rights clearinghouses for music.

The plea for action came during a US Senate Judiciary Committee hearing titled “Oversight of A.I.: The Future of Journalism,” chaired by Sen. Richard Blumenthal of Connecticut, with Sen. Josh Hawley of Missouri also playing a large role in the proceedings. Last year, the pair of senators introduced a bipartisan framework for AI legislation and held a series of hearings on the impact of AI.

Blumenthal described the situation as an “existential crisis” for the news industry and cited social media as a cautionary tale for legislative inaction about AI. “We need to move more quickly than we did on social media and learn from our mistakes in the delay there,” he said.

Companies like OpenAI have admitted that vast amounts of copyrighted material are necessary to train AI large language models, but they claim their use is transformational and covered under fair use precedents of US copyright law. Currently, OpenAI is negotiating licensing content from some news providers and striking deals, but the executives in the hearing said those efforts are not enough, highlighting closing newsrooms across the US and dropping media revenues while Big Tech’s profits soar.

“Gen AI cannot replace journalism,” said Condé Nast CEO Roger Lynch in his opening statement. (Condé Nast is the parent company of Ars Technica.) “Journalism is fundamentally a human pursuit, and it plays an essential and irreplaceable role in our society and our democracy.” Lynch said that generative AI has been built with “stolen goods,” referring to the use of AI training content from news outlets without authorization. “Gen AI companies copy and display our content without permission or compensation in order to build massive commercial businesses that directly compete with us.”

Roger Lynch, CEO of Condé Nast, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law during a hearing on “Artificial Intelligence and The Future Of Journalism.”

Getty Images

In addition to Lynch, the hearing featured three other witnesses: Jeff Jarvis, a veteran journalism professor and pundit; Danielle Coffey, the president and CEO of News Media Alliance; and Curtis LeGeyt, president and CEO of the National Association of Broadcasters.

Coffey also shared concerns about generative AI using news material to create competitive products. “These outputs compete in the same market, with the same audience, and serve the same purpose as the original articles that feed the algorithms in the first place,” she said.

When Sen. Hawley asked Lynch what kind of legislation might be needed to fix the problem, Lynch replied, “I think quite simply, if Congress could clarify that the use of our content and other publisher content for training and output of AI models is not fair use, then the free market will take care of the rest.”

Lynch used the music industry as a model: “You think about millions of artists, millions of ultimate consumers consuming that content, there have been models that have been set up, ASCAP, BMI, SESAC, GMR, these collective rights organizations to simplify the content that’s being used.”

Curtis LeGeyt, CEO of the National Association of Broadcasters, said that TV broadcast journalists are also affected by generative AI. “The use of broadcasters’ news content in AI models without authorization diminishes our audience’s trust and our reinvestment in local news,” he said. “Broadcasters have already seen numerous examples where content created by our journalists has been ingested and regurgitated by AI bots with little or no attribution.”


OpenAI’s GPT Store lets ChatGPT users discover popular user-made chatbot roles

The bot of 1,000 faces —

Like an app store, people can find novel ChatGPT personalities—and some creators will get paid.

Two robots hold a gift box.

On Wednesday, OpenAI announced the launch of its GPT Store—a way for ChatGPT users to share and discover custom chatbot roles called “GPTs”—and ChatGPT Team, a collaborative ChatGPT workspace and subscription plan. OpenAI bills the new store as a way to “help you find useful and popular custom versions of ChatGPT” for members of Plus, Team, or Enterprise subscriptions.

“It’s been two months since we announced GPTs, and users have already created over 3 million custom versions of ChatGPT,” writes OpenAI in its promotional blog. “Many builders have shared their GPTs for others to use. Today, we’re starting to roll out the GPT Store to ChatGPT Plus, Team and Enterprise users so you can find useful and popular GPTs.”

OpenAI launched GPTs on November 6, 2023, as part of its DevDay event. Each GPT includes custom instructions and/or access to custom data or external APIs that can potentially make a custom GPT personality more useful than the vanilla ChatGPT-4 model. Before the GPT Store launch, paying ChatGPT users could create and share custom GPTs with others (by setting the GPT public and sharing a link to the GPT), but there was no central repository for browsing and discovering user-designed GPTs on the OpenAI website.

According to OpenAI, the GPT Store will feature new GPTs every week, and the company shared a list of six notable early GPTs that are available now: AllTrails for finding hiking trails, Consensus for searching 200 million academic papers, Code Tutor for learning coding with Khan Academy, Canva for designing presentations, Books for discovering reading material, and CK-12 Flexi for learning math and science.

A screenshot of the OpenAI GPT Store provided by OpenAI.

OpenAI

ChatGPT members can include their own GPTs in the GPT Store by setting them to be accessible to “Everyone” and then verifying a builder profile in ChatGPT settings. OpenAI plans to review GPTs to ensure they meet its policies and brand guidelines. GPTs that violate the rules can also be reported by users.

As promised by CEO Sam Altman during DevDay, OpenAI plans to share revenue with GPT creators. Unlike a smartphone app store, it appears that users will not sell their GPTs in the GPT Store, but instead, OpenAI will pay developers “based on user engagement with their GPTs.” The revenue program will launch in the first quarter of 2024, and OpenAI will provide more details on the criteria for receiving payments later.

“ChatGPT Team” is for teams who use ChatGPT

Also on Wednesday, OpenAI announced the cleverly named ChatGPT Team, a new group-based ChatGPT membership program akin to ChatGPT Enterprise, which the company launched last August. Unlike Enterprise, which is for large companies and does not have publicly listed prices, ChatGPT Team is a plan for “teams of all sizes” and costs US $25 a month per user (when billed annually) or US $30 a month per user (when billed monthly). By comparison, ChatGPT Plus costs $20 per month.

So what does ChatGPT Team offer above the usual ChatGPT Plus subscription? According to OpenAI, it “provides a secure, collaborative workspace to get the most out of ChatGPT at work.” Unlike Plus, OpenAI says it will not train AI models based on ChatGPT Team business data or conversations. It features an admin console for team management and the ability to share custom GPTs with your team. Like Plus, it also includes access to GPT-4 with the 32K context window, DALL-E 3, GPT-4 with Vision, Browsing, and Advanced Data Analysis—all with higher message caps.

Why would you want to use ChatGPT at work? OpenAI says it can help you generate better code, craft emails, analyze data, and more. Your mileage may vary, of course. As usual, our standard Ars warning about AI language models applies: “Bring your own data” for analysis, don’t rely on ChatGPT as a factual resource, and don’t rely on its outputs in ways you cannot personally confirm. OpenAI has provided more details about ChatGPT Team on its website.


Linux devices are under attack by a never-before-seen worm

NEW WORM ON THE BLOCK —

Based on Mirai malware, self-replicating NoaBot installs cryptomining app on infected devices.

Getty Images

For the past year, previously unknown self-replicating malware has been compromising Linux devices around the world and installing cryptomining malware that takes unusual steps to conceal its inner workings, researchers said.

The worm is a customized version of Mirai, the botnet malware that infects Linux-based servers, routers, web cameras, and other so-called Internet of Things devices. Mirai came to light in 2016 when it was used to deliver record-setting distributed denial-of-service attacks that paralyzed key parts of the Internet that year. The creators soon released the underlying source code, a move that allowed a wide array of crime groups from around the world to incorporate Mirai into their own attack campaigns. Once it takes hold of a Linux device, Mirai uses it as a platform to infect other vulnerable devices, a design that makes it a worm, meaning it self-replicates.

Dime-a-dozen malware with a twist

Traditionally, Mirai and its many variants have spread when one infected device scans the Internet looking for other devices that accept Telnet connections. The infected devices then attempt to crack the telnet password by guessing default and commonly used credential pairs. When successful, the newly infected devices target additional devices using the same technique. Mirai has primarily been used to wage DDoSes. Given the large amounts of bandwidth available to many such devices, the floods of junk traffic are often huge, giving the botnet as a whole tremendous power.

On Wednesday, researchers from network security and reliability firm Akamai revealed that a previously unknown Mirai-based network they dubbed NoaBot has been targeting Linux devices since at least last January. Instead of targeting weak telnet passwords, NoaBot targets weak passwords protecting SSH connections. Another twist: Rather than performing DDoSes, the new botnet installs cryptocurrency mining software, which allows the attackers to generate digital coins using victims’ computing resources, electricity, and bandwidth. The cryptominer is a modified version of XMRig, another piece of open source malware. More recently, NoaBot has been used to also deliver P2PInfect, a separate worm researchers from Palo Alto Networks revealed last July.

Akamai has been monitoring NoaBot for the past 12 months in a honeypot that mimics real Linux devices to track various attacks circulating in the wild. To date, attacks have originated from 849 distinct IP addresses, almost all of which are likely hosting a device that’s already infected. The following figure tracks the number of attacks delivered to the honeypot over the past year.

NoaBot malware activity over time.

“On the surface, NoaBot isn’t a very sophisticated campaign—it’s ‘just’ a Mirai variant and an XMRig cryptominer, and they’re a dime a dozen nowadays,” Akamai Senior Security Researcher Stiv Kupchik wrote in a report Wednesday. “However, the obfuscations added to the malware and the additions to the original source code paint a vastly different picture of the threat actors’ capabilities.”

The most advanced capability is how NoaBot installs the XMRig variant. Typically, when crypto miners are installed, the wallets that funds are distributed to are specified in configuration settings delivered in a command line issued to the infected device. This approach has long posed a risk to threat actors because it allows researchers to track where the wallets are hosted and how much money has flowed into them.

NoaBot uses a novel technique to prevent such detection. Instead of delivering the configuration settings through a command line, the botnet stores the settings in encrypted or obfuscated form and decrypts them only after XMRig is loaded into memory. The botnet then replaces the internal variable that normally would hold the command line configuration settings and passes control to the XMRig source code.

Kupchik offered a more technical and detailed description:

In the XMRig open source code, miners can accept configurations in one of two ways — either via the command line or via environment variables. In our case, the threat actors chose not to modify the XMRig original code and instead added parts before the main function. To circumvent the need for command line arguments (which can be an indicator of compromise [IOC] and alert defenders), the threat actors had the miner replace its own command line (in technical terms, replacing argv) with more “meaningful” arguments before passing control to the XMRig code. The botnet runs the miner with (at most) one argument that tells it to print its logs. Before replacing its command line, however, the miner has to build its configuration. First, it copies basic arguments that are stored in plaintext — the rig-id flag, which identifies the miner with three random letters, the threads flags, and a placeholder for the pool’s IP address (Figure 7).

Curiously, because the configurations are loaded via the xmm registers, IDA actually misses the first two loaded arguments, which are the binary name and the pool IP placeholder.

NoaBot code that copies miner configurations.

Akamai

Next, the miner decrypts the pool’s domain name. The domain name is stored, encrypted, in a few data blocks that are decrypted via XOR operations. Although XMRig can work with a domain name, the attackers decided to go the extra step and implemented their own DNS resolution function. They communicate directly with Google’s DNS server (8.8.8.8) and parse its response to resolve the domain name to an IP address.

The last part of the configuration is also encrypted in a similar way, and it is the passkey for the miner to connect to the pool. All in all, the total configuration of the miner looks something like this:

-o --rig-id --threads --pass espana*tea

Notice anything missing? Yep, no wallet address.

We believe that the threat actors chose to run their own private pool instead of a public one, thereby eliminating the need to specify a wallet (their pool, their rules!). However, in our samples, we observed that the miner’s domains were not resolving with Google’s DNS, so we can’t really prove our theory or gather more data from the pool, since the domains we have are no longer resolvable. We haven’t seen any recent incident that drops the miner, so it could also be that the threat actors decided to depart for greener pastures.
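
The XOR layering Kupchik describes is a common, lightweight obfuscation. Here is a generic Python sketch of the round trip, with a made-up key and pool domain; NoaBot's real keys and block layout are not reproduced here.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data against a repeating key; the operation is its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY = b"\x5a\xc3\x19"  # hypothetical key baked into the binary

# What would sit in the binary's data blocks (already encrypted)...
stored_blocks = xor_bytes(b"pool.example.net", KEY)

# ...and what the miner recovers in memory just before connecting.
print(xor_bytes(stored_blocks, KEY).decode())  # -> pool.example.net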


Hackers can infect network-connected wrenches to install ransomware

TORQUE THIS —

Researchers identify 23 vulnerabilities, some of which can be exploited with no authentication.

The Rexroth Nutrunner, a line of torque wrenches sold by Bosch Rexroth.

Bosch Rexroth

Researchers have unearthed nearly two dozen vulnerabilities that could allow hackers to sabotage or disable a popular line of network-connected wrenches that factories around the world use to assemble sensitive instruments and devices.

The vulnerabilities, reported Tuesday by researchers from security firm Nozomi, reside in the Bosch Rexroth Handheld Nutrunner NXA015S-36V-B. The cordless device, which wirelessly connects to the local network of organizations that use it, allows engineers to tighten bolts and other mechanical fastenings to precise torque levels that are critical for safety and reliability. When fastenings are too loose, they risk causing the device to overheat and start fires. When too tight, threads can fail, resulting in fastenings that are too loose. The Nutrunner provides a torque-level indicator display that’s backed by a certification from the Association of German Engineers and adopted by the automotive industry in 1999. NEXO-OS, the firmware running on the devices, can be controlled using a browser-based management interface.

NEXO-OS’s management web application.

Nozomi

Nozomi researchers said the device is riddled with 23 vulnerabilities that, in certain cases, can be exploited to install malware. The malware could then be used to disable entire fleets of the devices or to cause them to tighten fastenings too loosely or tightly while the display continues to indicate the critical settings are still properly in place.

Bosch officials emailed a statement that included the usual lines about security being a top priority. It went on to say that Nozomi reached out a few weeks ago to reveal the vulnerabilities. “Bosch Rexroth immediately took up this advice and is working on a patch to solve the problem,” the statement said. “This patch will be released at the end of January 2024.”

In a post, Nozomi researchers wrote:

The vulnerabilities found on the Bosch Rexroth NXA015S-36V-B allow an unauthenticated attacker who is able to send network packets to the target device to obtain remote execution of arbitrary code (RCE) with root privileges, completely compromising it. Once this unauthorized access is gained, numerous attack scenarios become possible. Within our lab environment, we successfully reconstructed the following two scenarios:

  • Ransomware: we were able to make the device completely inoperable by preventing a local operator from controlling the drill through the onboard display and disabling the trigger button. Furthermore, we could alter the graphical user interface (GUI) to display an arbitrary message on the screen, requesting the payment of a ransom. Given the ease with which this attack can be automated across numerous devices, an attacker could swiftly render all tools on a production line inaccessible, potentially causing significant disruptions to the final asset owner.
A PoC ransomware running on the test nutrunner.

Nozomi

  • Manipulation of Control and View: we managed to stealthily alter the configuration of tightening programs, such as by increasing or decreasing the target torque value. At the same time, by patching in-memory the GUI on the onboard display, we could show a normal value to the operator, who would remain completely unaware of the change.
A manipulation of view attack. The actual torque applied in this tightening was 0.15 Nm.



How much detail is too much? Midjourney v6 attempts to find out

An AI-generated image of a “Beautiful queen of the universe looking at the camera in sci-fi armor, snow and particles flowing, fire in the background” created using alpha Midjourney v6.

Midjourney

In December, just before Christmas, Midjourney launched an alpha version of its latest image synthesis model, Midjourney v6. Over winter break, Midjourney fans put the new AI model through its paces, with the results shared on social media. So far, fans have noted much more detail than v5.2 (the current default) and a different approach to prompting. Version 6 can also handle generating text in a rudimentary way, but it’s far from perfect.

“It’s definitely a crazy update, both in good and less good ways,” artist Julie Wieland, who frequently shares her Midjourney creations online, told Ars. “The details and scenery are INSANE, the downside (for now) are that the generations are very high contrast and overly saturated (imo). Plus you need to kind of re-adapt and rethink your prompts, working with new structures and now less is kind of more in terms of prompting.”

At the same time, critics of the service still bristle about Midjourney training its models using human-made artwork scraped from the web and obtained without permission—a controversial practice common among AI model trainers we have covered in detail in the past. We’ve also covered the challenges artists might face in the future from these technologies elsewhere.

Too much detail?

With AI-generated detail ramping up dramatically between major Midjourney versions, one could wonder if there is ever such a thing as “too much detail” in an AI-generated image. Midjourney v6 seems to be testing that very question, creating many images that sometimes seem more detailed than reality in an unrealistic way, although that can be modified with careful prompting.

  • An AI-generated image of a nurse in the 1960s created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of an astronaut created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a “juicy flaming cheeseburger” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of “a handsome Asian man” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of an “Apple II” sitting on a desk in the 1980s created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a “photo of a cat in a car holding a can of beer” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a forest path created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a woman among flowers created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of “a plate of delicious pickles” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a barbarian beside a TV set that says “Ars Technica” on it created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of “Abraham Lincoln holding a sign that says Ars Technica” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of Mickey Mouse holding a machine gun created using alpha Midjourney v6.

    Midjourney

In our testing of version 6 (which can currently be invoked with the “--v 6.0” argument at the end of a prompt), we noticed times when the new model appeared to produce worse results than v5.2, but Midjourney veterans like Wieland tell Ars that those differences are largely due to the different way that v6.0 interprets prompts. That is something Midjourney is continuously updating over time. “Old prompts sometimes work a bit better than the day they released it,” Wieland told us.


Ivanti warns of critical vulnerability in its popular line of endpoint protection software

RCE STANDS FOR REMOTE CODE EXECUTION —

Customers of the Ivanti Endpoint Protection Manager should patch or mitigate ASAP.

Software maker Ivanti is urging users of its endpoint security product to patch a critical vulnerability that makes it possible for unauthenticated attackers to execute malicious code inside affected networks.

The vulnerability, in a class known as a SQL injection, resides in all supported versions of the Ivanti Endpoint Manager. Also known as the Ivanti EPM, the software runs on a variety of platforms, including Windows, macOS, Linux, Chrome OS, and Internet of Things devices such as routers. SQL injection vulnerabilities stem from faulty code that interprets user input as database commands or, in more technical terms, from concatenating data with SQL code without quoting the data in accordance with the SQL syntax. CVE-2023-39336, as the Ivanti vulnerability is tracked, carries a severity rating of 9.6 out of a possible 10.
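
To illustrate the vulnerability class in general terms (this is not Ivanti's code), the sketch below contrasts concatenating user input into a query with binding it as a parameter, using SQLite in Python. The table and input are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE machines (name TEXT)")
conn.execute("INSERT INTO machines VALUES ('host-01')")

user_input = "x' OR '1'='1"  # attacker-controlled string

# Vulnerable: the input is interpreted as SQL, so the OR clause matches every row.
rows = conn.execute(
    f"SELECT name FROM machines WHERE name = '{user_input}'"
).fetchall()
print("concatenated query returned:", rows)

# Safe: the driver quotes the value, so it can only ever match a literal name.
rows = conn.execute(
    "SELECT name FROM machines WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)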

“If exploited, an attacker with access to the internal network can leverage an unspecified SQL injection to execute arbitrary SQL queries and retrieve output without the need for authentication,” Ivanti officials wrote Friday in a post announcing the patch availability. “This can then allow the attacker control over machines running the EPM agent. When the core server is configured to use SQL express, this might lead to RCE on the core server.”

RCE is short for remote code execution, or the ability for off-premises attackers to run code of their choice. Currently, there’s no known evidence the vulnerability is under active exploitation.

Ivanti has also published a disclosure that is restricted to registered users. A copy obtained by Ars said Ivanti learned of the vulnerability in October.

It’s unclear what “attacker with access to the internal network” means. Under the official explanation of the Common Vulnerability Scoring System, the code Ivanti used in the disclosure, AV:A, is short for “Attack Vector: Adjacent.” The scoring system defined it as:

The vulnerable component is bound to the network stack, but the attack is limited at the protocol level to a logically adjacent topology. This can mean an attack must be launched from the same shared physical or logical (e.g. local IP subnet) network…

In a thread on Mastodon, several security experts offered interpretations. One person, who asked not to be identified by name, wrote:

Everything else about the vulnerability [besides the requirement of access to the network] is severe:

  • Attack complexity is low
  • Privileges not required
  • No user interaction necessary
  • Scope of the subsequent impact to other systems is changed
  • Impact to Confidentiality, Integrity and Availability is High

Reid Wightman, a researcher specializing in the security of industrial control systems at Dragos, provided this analysis:

Speculation but it appears that Ivanti is mis-applying CVSS and the score should possibly be 10.0.

They say AV:A (meaning, “adjacent network access required”). Usually this means that one of the following is true: 1) the vulnerable network protocol is not routable (this usually means it is not an IP-based protocol that is vulnerable), or 2) the vulnerability is really a person-in-the-middle attack (although this usually also has AC:H, since a person-in-the-middle requires some existing access to the network in order to actually launch the attack) or 3) (what I think), the vendor is mis-applying CVSS because they think their vulnerable service should not be exposed aka “end users should have a firewall in place”.

The assumption that the attacker must be an insider would have a CVSS modifier of PR:L or PR:H (privileges required on the system), or UI:R (tricking a legitimate user into doing something that they shouldn’t). The assumption that the attacker has some other existing access to the network should add AC:H (attack complexity high) to the score. Both would reduce the numeric score.

I’ve had many an argument with vendors who argue (3), specifically, “nobody should have the service exposed so it’s not really AV:N”. But CVSS does not account for “good network architecture”. It only cares about default configuration, and whether the attack can be launched from a remote network…it does not consider firewall rules that most organizations should have in place, in part because you always find counterexamples where the service is exposed to the Internet. You can almost always find counterexamples on Shodan and similar. Plenty of “Ivanti Service Managers” exposed on Shodan for example, though, I’m not sure if this is the actual vulnerable service.

A third participant, Ron Bowes of Skull Security, wrote: “Vendors—especially Ivanti—have a habit of underplaying security issues. They think that making it sound like the vuln is less bad makes them look better, when in reality it just makes their customers less safe. That’s a huge pet peeve. I’m not gonna judge vendors for having a vuln, but I am going to judge them for handling it badly.”

Ivanti representatives didn’t respond to emailed questions.

Putting devices running Ivanti EPM behind a firewall is a best practice and will go a long way to mitigating the severity of CVE-2023-39336, but it would likely do nothing to prevent an attacker who has gained limited access to an employee workstation from exploiting the critical vulnerability. It’s unclear if the vulnerability will come under active exploitation, but the best course of action is for all Ivanti EPM users to install the patch as soon as possible.


A “ridiculously weak” password causes disaster for Spain’s No. 2 mobile carrier

Getty Images

Orange España, Spain’s second-biggest mobile operator, suffered a major outage on Wednesday after an unknown party obtained a “ridiculously weak” password and used it to access an account for managing the global routing table that controls which networks deliver the company’s Internet traffic, researchers said.

The hijacking began around 9:28 Coordinated Universal Time (about 2:28 Pacific time) when the party logged into Orange’s RIPE NCC account using the password “ripeadmin” (minus the quotation marks). The RIPE Network Coordination Center is one of five Regional Internet Registries, which are responsible for managing and allocating IP addresses to Internet service providers, telecommunication organizations, and companies that manage their own network infrastructure. RIPE serves 75 countries in Europe, the Middle East, and Central Asia.

“Things got ugly”

The password came to light after the party, using the moniker Snow, posted an image to social media that showed the orange.es email address associated with the RIPE account. RIPE said it’s working on ways to beef up account security.

Screenshot showing RIPE account, including the orange.es email address associated with it.

Security firm Hudson Rock plugged the email address into a database it maintains to track credentials for sale in online bazaars. In a post, the security firm said the username and “ridiculously weak” password were harvested by information-stealing malware that had been installed on an Orange computer since September. The password was then made available for sale on an infostealer marketplace.

Partially redacted screenshot from Hudson Rock database showing the credentials for the Orange RIPE account.

Hudson Rock

Researcher Kevin Beaumont said thousands of credentials protecting other RIPE accounts are also available in such marketplaces.

Once logged into Orange’s RIPE account, Snow made changes to the global routing table the mobile operator relies on to specify what backbone providers are authorized to carry its traffic to various parts of the world. These tables are managed using the Border Gateway Protocol (BGP), which connects one regional network to the rest of the Internet. Specifically, Snow added several new ROAs, short for Route Origin Authorizations. These entries allow “autonomous systems” such as Orange’s AS12479 to designate other autonomous systems or large chunks of IP addresses to deliver its traffic to various regions of the world.

In the initial stage, the changes had no meaningful effect because the ROAs Snow added announcing the IP addresses—93.117.88.0/22 and 93.117.88.0/21, and 149.74.0.0/16—already originated with Orange’s AS12479. A few minutes later, Snow added ROAs to five additional routes. All but one of them also originated with the Orange AS, and once again had no effect on traffic, according to a detailed writeup of the event by Doug Madory, a BGP expert at security and networking firm Kentik.

The creation of the ROA for 149.74.0.0/16 was the first act by Snow that created problems, because the maximum prefix length was set to 16, rendering any more specific routes using the address range invalid.

“It invalidated any routes that are more specific (longer prefix length) than a 16,” Madory told Ars in an online interview. “So routes like 149.74.100.0/23 became invalid and started getting filtered. Then [Snow] created more ROAs to cover those routes. Why? Not sure. I think, at first, they were just messing around. Before that ROA was created, there was no ROA to assert anything about this address range.”
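
The mechanics Madory describes follow from standard route-origin validation: an announcement is valid only if some covering ROA lists its origin AS and the announced prefix is no longer than the ROA's maximum length. Below is a simplified Python sketch of that check, modeled on the RFC 6811 logic rather than Kentik's actual tooling.

from ipaddress import ip_network

def rov_state(prefix: str, origin_as: int, roas: list) -> str:
    """Classify a BGP announcement against a list of (prefix, max_len, asn) ROAs."""
    announced = ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_asn in roas:
        if announced.subnet_of(ip_network(roa_prefix)):
            covered = True  # some ROA covers this address space
            if origin_as == roa_asn and announced.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not found"

# The ROA Snow created: 149.74.0.0/16, maxLength 16, origin AS12479.
roas = [("149.74.0.0/16", 16, 12479)]

print(rov_state("149.74.0.0/16", 12479, roas))    # valid
print(rov_state("149.74.100.0/23", 12479, roas))  # invalid: /23 exceeds maxLength 16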
