Biz & IT


Chinese malware removed from SOHO routers after FBI issues covert commands

REBOOT OR, BETTER YET, REPLACE YOUR OLD ROUTERS! —

Routers were being used to conceal attacks on critical infrastructure.

A Wi-Fi router with an Ethernet cable hooked into it.

The US Justice Department said Wednesday that the FBI surreptitiously sent commands to hundreds of infected small office and home office routers to remove malware that Chinese state-sponsored hackers were using to wage attacks on critical infrastructure.

The routers—mainly Cisco and Netgear devices that had reached their end of life—were infected with what’s known as KV Botnet malware, Justice Department officials said. Chinese hackers from a group tracked as Volt Typhoon used the malware to wrangle the routers into a network they could control. Traffic passing between the hackers and the compromised devices was encrypted using a VPN module KV Botnet installed. From there, the campaign operators connected to the networks of US critical infrastructure organizations to establish posts that could be used in future cyberattacks. The arrangement caused traffic to appear to originate from US IP addresses with trustworthy reputations rather than suspicious regions in China.

Seizing infected devices

Before the takedown could be conducted legally, FBI agents had to receive authority—technically for what’s called a seizure of infected routers or “target devices”—from a federal judge. An initial affidavit seeking authority was filed in US federal court in Houston in December. Subsequent requests have been filed since then.

“To effect these seizures, the FBI will issue a command to each Target Device to stop it from running the KV Botnet VPN process,” an agency special agent wrote in an affidavit dated January 9. “This command will also stop the Target Device from operating as a VPN node, thereby preventing the hackers from further accessing Target Devices through any established VPN tunnel. This command will not affect the Target Device if the VPN process is not running, and will not otherwise affect the Target Device, including any legitimate VPN process installed by the owner of the Target Device.”

Wednesday’s Justice Department statement said authorities had followed through on the takedown, which disinfected “hundreds” of infected routers and removed them from the botnet. To prevent the devices from being reinfected, the takedown operators issued additional commands that the affidavit said would “interfere with the hackers’ control over the instrumentalities of their crimes (the Target Devices), including by preventing the hackers from easily re-infecting the Target Devices.”

The affidavit said elsewhere that the prevention measures would be neutralized if the routers were restarted. These devices would then be once again vulnerable to infection.

Redactions in the affidavit make the precise means used to prevent re-infections unclear. Portions that weren’t censored, however, indicated the technique involved a loop-back mechanism that prevented the devices from communicating with anyone trying to hack them.

Portions of the affidavit explained:

22. To effect these seizures, the FBI will simultaneously issue commands that will interfere with the hackers’ control over the instrumentalities of their crimes (the Target Devices), including by preventing the hackers from easily re-infecting the Target Devices with KV Botnet malware.

  a. When the FBI deletes the KV Botnet malware from the Target Devices [redacted]. To seize the Target Devices and interfere with the hackers’ control over them, the FBI [redacted]. This [redacted] will have no effect except to protect the Target Device from reinfection by the KV Botnet [redacted] The effect of [redacted] can be undone by restarting the Target Device [redacted] make the Target Device vulnerable to re-infection.
  b. [redacted] the FBI will seize each such Target Device by causing the malware on it to communicate with only itself. This method of seizure will interfere with the ability of the hackers to control these Target Devices. This communications loopback will, like the malware itself, not survive a restart of a Target Device.
  c. To seize Target Devices, the FBI will [redacted] block incoming traffic [redacted] used exclusively by the KV Botnet malware on Target Devices, to block outbound traffic to [redacted] the Target Devices’ parent and command-and-control nodes, and to allow a Target Device to communicate with itself [redacted] are not normally used by the router, and so the router’s legitimate functionality is not affected. The effect of [redacted] to prevent other parts of the botnet from contacting the victim router, undoing the FBI’s commands, and reconnecting it to the botnet. The effect of these commands is undone by restarting the Target Devices.

23. To effect these seizures, the FBI will issue a command to each Target Device to stop it from running the KV Botnet VPN process. This command will also stop the Target Device from operating as a VPN node, thereby preventing the hackers from further accessing Target Devices through any established VPN tunnel. This command will not affect the Target Device if the VPN process is not running, and will not otherwise affect the Target Device, including any legitimate VPN process installed by the owner of the Target Device.
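
Stripped of its redactions, the approach the affidavit describes (block the malware’s inbound port, block outbound traffic to the command-and-control nodes, and loop the device back to itself, all without persisting across a reboot) maps onto ordinary firewall rules. Here is a rough sketch of that idea on a Linux-based router; it illustrates the described technique rather than the FBI’s actual tooling, and the port number and address are hypothetical placeholders:

```python
# Illustration only -- NOT the FBI's actual commands. Assumes a Linux-based
# router with iptables; the port and address below are hypothetical.
import subprocess

MALWARE_PORT = "9999"      # hypothetical port used exclusively by the malware
C2_NODE = "203.0.113.7"    # hypothetical parent/command-and-control node

rules = [
    # Block incoming traffic on the port the malware listens on.
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", MALWARE_PORT, "-j", "DROP"],
    # Block outbound traffic to the botnet's C2 node.
    ["iptables", "-A", "OUTPUT", "-d", C2_NODE, "-j", "DROP"],
    # Loop the malware's own traffic back to the device itself.
    ["iptables", "-t", "nat", "-A", "OUTPUT", "-p", "tcp", "--dport", MALWARE_PORT,
     "-j", "DNAT", "--to-destination", "127.0.0.1"],
]

for rule in rules:
    subprocess.run(rule, check=True)

# Rules added this way live only in kernel memory, so -- like the measures
# described in the affidavit -- they disappear when the router restarts.
```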



ChatGPT’s new @-mentions bring multiple personalities into your AI convo

team of rivals —

Bring different AI roles into the same chatbot conversation history.

With so many choices, selecting the perfect GPT can be confusing.

On Tuesday, OpenAI announced a new feature in ChatGPT that allows users to pull custom personalities called “GPTs” into any ChatGPT conversation with the @ symbol. It allows a level of quasi-teamwork within ChatGPT among expert roles that was previously impractical, making collaborating with a team of AI agents within OpenAI’s platform one step closer to reality.

“You can now bring GPTs into any conversation in ChatGPT – simply type @ and select the GPT,” wrote OpenAI on the social media network X. “This allows you to add relevant GPTs with the full context of the conversation.”

OpenAI introduced GPTs in November as a way to create custom personalities or roles for ChatGPT to play. For example, users can build their own GPTs to focus on certain topics or certain skills. Paid ChatGPT subscribers can also freely download a host of GPTs developed by other ChatGPT users through the GPT Store.

Previously, if you wanted to share information between GPT profiles, you had to copy the text, open a new chat with the desired GPT, paste it, and explain the context of what the information means or what you want to do with it. Now, ChatGPT users can stay in the default ChatGPT window and bring in GPTs as needed without losing the history of the conversation.

For example, we created a “Wellness Guide” GPT that is crafted as an expert in human health conditions (of course, this being ChatGPT, always consult a human doctor if you’re having medical problems), and we created a “Canine Health Advisor” for dog-related health questions.

A screenshot of ChatGPT where we @-mentioned a human wellness advisor, then a dog advisor in the same conversation history.

Benj Edwards

We started in a default ChatGPT chat, hit the @ symbol, then typed the first few letters of “Wellness” and selected it from a list. It filled out the rest. We asked a question about food poisoning in humans, and then we switched to the canine advisor in the same way with an @ symbol and asked about the dog.

Using this feature, you could alternatively consult, say, an “ad copywriter” GPT and an “editor” GPT—ask the copywriter to write some text, then rope in the editor GPT to check it, looking at it from a different angle. Different system prompts (the instructions that define a GPT’s personality) make for significant behavior differences.
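
Under the hood, a GPT’s “personality” boils down to a system prompt steering the same underlying model. As a minimal sketch of that idea using OpenAI’s Python SDK (the model name and prompts here are our own illustrative choices, not the GPT store’s internals):

```python
# A minimal sketch: the same model steered by two different system prompts,
# approximating a "copywriter" GPT and an "editor" GPT. Requires the openai
# package (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(system_prompt: str, user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

draft = ask("You are an ad copywriter. Write punchy, upbeat copy.",
            "Write a two-sentence ad for a mechanical keyboard.")
critique = ask("You are a strict copy editor. Point out weaknesses bluntly.",
               f"Critique this ad copy:\n{draft}")
print(draft, critique, sep="\n---\n")
```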

We also tried swapping between GPT profiles that write software and others designed to consult on historical tech subjects. Interestingly, ChatGPT does not differentiate between GPTs as different personalities as you change. It will still say, “I did this earlier” when a different GPT is talking about a previous GPT’s output in the same conversation history. From its point of view, it’s just ChatGPT and not multiple agents.

From our vantage point, this feature seems to represent baby steps toward a future where GPTs, as independent agents, could work together as a team to fulfill more complex tasks directed by the user. Similar experiments have been done outside of OpenAI in the past (using API access), but OpenAI has so far resisted a more agentic model for ChatGPT. As we’ve seen (first with GPTs and now with this), OpenAI seems to be slowly angling toward that goal itself, but only time will tell if or when we see true agentic teamwork in a shipping service.



Ars Technica used in malware campaign with never-before-seen obfuscation

WHEN USERS ATTACK —

Vimeo also used by legitimate user who posted booby-trapped content.

Getty Images

Ars Technica was recently used to serve second-stage malware in a campaign that used a never-before-seen attack chain to cleverly cover its tracks, researchers from security firm Mandiant reported Tuesday.

A benign image of a pizza was uploaded to a third-party website and was then linked with a URL pasted into the “about” page of a registered Ars user. Buried in that URL was a string of characters that appeared to be random—but were actually a payload. The campaign also targeted the video-sharing site Vimeo, where a benign video was uploaded and a malicious string was included in the video description. The string was generated using a technique known as Base 64 encoding. Base 64 converts text into a printable ASCII string format to represent binary data. Devices already infected with the first-stage malware used in the campaign automatically retrieved these strings and installed the second stage.
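
The encoding step itself is trivial; in Python, it looks like this (the payload string is a harmless stand-in, not the campaign’s actual second-stage address):

```python
# Base64 in a nutshell: arbitrary bytes in, printable ASCII out, and back.
# The payload here is a harmless stand-in for illustration.
import base64

payload = b"https://example.com/second-stage"
encoded = base64.b64encode(payload).decode("ascii")
print(encoded)          # printable ASCII, starting "aHR0cHM6Ly9..."

# A string like this can ride along at the end of an otherwise valid URL;
# an infected device that knows to look for it simply reverses the step.
assert base64.b64decode(encoded) == payload
```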

Not typically seen

“This is a different and novel way we’re seeing abuse that can be pretty hard to detect,” Mandiant researcher Yash Gupta said in an interview. “This is something in malware we have not typically seen. It’s pretty interesting for us and something we wanted to call out.”

The image posted on Ars appeared in the about profile of a user who created an account on November 23. An Ars representative said the photo, showing a pizza and captioned “I love pizza,” was removed by Ars staff on December 16 after being tipped off by email from an unknown party. The Ars profile used an embedded URL that pointed to the image, which was automatically populated into the about page. The malicious base 64 encoding appeared immediately following the legitimate part of the URL. The string didn’t generate any errors or prevent the page from loading.

Pizza image posted by user.

Malicious string in URL.

Mandiant researchers said there were no consequences for people who may have viewed the image, either as displayed on the Ars page or on the website that hosted it. It’s also not clear that any Ars users visited the about page.

Devices that were infected by the first stage automatically accessed the malicious string at the end of the URL. From there, they were infected with a second stage.

The video on Vimeo worked similarly, except that the string was included in the video description.

Ars representatives had nothing further to add. Vimeo representatives didn’t immediately respond to an email.

The campaign came from a threat actor Mandiant tracks as UNC4990, which has been active since at least 2020 and bears the hallmarks of being motivated by financial gain. The group has already used a separate novel technique to fly under the radar. That technique spread the second stage using a text file that browsers and normal text editors showed to be blank.

Opening the same file in a hex editor—a tool for analyzing and forensically investigating binary files—showed that a combination of tabs, spaces, and new lines were arranged in a way that encoded executable code. Like the technique involving Ars and Vimeo, the use of such a file is something the Mandiant researchers had never seen before. Previously, UNC4990 used GitHub and GitLab.
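
A toy version of the idea shows why such a file looks empty: whitespace characters can carry data bit by bit. The scheme below is our own simplified illustration, not UNC4990’s actual encoding:

```python
# Toy whitespace steganography: each bit of the hidden bytes becomes a tab
# (1) or a space (0), producing text that renders as blank in an editor.
def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join("\t" if bit == "1" else " " for bit in bits)

def decode(blank_text: str) -> bytes:
    bits = "".join("1" if ch == "\t" else "0" for ch in blank_text)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

hidden = encode(b"payload")
print(repr(hidden[:24]))            # tabs and spaces: visually nothing there
assert decode(hidden) == b"payload"
```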

The initial stage of the malware was transmitted by infected USB drives. The drives installed a payload Mandiant has dubbed explorerps1. Infected devices then automatically reached out either to the malicious text file or to the URL posted on Ars or the video posted to Vimeo. The base 64 strings in the image URL or video description, in turn, caused the malware to contact a site hosting the second stage. The second stage of the malware, tracked as Emptyspace, continuously polled a command-and-control server that, when instructed, would download and execute a third stage.

Mandiant

Mandiant has observed the installation of this third stage in only one case. This malware acts as a backdoor the researchers track as Quietboard. The backdoor, in that case, went on to install a cryptocurrency miner.

Anyone who is concerned they may have been infected by any of the malware covered by Mandiant can check the indicators of compromise section in Tuesday’s post.



Rhyming AI-powered clock sometimes lies about the time, makes up words

Confabulation time —

Poem/1 Kickstarter seeks $103K for fun ChatGPT-fed clock that may hallucinate the time.

A CAD render of the Poem/1 sitting on a bookshelf.

On Tuesday, product developer Matt Webb launched a Kickstarter funding project for a whimsical e-paper clock called the “Poem/1” that tells the current time using AI and rhyming poetry. It’s powered by the ChatGPT API, and Webb says that sometimes ChatGPT will lie about the time or make up words to make the rhymes work.

“Hey so I made a clock. It tells the time with a brand new poem every minute, composed by ChatGPT. It’s sometimes profound, and sometimes weird, and occasionally it fibs about what the actual time is to make a rhyme work,” Webb writes on his Kickstarter page.

The $126 clock is the product of Webb’s company, Acts Not Facts. Despite the net-connected service aspect of the clock, Webb says it will not require a subscription to function.

A labeled CAD rendering of the Poem/1 clock, representing its final shipping configuration.

There are 1,440 minutes in a day, so Poem/1 needs to display 1,440 unique poems to work. The clock features a monochrome e-paper screen and pulls its poems via Wi-Fi from a central server run by Webb’s company. To save money, that server pulls poems from ChatGPT’s API and will share them out to many Poem/1 clocks at once. This prevents costly API fees that would add up if your clock were querying OpenAI’s servers 1,440 times a day, non-stop, forever. “I’m reserving a % of the retail price from each clock in a bank account to cover AI and server costs for 5 years,” Webb writes.
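
The economics work because the expensive step happens once per minute, not once per clock. Here is a sketch of that caching pattern; it is our own illustration, since Webb hasn’t published his server code, and the model and prompt are assumptions:

```python
# One ChatGPT API call per minute, shared across every clock that asks.
# Illustrative only: the design, model, and prompt are assumptions.
import time
from openai import OpenAI

client = OpenAI()
_cache: dict[str, str] = {}

def poem_for_current_minute() -> str:
    minute = time.strftime("%H:%M")
    if minute not in _cache:   # first clock to ask this minute pays the API cost
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{
                "role": "user",
                "content": f"Write a two-line rhyming poem that mentions the time {minute}.",
            }],
        )
        _cache.clear()         # keep only the current minute's poem
        _cache[minute] = response.choices[0].message.content
    return _cache[minute]      # every later clock gets the cached poem
```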

For hackers, Webb says the Poem/1’s back-end server URL can be changed from the default to whatever you want, so the clock can display custom text every minute of the day. Webb says he will document and publish the API when Poem/1 ships.

Hallucination time

A photo of a Poem/1 prototype with a hallucinated time, according to Webb.

Given the Poem/1’s large language model pedigree, it’s perhaps not surprising that Poem/1 may sometimes make up things (also called “hallucination” or “confabulation” in the AI field) to fulfill its task. The LLM that powers ChatGPT is always searching for the most likely next word in a sequence, and sometimes factuality comes second to fulfilling that mission.
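
A toy sketch makes the tradeoff concrete: if sampling weighs rhyme-friendly continuations heavily, the truthful word can simply lose the draw. The probabilities below are invented for illustration:

```python
# Invented next-word probabilities for a poem ending "...as I always do".
import random

next_word_probs = {
    "forty-two": 0.55,   # rhymes with "do," so the poem favors it
    "forty-one": 0.30,   # the actual time, but a weaker rhyme
    "teason":    0.15,   # not a real word, but it fits the meter
}
words, weights = zip(*next_word_probs.items())
print(random.choices(words, weights=weights, k=1)[0])
```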

Further down on the Kickstarter page, Webb provides a photo of his prototype Poem/1 where the screen reads, “As the clock strikes eleven forty two, / I rhyme the time, as I always do.” Just below, Webb warns, “Poem/1 fibs occasionally. I don’t believe it was actually 11.42 when this photo was taken. The AI hallucinated the time in order to make the poem work. What we do for art…”

In other clocks, the tendency to unreliably tell the time might be a fatal flaw. But judging by his humorous angle on the Kickstarter page, Webb apparently sees the clock as more of a fun art project than a precision timekeeping instrument. “Don’t rely on this clock in situations where timekeeping is vital,” Webb writes, “such as if you work in air traffic control or rocket launches or the finish line of athletics competitions.”

Poem/1 also sometimes takes poetic license with vocabulary to tell the time. During a humorous moment in the Kickstarter promotional video, Webb looks at his clock prototype and reads the rhyme, “A clock that defies all rhyme and reason / 4:30 PM, a temporal teason.” Then he says, “I had to look ‘teason’ up. It doesn’t mean anything, so it’s a made-up word.”



Raspberry Pi is planning a London IPO, but its CEO expects “no change” in focus

Just enough RAM to move markets —

Eben Upton says hobbyists remain “incredibly important” while he’s involved.

Updated

Is it not a strange fate that we should suffer so much fear and doubt for so small a thing? So small a thing! (A Raspberry Pi 5 with Active Cooler installed on a wood desktop.)

Andrew Cunningham

The business arm of Raspberry Pi is preparing to make an initial public offering (IPO) in London. CEO Eben Upton tells Ars that should the IPO happen, it will let Raspberry Pi’s not-for-profit side expand by “at least a factor of 2X.” And while it’s “an understandable thing” that Raspberry Pi enthusiasts could be concerned, “while I’m involved in running the thing, I don’t expect people to see any change in how we do things.”

CEO Eben Upton confirmed in an interview with Bloomberg News that Raspberry Pi had appointed bankers at London firms Peel Hunt and Jefferies to prepare for “when the IPO market reopens.”

Raspberry Pi previously raised money from Sony and semiconductor and software design firm ARM before it sought public investment. Upton denied or didn’t quite deny IPO rumors in 2021, and Bloomberg reported Raspberry Pi was considering an IPO in early 2022. After ARM took a minority stake in the company in November 2023, Raspberry Pi was valued at roughly 400 million pounds, or just over $500 million.

Given the company’s gradual recovery from pandemic supply chain shortages and the success of the Raspberry Pi 5 launch, the company’s IPO valuation will likely jump above that level, even with a listing in the UK rather than the more typical US IPO. Upton told The Register that “the business is in a much better place than it was last time we looked at it [an IPO]. We partly stopped because the markets got bad. And we partly stopped because our business became unpredictable.”

News of the potential transformation of Raspberry Pi Ltd from the private arm of the education-minded Raspberry Pi Foundation into a publicly traded company, obligated to generate profits for shareholders, reverberated about the way you’d expect on Reddit, Hacker News, and elsewhere. Many pointed with concern to the company’s decision to prioritize small-business customers that need Pi boards for their products as a portent of what investors might prioritize. Many expressed confusion over the commercial entity’s relationship to the foundation and what an IPO meant for that arrangement.

Seeing comments after the Bloomberg story, Upton said he understood concerns about a potential shift in mission or a change in the pricing structure. “It’s a good thing, in that people care about us,” Upton said in a phone interview. But he noted that Raspberry Pi’s business arm has had both strategic and private investors in its history, along with a majority shareholder in its Foundation (which in 2016 owned 75 percent of shares), and that he doesn’t see changes to what Pi has built.

“What Raspberry Pi [builds] are the products we want to buy, and then we sell them to people like us,” Upton said. “Certainly, while I’m involved in it, I can’t imagine an environment in which the hobbyists are not going to be incredibly important.”

The IPO is “about the foundation,” Upton said, with that charitable arm selling some of its majority stake in the business entity to raise funds and expand. (“We’ve not cooked up some new way for a not-for-profit to do an IPO, no,” he noted.) The foundation was previously funded by dividends from the business side, Upton said. “We do this transaction, and the proceeds of that transaction allow the foundation to train teachers, run clubs, expand programs, and… do those things at, at least, a factor of 2X. That’s what I’m most excited about.”

Asked about concerns that Raspberry Pi could focus its attention on higher-volume customers after public investors are involved, Upton said there would be “no change” to the kinds of products Pi makes, and that makers are “culturally important to us.” Upton noted that Raspberry Pi, apart from a single retail store, doesn’t sell Pis directly but through resellers. Margin structures at Raspberry Pi have “stayed the same all the way through,” Upton said, and should remain so after the IPO.

Raspberry Pi’s lower-cost products, like the Zero 2 W and Pico, are fulfilling the educational and tinkering missions of the project, now at far better capability and lower price points than the original Pi products, Upton said. “If people think that an IPO means we’re going to … push prices up, push the margins up, push down the feature sets, the only answer we can give is, watch us. Keep watching,” he said. “Let’s look at it in 15, 20 years’ time.”

This post was updated at 2:30 pm ET on January 30 to include an Ars interview with Raspberry Pi CEO Eben Upton.



ChatGPT is leaking passwords from private conversations of its users, Ars reader says

OPENAI SPRINGS A LEAK —

Names of unpublished research papers, presentations, and PHP scripts also leaked.

OpenAI logo displayed on a phone screen and ChatGPT website displayed on a laptop screen.

Getty Images

ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated.

Two of the seven screenshots the reader submitted stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal.

“Horrible, horrible, horrible”

“THIS is so f-ing insane, horrible, horrible, horrible, i cannot believe how poorly this was built in the first place, and the obstruction that is being put in front of me that prevents it from getting better,” the user wrote. “I would fire [redacted name of software] just for this absurdity if it was my choice. This is wrong.”

Besides the candid language and the credentials, the leaked conversation includes the name of the app the employee is troubleshooting and the store number where the problem occurred.

The entire conversation goes well beyond what’s shown in the redacted screenshot above. A link Ars reader Chase Whiteside included showed the chat conversation in its entirety. The URL disclosed additional credential pairs.

The results appeared Monday morning shortly after reader Whiteside had used ChatGPT for an unrelated query.

“I went to make a query (in this case, help coming up with clever names for colors in a palette) and when I returned to access moments later, I noticed the additional conversations,” Whiteside wrote in an email. “They weren’t there when I used ChatGPT just last night (I’m a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren’t from me (and I don’t think they’re from the same user either).”

Other conversations leaked to Whiteside include the name of a presentation someone was working on, details of an unpublished research proposal, and a script using the PHP programming language. The users for each leaked conversation appeared to be different and unrelated to each other. The conversation involving the prescription portal included the year 2020. Dates didn’t appear in the other conversations.

The episode, and others like it, underscore the wisdom of stripping out personal details from queries made to ChatGPT and other AI services whenever possible. Last March, ChatGPT maker OpenAI took the AI chatbot offline after a bug caused the site to show titles from one active user’s chat history to unrelated users.

In November, researchers published a paper reporting how they used queries to prompt ChatGPT into divulging email addresses, phone and fax numbers, physical addresses, and other private data that was included in material used to train the ChatGPT large language model.

Concerned about the possibility of proprietary or private data leakage, companies, including Apple, have restricted their employees’ use of ChatGPT and similar sites.

As mentioned in an article from December, when multiple people found that Ubiquiti’s UniFi devices broadcast private video belonging to unrelated users, these sorts of experiences are as old as the Internet. As explained in the article:

The precise root causes of this type of system error vary from incident to incident, but they often involve “middlebox” devices, which sit between the front- and back-end devices. To improve performance, middleboxes cache certain data, including the credentials of users who have recently logged in. When mismatches occur, credentials for one account can be mapped to a different account.
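
In caricature, the bug class looks something like this (a deliberately simplified toy with invented names, not Ubiquiti’s or OpenAI’s actual architecture):

```python
# Toy middlebox cache: responses are cached under a key that fails to
# uniquely identify the user, so one user's data is replayed to another.
cache: dict[str, str] = {}

def handle_request(path: str, user: str) -> str:
    key = path                          # BUG: the key omits the user/session
    if key not in cache:
        cache[key] = f"chat history for {user}"
    return cache[key]

print(handle_request("/history", user="alice"))  # chat history for alice
print(handle_request("/history", user="bob"))    # alice's data leaks to bob
```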

An OpenAI representative said the company was investigating the report.



OpenAI and Common Sense Media partner to protect teens from AI harms and misuse

Adventures in chatbusting —

Site gave ChatGPT 3 stars and 48% privacy score: “Best used for creativity, not facts.”

Boy in Living Room Wearing Robot Mask

On Monday, OpenAI announced a partnership with the nonprofit Common Sense Media to create AI guidelines and educational materials targeted at parents, educators, and teens. It includes the curation of family-friendly GPTs in OpenAI’s GPT store. The collaboration aims to address concerns about the impacts of AI on children and teenagers.

Known for its reviews of films and TV shows aimed at parents seeking appropriate media for their kids to watch, Common Sense Media recently branched out into AI and has been reviewing AI assistants on its site.

“AI isn’t going anywhere, so it’s important that we help kids understand how to use it responsibly,” Common Sense Media wrote on X. “That’s why we’ve partnered with @OpenAI to help teens and families safely harness the potential of AI.”

OpenAI CEO Sam Altman and Common Sense Media CEO James Steyer announced the partnership onstage in San Francisco at the Common Sense Summit for America’s Kids and Families, an event that was well-covered by media members on the social media site X.

For his part, Altman offered a canned statement in the press release, saying, “AI offers incredible benefits for families and teens, and our partnership with Common Sense will further strengthen our safety work, ensuring that families and teens can use our tools with confidence.”

The announcement feels slightly non-specific in the official news release, with Steyer offering, “Our guides and curation will be designed to educate families and educators about safe, responsible use of ChatGPT, so that we can collectively avoid any unintended consequences of this emerging technology.”

The partnership seems aimed mostly at bringing a patina of family-friendliness to OpenAI’s GPT store, with the most solid reveal being the aforementioned fact that Common Sense Media will help with the “curation of family-friendly GPTs in the GPT Store based on Common Sense ratings and standards.”

Common Sense AI reviews

As mentioned above, Common Sense Media began reviewing AI assistants on its site late last year. This puts Common Sense Media in an interesting position with potential conflicts of interest regarding the new partnership with OpenAI. However, it doesn’t seem to be offering any favoritism to OpenAI so far.

For example, Common Sense Media’s review of ChatGPT calls the AI assistant “A powerful, at times risky chatbot for people 13+ that is best used for creativity, not facts.” It labels ChatGPT as being suitable for ages 13 and up (which is in OpenAI’s Terms of Service) and gives the OpenAI assistant three out of five stars. ChatGPT also scores a 48 percent privacy rating (which is oddly shown as 55 percent on another page that goes into privacy details). The review we cited was last updated on October 13, 2023, as of this writing.

For reference, Google Bard gets a three-star overall rating and a 75 percent privacy rating in its Common Sense Media review. Stable Diffusion, the image synthesis model, nets a one-star rating with the description, “Powerful image generator can unleash creativity, but is wildly unsafe and perpetuates harm.” OpenAI’s DALL-E gets two stars and a 48 percent privacy rating.

The information that Common Sense Media includes about each AI model appears relatively accurate and detailed (and the organization cited an Ars Technica article as a reference in one explanation), so they feel fair, even in the face of the OpenAI partnership. Given the low scores, it seems that most AI models aren’t off to a great start, but that may change. It’s still early days in generative AI.



The life and times of Cozy Bear, the Russian hackers who just hit Microsoft and HPE

FROM RUSSIA WITH ROOT —

Hacks by Kremlin-backed group continue to hit hard.

Getty Images

Hewlett Packard Enterprise (HPE) said Wednesday that Kremlin-backed actors hacked into the email accounts of its security personnel and other employees last May—and maintained surreptitious access until December. The disclosure was the second revelation of a major corporate network breach by the hacking group in five days.

The hacking group that hit HPE is the same one that Microsoft said Friday broke into its corporate network in November and monitored email accounts of senior executives and security team members until being driven out earlier this month. Microsoft tracks the group as Midnight Blizzard. (Under the company’s recently retired threat actor naming convention, which was based on chemical elements, the group was known as Nobelium.) But it is perhaps better known by the name Cozy Bear—though researchers have also dubbed it APT29, the Dukes, Cloaked Ursa, and Dark Halo.

“On December 12, 2023, Hewlett Packard Enterprise was notified that a suspected nation-state actor, believed to be the threat actor Midnight Blizzard, the state-sponsored actor also known as Cozy Bear, had gained unauthorized access to HPE’s cloud-based email environment,” company lawyers wrote in a filing with the Securities and Exchange Commission. “The Company, with assistance from external cybersecurity experts, immediately activated our response process to investigate, contain, and remediate the incident, eradicating the activity. Based on our investigation, we now believe that the threat actor accessed and exfiltrated data beginning in May 2023 from a small percentage of HPE mailboxes belonging to individuals in our cybersecurity, go-to-market, business segments, and other functions.”

An HPE representative said in an email that Cozy Bear’s initial entry into the network was through “a compromised, internal HPE Office 365 email account [that] was leveraged to gain access.” The representative declined to elaborate. The representative also declined to say how HPE discovered the breach.

Cozy Bear hacking its way into the email systems of two of the world’s most powerful companies and monitoring top employees’ accounts for months aren’t the only similarities between the two events. Both breaches also involved compromising a single device on each corporate network, then escalating that toehold to the network itself. From there, Cozy Bear camped out undetected for months. The HPE intrusion was all the more impressive because Wednesday’s disclosure said that the hackers also gained access to SharePoint servers in May. Even after HPE detected and contained that breach a month later, it would take HPE another six months to discover the compromised email accounts.

The pair of disclosures, coming within five days of each other, may create the impression that there has been a recent flurry of hacking activity. But Cozy Bear has actually been one of the most active nation-state groups since at least 2010. In the intervening 14 years, it has waged an almost constant series of attacks, mostly on the networks of governmental organizations and the technology companies that supply them. Multiple intelligence services and private research companies have identified the hacking group as an arm of Russia’s Foreign Intelligence Service, also known as the SVR.

The life and times of Cozy Bear (so far)

In its earliest years, Cozy Bear operated in relative obscurity—precisely the domain it prefers—as it hacked mostly Western governmental agencies and related organizations such as political think tanks and governmental subcontractors. In 2013, researchers from security firm Kaspersky unearthed MiniDuke, a sophisticated piece of malware that had taken hold of 60 government agencies, think tanks, and other high-profile organizations in 23 countries, including the US, Hungary, Ukraine, Belgium, and Portugal.

MiniDuke was notable for its odd combination of advanced programming and the gratuitous references to literature found embedded into its code. (It contained strings that alluded to Dante Alighieri’s Divine Comedy and to 666, the Mark of the Beast discussed in a verse from the Book of Revelation.) Written in assembly, employing multiple levels of encryption, and relying on hijacked Twitter accounts and automated Google searches to maintain stealthy communications with command-and-control servers, MiniDuke was among the most advanced pieces of malware found at the time.

It wasn’t immediately clear who was behind the mysterious malware—another testament to the stealth of its creators. In 2015, however, researchers linked MiniDuke—and seven other pieces of previously unidentified malware—to Cozy Bear. After a half-decade of lurking, the shadowy group was suddenly brought into the light of day.

Cozy Bear once again came to prominence the following year when researchers discovered the group (along with Fancy Bear, a separate Russian-state hacking group) inside the servers of the Democratic National Committee, looking for intelligence such as opposition research into Donald Trump, the Republican nominee for president at the time. The hacking group resurfaced in the days following Trump’s election victory that year with a major spear-phishing blitz that targeted dozens of organizations in government, military, defense contracting, media, and other industries.

One of Cozy Bear’s crowning achievements came in late 2020 with the discovery of an extensive supply chain attack that targeted customers of SolarWinds, the Austin, Texas, maker of network management tools. After compromising SolarWinds’ software build system, the hacking group pushed infected updates to roughly 18,000 customers. The hackers then used the updates to compromise nine federal agencies and about 100 private companies, White House officials have said.

Cozy Bear has remained active, with multiple campaigns coming to light in 2021, including one that used zero-day vulnerabilities to infect fully updated iPhones. Last year, the group devoted much of its time to hacks of Ukraine.



In major gaffe, hacked Microsoft test account was assigned admin privileges


The hackers who recently broke into Microsoft’s network and monitored top executives’ email for two months did so by gaining access to an aging test account with administrative privileges, a major gaffe on the company’s part, a researcher said.

The new detail was provided in vaguely worded language included in a post Microsoft published on Thursday. It expanded on a disclosure Microsoft published late last Friday. Russia-state hackers, Microsoft said, used a technique known as password spraying to exploit a weak credential for logging into a “legacy non-production test tenant account” that wasn’t protected by multifactor authentication. From there, they somehow acquired the ability to access email accounts that belonged to senior executives and employees working in security and legal teams.

A “pretty big config error”

In Thursday’s post updating customers on findings from its ongoing investigation, Microsoft provided more details on how the hackers achieved this monumental escalation of access. The hackers, part of a group Microsoft tracks as Midnight Blizzard, gained persistent access to the privileged email accounts by abusing the OAuth authorization protocol, which is used industry-wide to allow an array of apps to access resources on a network. After compromising the test tenant, Midnight Blizzard used it to create a malicious app and assign it rights to access every email address on Microsoft’s Office 365 email service.

In Thursday’s update, Microsoft officials said as much, although in language that largely obscured the extent of the major blunder. They wrote:

Threat actors like Midnight Blizzard compromise user accounts to create, modify, and grant high permissions to OAuth applications that they can misuse to hide malicious activity. The misuse of OAuth also enables threat actors to maintain access to applications, even if they lose access to the initially compromised account. Midnight Blizzard leveraged their initial access to identify and compromise a legacy test OAuth application that had elevated access to the Microsoft corporate environment. The actor created additional malicious OAuth applications. They created a new user account to grant consent in the Microsoft corporate environment to the actor controlled malicious OAuth applications. The threat actor then used the legacy test OAuth application to grant them the Office 365 Exchange Online full_access_as_app role, which allows access to mailboxes. [Emphasis added.]

Kevin Beaumont—a researcher and security professional with decades of experience, including a stint working for Microsoft—pointed out on Mastodon that the only way for an account to assign the all-powerful full_access_as_app role to an OAuth app is for the account to have administrator privileges. “Somebody,” he said, “made a pretty big config error in production.”
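
For defenders, one takeaway is to audit which OAuth apps in a tenant hold application permissions, the class of grant abused here. Below is a rough sketch using the Microsoft Graph API; it assumes you already have a bearer token with Application.Read.All, and it is our illustration, not Microsoft’s published remediation guidance:

```python
# List app registrations and flag those requesting application permissions
# (type "Role"), which apply without a signed-in user. Assumes a valid
# Graph bearer token; the token value is a placeholder.
import requests

TOKEN = "<bearer token>"   # hypothetical placeholder
url = "https://graph.microsoft.com/v1.0/applications"
headers = {"Authorization": f"Bearer {TOKEN}"}

while url:
    page = requests.get(url, headers=headers).json()
    for app in page.get("value", []):
        for grant in app.get("requiredResourceAccess", []):
            roles = [r for r in grant.get("resourceAccess", [])
                     if r.get("type") == "Role"]
            if roles:
                print(f'{app["displayName"]}: {len(roles)} application permission(s)')
    url = page.get("@odata.nextLink")   # follow paging, if present
```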



OpenAI updates ChatGPT-4 model with potential fix for AI “laziness” problem

Break’s over —

Also, new GPT-3.5 Turbo model, lower API prices, and other model updates.

A lazy robot (a man with a box on his head) sits on the floor beside a couch.

On Thursday, OpenAI announced updates to the AI models that power its ChatGPT assistant. Amid less noteworthy updates, OpenAI tucked in a mention of a potential fix to a widely reported “laziness” problem seen in GPT-4 Turbo since its release in November. The company also announced a new GPT-3.5 Turbo model (with lower pricing), a new embedding model, an updated moderation model, and a new way to manage API usage.

“Today, we are releasing an updated GPT-4 Turbo preview model, gpt-4-0125-preview. This model completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of ‘laziness’ where the model doesn’t complete a task,” writes OpenAI in its blog post.

Since the launch of GPT-4 Turbo, a large number of ChatGPT users have reported that the ChatGPT-4 version of its AI assistant has been declining to do tasks (especially coding tasks) with the same exhaustive depth as it did in earlier versions of GPT-4. We’ve seen this behavior ourselves while experimenting with ChatGPT over time.

OpenAI has never offered an official explanation for this change in behavior, but OpenAI employees have previously acknowledged on social media that the problem is real, and the ChatGPT X account wrote in December, “We’ve heard all your feedback about GPT4 getting lazier! we haven’t updated the model since Nov 11th, and this certainly isn’t intentional. model behavior can be unpredictable, and we’re looking into fixing it.”

We reached out to OpenAI asking if it could provide an official explanation for the laziness issue but did not receive a response by press time.

New GPT-3.5 Turbo, other updates

Elsewhere in OpenAI’s blog update, the company announced a new version of GPT-3.5 Turbo (gpt-3.5-turbo-0125), which it says will offer “various improvements including higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls.”

And the cost of GPT-3.5 Turbo through OpenAI’s API will decrease for the third time this year “to help our customers scale.” New input token prices are 50 percent less, at $0.0005 per 1,000 input tokens, and output prices are 25 percent less, at $0.0015 per 1,000 output tokens.
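
At those rates, the per-request math is tiny. A quick back-of-the-envelope calculator:

```python
# GPT-3.5 Turbo API pricing quoted above, in dollars per 1,000 tokens.
INPUT_PER_1K = 0.0005
OUTPUT_PER_1K = 0.0015

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1000) * INPUT_PER_1K + (output_tokens / 1000) * OUTPUT_PER_1K

# A chatbot exchange with a 700-token prompt and a 300-token reply:
print(f"${request_cost(700, 300):.6f}")   # $0.000800
```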

Lower token prices for GPT-3.5 Turbo will make operating third-party bots significantly less expensive, but the GPT-3.5 model is generally more likely to confabulate than GPT-4 Turbo. So we might see more scenarios like Quora’s bot telling people that eggs can melt (although the instance used a now-deprecated GPT-3 model called text-davinci-003). If GPT-4 Turbo API prices drop over time, some of those hallucination issues with third parties might eventually go away.

OpenAI also announced new embedding models, text-embedding-3-small and text-embedding-3-large, which convert content into numerical sequences, aiding in machine learning tasks like clustering and retrieval. And an updated moderation model, text-moderation-007, is part of the company’s API that “allows developers to identify potentially harmful text,” according to OpenAI.

Finally, OpenAI is rolling out improvements to its developer platform, introducing new tools for managing API keys and a new dashboard for tracking API usage. Developers can now assign permissions to API keys from the API keys page, helping to clamp down on misuse of API keys (if they get into the wrong hands) that can potentially cost developers lots of money. The API dashboard allows devs to “view usage on a per feature, team, product, or project level, simply by having separate API keys for each.”

As the media world seemingly swirls around the company with controversies and think pieces about the implications of its tech, releases like these show that the dev teams at OpenAI are still rolling along as usual with updates at a fairly regular pace. Despite the company almost completely falling apart late last year, it seems that, under the hood, it’s business as usual for OpenAI.



Microsoft cancels Blizzard survival game, lays off 1,900

Survival game won’t survive —

Job cuts hit Xbox, ZeniMax businesses, too, reports say.

Blizzard shared this image teasing a now-cancelled game in 2022.

Blizzard Entertainment/Twitter

The survival game that Blizzard announced it was working on in January 2022 has reportedly been canceled. The cut comes as Microsoft is slashing jobs a little over three months after closing its $69 billion Activision Blizzard acquisition.

Blizzard’s game didn’t have a title yet, but Blizzard said it would be for PC and console and introduce new stories and characters. In January 2022, Blizzard put out a call for workers to help build the game.

The game’s axing was revealed today in an internal memo from Microsoft Gaming CEO Phil Spencer seen by publications including The Verge and CNBC that said:

Blizzard is ending development on its survival game project and will be shifting some of the people working on it to one of several promising new projects Blizzard has in the early stages of development.

Spencer said Microsoft was laying off 1,900 people starting today, with workers continuing to receive notifications in the coming days. The layoffs affect 8.64 percent of Microsoft’s 22,000-employee gaming division.

Another internal memo, written by Matt Booty, Microsoft’s game content and studios president, and seen by The Verge, said the layoffs are hitting “multiple” Blizzard teams, “including development teams, shared service organizations and corporate functions.” In January 2022, after plans for the merger were first announced, Bobby Kotick, then-CEO of Activision Blizzard, reportedly told employees at a meeting that Microsoft was “committed to trying to retain as many of our people as possible.”

Spencer said workers in Microsoft’s Xbox and ZeniMax Media businesses will also be impacted. Microsoft acquired ZeniMax, which owns Bethesda Softworks, for $7.5 billion in a deal that closed in March 2021.

After a bumpy ride with global regulators, Microsoft’s Activision Blizzard purchase closed in October. Booty’s memo said the job cuts announced today “reflect a focus on products and strategies that hold the most promise for Blizzard’s future growth, as well as identified areas of overlap across Blizzard and Microsoft Gaming.”

He claimed that layoffs would “enable Blizzard and Xbox to deliver ambitious games… on more platforms and in more places than ever before,” as well as “sustainable growth.”

Spencer’s memo said:

As we move forward in 2024, the leadership of Microsoft Gaming and Activision Blizzard is committed to aligning on a strategy and an execution plan with a sustainable cost structure that will support the whole of our growing business. Together, we’ve set priorities, identified areas of overlap, and ensured that we’re all aligned on the best opportunities for growth.

Laid-off employees will receive severance as per local employment laws, Spencer added.

Additional departures

Blizzard President Mike Ybarra announced via his X profile today that he is leaving the company. Booty’s memo said Ybarra “decided to leave” since the acquisition was completed. Ybarra was a top executive at Microsoft for over 20 years, including leadership positions at Xbox, before he started working at Blizzard in 2019.

Blizzard’s chief design officer, Allen Adham, is also leaving the company, per Booty’s memo.

The changes at the game studio follow Activision Blizzard CEO Bobby Kotick’s exit on January 1.

Microsoft also laid off 10,000 people, or about 4.5 percent of its reported 221,000-person workforce, last year as it worked to complete its Activision Blizzard buy. Microsoft blamed those job cuts on “macroeconomic conditions and changing customer priorities.”

Today’s job losses also join a string of recently announced tech layoffs, including at IBM, Google, SAP, and eBay and in the gaming community platforms Unity, Twitch, and Discord. However, layoffs following Microsoft’s Activision Blizzard deal were somewhat anticipated due to expected redundancies from the Washington tech giant’s biggest merger ever. This week, Microsoft hit a $3 trillion market cap, becoming the second company to do so (after Apple).



Google’s latest AI video generator can render cute animals in implausible situations

An elephant with a party hat—underwater —

Lumiere generates five-second videos that “portray realistic, diverse and coherent motion.”

Still images of AI-generated video examples provided by Google for its Lumiere video synthesis model.

On Tuesday, Google announced Lumiere, an AI video generator that it calls “a space-time diffusion model for realistic video generation” in the accompanying preprint paper. But let’s not kid ourselves: It does a great job at creating videos of cute animals in ridiculous scenarios, such as using roller skates, driving a car, or playing a piano. Sure, it can do more, but it is perhaps the most advanced text-to-animal AI video generator yet demonstrated.

According to Google, Lumiere utilizes unique architecture to generate a video’s entire temporal duration in one go. Or, as the company put it, “We introduce a Space-Time U-Net architecture that generates the entire temporal duration of the video at once, through a single pass in the model. This is in contrast to existing video models which synthesize distant keyframes followed by temporal super-resolution—an approach that inherently makes global temporal consistency difficult to achieve.”

In layperson terms, Google’s tech is designed to handle both the space (where things are in the video) and time (how things move and change throughout the video) aspects simultaneously. So, instead of making a video by putting together many small parts or frames, it can create the entire video, from start to finish, in one smooth process.
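
At the level of tensor shapes, the contrast looks like this (a conceptual sketch using the dimensions from the paper, not Google’s code):

```python
# Keyframes-then-interpolation vs. one pass over the whole clip.
import numpy as np

T, H, W, C = 80, 128, 128, 3    # 5 seconds at 16 fps, base 128x128 resolution

# Keyframe pipeline: synthesize sparse frames, then temporal super-resolution
# must invent the frames in between, risking global inconsistency.
keyframes = np.random.rand(T // 16, H, W, C)     # 5 distant keyframes

# Space-time approach (conceptually, Lumiere-style): a single forward pass
# consumes and produces the entire (T, H, W, C) volume at once.
full_clip = np.random.rand(T, H, W, C)
print(keyframes.shape, full_clip.shape)  # (5, 128, 128, 3) (80, 128, 128, 3)
```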

The official promotional video accompanying the paper “Lumiere: A Space-Time Diffusion Model for Video Generation,” released by Google.

Lumiere can also do plenty of party tricks, which are laid out quite well with examples on Google’s demo page. For example, it can perform text-to-video generation (turning a written prompt into a video), convert still images into videos, generate videos in specific styles using a reference image, apply consistent video editing using text-based prompts, create cinemagraphs by animating specific regions of an image, and offer video inpainting capabilities (for example, it can change the type of dress a person is wearing).

In the Lumiere research paper, the Google researchers state that the AI model outputs five-second long 1024×1024 pixel videos, which they describe as “low-resolution.” Despite those limitations, the researchers performed a user study and claim that Lumiere’s outputs were preferred over existing AI video synthesis models.

As for training data, Google doesn’t say where it got the videos they fed into Lumiere, writing, “We train our T2V [text to video] model on a dataset containing 30M videos along with their text caption. [sic] The videos are 80 frames long at 16 fps (5 seconds). The base model is trained at 128×128.”

A block diagram showing components of the Lumiere AI model, provided by Google.

AI-generated video is still in a primitive state, but it’s been progressing in quality over the past two years. In October 2022, we covered Google’s first publicly unveiled video synthesis model, Imagen Video. It could generate short 1280×768 video clips from a written prompt at 24 frames per second, but the results weren’t always coherent. Before that, Meta debuted its AI video generator, Make-A-Video. In June of last year, Runway’s Gen2 video synthesis model enabled the creation of two-second video clips from text prompts, fueling the creation of surrealistic parody commercials. And in November, we covered Stable Video Diffusion, which can generate short clips from still images.

AI companies often demonstrate video generators with cute animals because generating coherent, non-deformed humans is currently difficult—especially since we, as humans (you are human, right?), are adept at noticing any flaws in human bodies or how they move. Just look at AI-generated Will Smith eating spaghetti.

Judging by Google’s examples (and not having used it ourselves), Lumiere appears to surpass these other AI video generation models. But since Google tends to keep its AI research models close to its chest, we’re not sure when, if ever, the public may have a chance to try it for themselves.

As always, whenever we see text-to-video synthesis models getting more capable, we can’t help but think of the future implications for our Internet-connected society, which is centered around sharing media artifacts—and the general presumption that “realistic” video typically represents real objects in real situations captured by a camera. Future video synthesis tools more capable than Lumiere will make deceptive deepfakes trivially easy to create.

To that end, in the “Societal Impact” section of the Lumiere paper, the researchers write, “Our primary goal in this work is to enable novice users to generate visual content in an creative and flexible way. [sic] However, there is a risk of misuse for creating fake or harmful content with our technology, and we believe that it is crucial to develop and apply tools for detecting biases and malicious use cases in order to ensure a safe and fair use.”
