ChatGPT is leaking passwords from private conversations of its users, Ars reader says

OPENAI SPRINGS A LEAK —

Names of unpublished research papers, presentations, and PHP scripts also leaked.

ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated.

Two of the seven screenshots the reader submitted stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal.

“Horrible, horrible, horrible”

“THIS is so f-ing insane, horrible, horrible, horrible, i cannot believe how poorly this was built in the first place, and the obstruction that is being put in front of me that prevents it from getting better,” the user wrote. “I would fire [redacted name of software] just for this absurdity if it was my choice. This is wrong.”

Besides the candid language and the credentials, the leaked conversation includes the name of the app the employee is troubleshooting and the store number where the problem occurred.

The entire conversation goes well beyond what’s shown in the redacted screenshot above. A link Ars reader Chase Whiteside included showed the chat conversation in its entirety. The URL disclosed additional credential pairs.

The results appeared Monday morning shortly after reader Whiteside had used ChatGPT for an unrelated query.

“I went to make a query (in this case, help coming up with clever names for colors in a palette) and when I returned to access moments later, I noticed the additional conversations,” Whiteside wrote in an email. “They weren’t there when I used ChatGPT just last night (I’m a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren’t from me (and I don’t think they’re from the same user either).”

Other conversations leaked to Whiteside include the name of a presentation someone was working on, details of an unpublished research proposal, and a script using the PHP programming language. The users for each leaked conversation appeared to be different and unrelated to each other. The conversation involving the prescription portal included the year 2020. Dates didn’t appear in the other conversations.

The episode, and others like it, underscore the wisdom of stripping out personal details from queries made to ChatGPT and other AI services whenever possible. Last March, ChatGPT maker OpenAI took the AI chatbot offline after a bug caused the site to show titles from one active user’s chat history to unrelated users.

In November, researchers published a paper reporting how they used queries to prompt ChatGPT into divulging email addresses, phone and fax numbers, physical addresses, and other private data that was included in material used to train the ChatGPT large language model.

Concerned about the possibility of proprietary or private data leakage, companies, including Apple, have restricted their employees’ use of ChatGPT and similar sites.

As mentioned in an article from December, when multiple people found that Ubiquiti’s UniFi devices broadcast private video belonging to unrelated users, these sorts of experiences are as old as the Internet itself. As explained in the article:

The precise root causes of this type of system error vary from incident to incident, but they often involve “middlebox” devices, which sit between the front- and back-end devices. To improve performance, middleboxes cache certain data, including the credentials of users who have recently logged in. When mismatches occur, credentials for one account can be mapped to a different account.
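The caching failure described in that excerpt can be illustrated with a toy model. The sketch below (all names and data invented) shows a middlebox that keys its cache on the URL path alone, omitting the session identifier, so the second user receives data generated for the first:

```python
# Toy model of a middlebox that caches responses by URL path alone.
# Because the session ID is not part of the cache key, user B receives
# the response that was generated for user A. Hypothetical illustration.

class BrokenMiddlebox:
    def __init__(self, backend):
        self.backend = backend
        self.cache = {}

    def get(self, path, session_id):
        # BUG: the cache key omits session_id, so responses are shared
        # across accounts once one user's response has been cached.
        if path not in self.cache:
            self.cache[path] = self.backend(path, session_id)
        return self.cache[path]

def backend(path, session_id):
    # The origin server correctly scopes data to the session.
    return f"private data for {session_id} at {path}"

proxy = BrokenMiddlebox(backend)
print(proxy.get("/history", session_id="alice"))  # alice's data
print(proxy.get("/history", session_id="bob"))    # leaks alice's data to bob
```

The fix is equally simple: include the session or account identifier in the cache key (for example, `self.cache[(path, session_id)]`) so a cached response can never cross account boundaries.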

An OpenAI representative said the company was investigating the report.

OpenAI and Common Sense Media partner to protect teens from AI harms and misuse

Adventures in chatbusting —

Site gave ChatGPT 3 stars and 48% privacy score: “Best used for creativity, not facts.”

On Monday, OpenAI announced a partnership with the nonprofit Common Sense Media to create AI guidelines and educational materials targeted at parents, educators, and teens. It includes the curation of family-friendly GPTs in OpenAI’s GPT store. The collaboration aims to address concerns about the impacts of AI on children and teenagers.

Known for its reviews of films and TV shows aimed at parents seeking appropriate media for their kids to watch, Common Sense Media recently branched out into AI and has been reviewing AI assistants on its site.

“AI isn’t going anywhere, so it’s important that we help kids understand how to use it responsibly,” Common Sense Media wrote on X. “That’s why we’ve partnered with @OpenAI to help teens and families safely harness the potential of AI.”

OpenAI CEO Sam Altman and Common Sense Media CEO James Steyer announced the partnership onstage in San Francisco at the Common Sense Summit for America’s Kids and Families, an event that was well-covered by media members on the social media site X.

For his part, Altman offered a canned statement in the press release, saying, “AI offers incredible benefits for families and teens, and our partnership with Common Sense will further strengthen our safety work, ensuring that families and teens can use our tools with confidence.”

The announcement feels slightly non-specific in the official news release, with Steyer offering, “Our guides and curation will be designed to educate families and educators about safe, responsible use of ChatGPT, so that we can collectively avoid any unintended consequences of this emerging technology.”

The partnership seems aimed mostly at bringing a patina of family-friendliness to OpenAI’s GPT store, with the most solid reveal being the aforementioned fact that Common Sense Media will help with the “curation of family-friendly GPTs in the GPT Store based on Common Sense ratings and standards.”

Common Sense AI reviews

As mentioned above, Common Sense Media began reviewing AI assistants on its site late last year. This puts Common Sense Media in an interesting position with potential conflicts of interest regarding the new partnership with OpenAI. However, it doesn’t seem to be offering any favoritism to OpenAI so far.

For example, Common Sense Media’s review of ChatGPT calls the AI assistant “A powerful, at times risky chatbot for people 13+ that is best used for creativity, not facts.” It labels ChatGPT as being suitable for ages 13 and up (a minimum also specified in OpenAI’s Terms of Service) and gives the OpenAI assistant three out of five stars. ChatGPT also scores a 48 percent privacy rating (which is oddly shown as 55 percent on another page that goes into privacy details). The review we cited was last updated on October 13, 2023, as of this writing.

For reference, Google Bard gets a three-star overall rating and a 75 percent privacy rating in its Common Sense Media review. Stable Diffusion, the image synthesis model, nets a one-star rating with the description, “Powerful image generator can unleash creativity, but is wildly unsafe and perpetuates harm.” OpenAI’s DALL-E gets two stars and a 48 percent privacy rating.

The information that Common Sense Media includes about each AI model appears relatively accurate and detailed (and the organization cited an Ars Technica article as a reference in one explanation), so the reviews feel fair, even in the face of the OpenAI partnership. Given the low scores, it seems that most AI models aren’t off to a great start, but that may change. It’s still early days in generative AI.

The life and times of Cozy Bear, the Russian hackers who just hit Microsoft and HPE

FROM RUSSIA WITH ROOT —

Hacks by Kremlin-backed group continue to hit hard.

Hewlett Packard Enterprise (HPE) said Wednesday that Kremlin-backed actors hacked into the email accounts of its security personnel and other employees last May—and maintained surreptitious access until December. The disclosure was the second revelation of a major corporate network breach by the hacking group in five days.

The hacking group that hit HPE is the same one that Microsoft said Friday broke into its corporate network in November and monitored email accounts of senior executives and security team members until being driven out earlier this month. Microsoft tracks the group as Midnight Blizzard. (Under the company’s recently retired threat actor naming convention, which was based on chemical elements, the group was known as Nobelium.) But it is perhaps better known by the name Cozy Bear—though researchers have also dubbed it APT29, the Dukes, Cloaked Ursa, and Dark Halo.

“On December 12, 2023, Hewlett Packard Enterprise was notified that a suspected nation-state actor, believed to be the threat actor Midnight Blizzard, the state-sponsored actor also known as Cozy Bear, had gained unauthorized access to HPE’s cloud-based email environment,” company lawyers wrote in a filing with the Securities and Exchange Commission. “The Company, with assistance from external cybersecurity experts, immediately activated our response process to investigate, contain, and remediate the incident, eradicating the activity. Based on our investigation, we now believe that the threat actor accessed and exfiltrated data beginning in May 2023 from a small percentage of HPE mailboxes belonging to individuals in our cybersecurity, go-to-market, business segments, and other functions.”

An HPE representative said in an email that Cozy Bear’s initial entry into the network was through “a compromised, internal HPE Office 365 email account [that] was leveraged to gain access.” The representative declined to elaborate. The representative also declined to say how HPE discovered the breach.

Cozy Bear hacking its way into the email systems of two of the world’s most powerful companies and monitoring top employees’ accounts for months aren’t the only similarities between the two events. Both breaches also involved compromising a single device on each corporate network, then escalating that toehold to the network itself. From there, Cozy Bear camped out undetected for months. The HPE intrusion was all the more impressive because Wednesday’s disclosure said that the hackers also gained access to SharePoint servers in May. Even after HPE detected and contained that breach a month later, it would take HPE another six months to discover the compromised email accounts.

The pair of disclosures, coming within five days of each other, may create the impression that there has been a recent flurry of hacking activity. But Cozy Bear has actually been one of the most active nation-state groups since at least 2010. In the intervening 14 years, it has waged an almost constant series of attacks, mostly on the networks of governmental organizations and the technology companies that supply them. Multiple intelligence services and private research companies have identified the hacking group as an arm of Russia’s Foreign Intelligence Service, also known as the SVR.

The life and times of Cozy Bear (so far)

In its earliest years, Cozy Bear operated in relative obscurity—precisely the domain it prefers—as it hacked mostly Western governmental agencies and related organizations such as political think tanks and governmental subcontractors. In 2013, researchers from security firm Kaspersky unearthed MiniDuke, a sophisticated piece of malware that had taken hold of 60 government agencies, think tanks, and other high-profile organizations in 23 countries, including the US, Hungary, Ukraine, Belgium, and Portugal.

MiniDuke was notable for its odd combination of advanced programming and the gratuitous references to literature found embedded into its code. (It contained strings that alluded to Dante Alighieri’s Divine Comedy and to 666, the Mark of the Beast discussed in a verse from the Book of Revelation.) Written in assembly, employing multiple levels of encryption, and relying on hijacked Twitter accounts and automated Google searches to maintain stealthy communications with command-and-control servers, MiniDuke was among the most advanced pieces of malware found at the time.

It wasn’t immediately clear who was behind the mysterious malware—another testament to the stealth of its creators. In 2015, however, researchers linked MiniDuke—and seven other pieces of previously unidentified malware—to Cozy Bear. After a half-decade of lurking, the shadowy group was suddenly brought into the light of day.

Cozy Bear once again came to prominence the following year when researchers discovered the group (along with Fancy Bear, a separate Russian-state hacking group) inside the servers of the Democratic National Committee, looking for intelligence such as opposition research into Donald Trump, the Republican nominee for president at the time. The hacking group resurfaced in the days following Trump’s election victory that year with a major spear-phishing blitz that targeted dozens of organizations in government, military, defense contracting, media, and other industries.

One of Cozy Bear’s crowning achievements came in late 2020 with the discovery of an extensive supply chain attack that targeted customers of SolarWinds, the Austin, Texas, maker of network management tools. After compromising SolarWinds’ software build system, the hacking group pushed infected updates to roughly 18,000 customers. The hackers then used the updates to compromise nine federal agencies and about 100 private companies, White House officials have said.

Cozy Bear has remained active, with multiple campaigns coming to light in 2021, including one that used zero-day vulnerabilities to infect fully updated iPhones. Last year, the group devoted much of its time to hacks of Ukraine.

In major gaffe, hacked Microsoft test account was assigned admin privileges

The hackers who recently broke into Microsoft’s network and monitored top executives’ email for two months did so by gaining access to an aging test account with administrative privileges, a major gaffe on the company’s part, a researcher said.

The new detail was provided in vaguely worded language included in a post Microsoft published on Thursday. It expanded on a disclosure Microsoft published late last Friday. Russian state hackers, Microsoft said, used a technique known as password spraying to exploit a weak credential for logging into a “legacy non-production test tenant account” that wasn’t protected by multifactor authentication. From there, they somehow acquired the ability to access email accounts that belonged to senior executives and employees working in security and legal teams.
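Password spraying is worth unpacking, since it explains how a weak credential without multifactor authentication falls: rather than hammering one account with many guesses (which trips lockout policies), the attacker tries a handful of common passwords across many accounts, staying under per-account failure limits. A minimal sketch of the idea, with invented usernames and passwords:

```python
# Sketch of why password spraying evades per-account lockout policies.
# All usernames and passwords here are invented for illustration.

accounts = {"user1": "Winter2023!", "user2": "hunter2", "user3": "Winter2023!"}
LOCKOUT_THRESHOLD = 5  # a typical per-account failed-attempt limit

def spray(common_passwords, accounts):
    """Try each common password once per account: at most one failure
    per account per password, staying below lockout thresholds."""
    hits = []
    attempts_per_account = {u: 0 for u in accounts}
    for pw in common_passwords:
        for user, real_pw in accounts.items():
            attempts_per_account[user] += 1
            if pw == real_pw:
                hits.append((user, pw))
    return hits, attempts_per_account

hits, attempts = spray(["Winter2023!", "Password1"], accounts)
print(hits)                    # accounts that used a sprayed password
print(max(attempts.values()))  # per-account attempts stay well under the limit
```

This is why the standard defenses are multifactor authentication and detection of failed logins aggregated across accounts, not just per-account lockouts.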

A “pretty big config error”

In Thursday’s post updating customers on findings from its ongoing investigation, Microsoft provided more details on how the hackers achieved this monumental escalation of access. The hackers, part of a group Microsoft tracks as Midnight Blizzard, gained persistent access to the privileged email accounts by abusing the OAuth authorization protocol, which is used industry-wide to allow an array of apps to access resources on a network. After compromising the test tenant, Midnight Blizzard used it to create a malicious app and assign it rights to access every email address on Microsoft’s Office 365 email service.

In Thursday’s update, Microsoft officials said as much, although in language that largely obscured the extent of the major blunder. They wrote:

Threat actors like Midnight Blizzard compromise user accounts to create, modify, and grant high permissions to OAuth applications that they can misuse to hide malicious activity. The misuse of OAuth also enables threat actors to maintain access to applications, even if they lose access to the initially compromised account. Midnight Blizzard leveraged their initial access to identify and compromise a legacy test OAuth application that had elevated access to the Microsoft corporate environment. The actor created additional malicious OAuth applications. They created a new user account to grant consent in the Microsoft corporate environment to the actor controlled malicious OAuth applications. The threat actor then used the legacy test OAuth application to grant them the Office 365 Exchange Online full_access_as_app role, which allows access to mailboxes. [Emphasis added.]

Kevin Beaumont—a researcher and security professional with decades of experience, including a stint working for Microsoft—pointed out on Mastodon that the only way for an account to assign the all-powerful full_access_as_app role to an OAuth app is for the account to have administrator privileges. “Somebody,” he said, “made a pretty big config error in production.”
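To make Beaumont’s point concrete: an application-level grant like full_access_as_app is not scoped to any signed-in user, so a token issued to the app reaches every mailbox in the tenant. The toy model below contrasts delegated (per-user) and application (tenant-wide) permissions; it is a conceptual sketch with invented names, not the real Microsoft identity platform:

```python
# Toy model of delegated vs. application OAuth permissions.
# Conceptual sketch only -- not the actual Microsoft identity platform.

mailboxes = {"ceo": ["merger plans"], "security-lead": ["incident report"]}

def read_mail(token, mailbox):
    if token["type"] == "delegated":
        # A delegated token acts on behalf of one signed-in user and is
        # limited to that user's own mailbox.
        if token["user"] != mailbox:
            raise PermissionError("delegated token is scoped to one user")
    elif token["type"] == "application":
        # An app-level role like full_access_as_app has no user scoping,
        # so a compromised app can read any mailbox in the tenant.
        pass
    return mailboxes[mailbox]

app_token = {"type": "application", "app": "legacy-test-app"}
print(read_mail(app_token, "ceo"))            # works for every mailbox
print(read_mail(app_token, "security-lead"))  # no per-user check applies
```

This is also why auditing which OAuth apps hold application-level roles, and which accounts are able to grant them, matters as much as protecting individual passwords.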

OpenAI updates ChatGPT-4 model with potential fix for AI “laziness” problem

Break’s over —

Also, new GPT-3.5 Turbo model, lower API prices, and other model updates.

On Thursday, OpenAI announced updates to the AI models that power its ChatGPT assistant. Amid less noteworthy updates, OpenAI tucked in a mention of a potential fix to a widely reported “laziness” problem seen in GPT-4 Turbo since its release in November. The company also announced a new GPT-3.5 Turbo model (with lower pricing), a new embedding model, an updated moderation model, and a new way to manage API usage.

“Today, we are releasing an updated GPT-4 Turbo preview model, gpt-4-0125-preview. This model completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of ‘laziness’ where the model doesn’t complete a task,” writes OpenAI in its blog post.

Since the launch of GPT-4 Turbo, a large number of ChatGPT users have reported that the GPT-4 version of the AI assistant has been declining to do tasks (especially coding tasks) with the same exhaustive depth as it did in earlier versions of GPT-4. We’ve seen this behavior ourselves while experimenting with ChatGPT over time.

OpenAI has never offered an official explanation for this change in behavior, but OpenAI employees have previously acknowledged on social media that the problem is real, and the ChatGPT X account wrote in December, “We’ve heard all your feedback about GPT4 getting lazier! we haven’t updated the model since Nov 11th, and this certainly isn’t intentional. model behavior can be unpredictable, and we’re looking into fixing it.”

We reached out to OpenAI asking if it could provide an official explanation for the laziness issue but did not receive a response by press time.

New GPT-3.5 Turbo, other updates

Elsewhere in OpenAI’s blog update, the company announced a new version of GPT-3.5 Turbo (gpt-3.5-turbo-0125), which it says will offer “various improvements including higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls.”

And the cost of GPT-3.5 Turbo through OpenAI’s API will decrease for the third time this year “to help our customers scale.” New input token prices are 50 percent less, at $0.0005 per 1,000 input tokens, and output prices are 25 percent less, at $0.0015 per 1,000 output tokens.
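At those rates, per-request costs are straightforward to estimate. A quick sketch using the prices quoted above:

```python
# New GPT-3.5 Turbo API prices quoted above, in USD per 1,000 tokens.
INPUT_PRICE = 0.0005   # per 1K input tokens (50 percent less than before)
OUTPUT_PRICE = 0.0015  # per 1K output tokens (25 percent less than before)

def request_cost(input_tokens, output_tokens):
    """Estimated cost of a single API call at the new prices."""
    return (input_tokens / 1000) * INPUT_PRICE + (output_tokens / 1000) * OUTPUT_PRICE

# Example: a request with a 2,000-token prompt and a 500-token reply.
cost = request_cost(2000, 500)
print(f"${cost:.5f}")  # $0.00175
```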

Lower token prices for GPT-3.5 Turbo will make operating third-party bots significantly less expensive, but the GPT-3.5 model is generally more likely to confabulate than GPT-4 Turbo. So we might see more scenarios like Quora’s bot telling people that eggs can melt (although the instance used a now-deprecated GPT-3 model called text-davinci-003). If GPT-4 Turbo API prices drop over time, some of those hallucination issues with third parties might eventually go away.

OpenAI also announced new embedding models, text-embedding-3-small and text-embedding-3-large, which convert content into numerical sequences, aiding in machine learning tasks like clustering and retrieval. And an updated moderation model, text-moderation-007, is part of the company’s API that “allows developers to identify potentially harmful text,” according to OpenAI.

Finally, OpenAI is rolling out improvements to its developer platform, introducing new tools for managing API keys and a new dashboard for tracking API usage. Developers can now assign permissions to API keys from the API keys page, helping to clamp down on misuse of API keys (if they get into the wrong hands) that can potentially cost developers lots of money. The API dashboard allows devs to “view usage on a per feature, team, product, or project level, simply by having separate API keys for each.”
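The one-key-per-project pattern makes usage attribution a simple aggregation. A hypothetical sketch (key names and costs invented) of how a developer might tally spend per key from their own request logs:

```python
from collections import defaultdict

# Hypothetical request log: each entry records which API key made the
# call and what it cost. With one key per team or feature, per-key
# totals directly answer "who is spending what."
request_log = [
    {"api_key": "key-search-feature", "cost_usd": 0.002},
    {"api_key": "key-support-bot", "cost_usd": 0.010},
    {"api_key": "key-search-feature", "cost_usd": 0.003},
]

def usage_by_key(log):
    """Sum cost per API key across the log."""
    totals = defaultdict(float)
    for entry in log:
        totals[entry["api_key"]] += entry["cost_usd"]
    return dict(totals)

print(usage_by_key(request_log))
```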

As the media world seemingly swirls around the company with controversies and think pieces about the implications of its tech, releases like these show that the dev teams at OpenAI are still rolling along as usual with updates at a fairly regular pace. Despite the company almost completely falling apart late last year, it seems that, under the hood, it’s business as usual for OpenAI.

Microsoft cancels Blizzard survival game, lays off 1,900

Survival game won’t survive —

Job cuts hit Xbox, ZeniMax businesses, too, reports say.

Blizzard shared this image teasing the now-canceled game in 2022. (Blizzard Entertainment/Twitter)

The survival game that Blizzard announced it was working on in January 2022 has reportedly been canceled. The cut comes as Microsoft is slashing jobs a little over three months after closing its $69 billion Activision Blizzard acquisition.

Blizzard’s game didn’t have a title yet, but Blizzard said it would be for PC and console and introduce new stories and characters. In January 2022, Blizzard put out a call for workers to help build the game.

The game’s axing was revealed today in an internal memo from Microsoft Gaming CEO Phil Spencer seen by publications including The Verge and CNBC that said:

Blizzard is ending development on its survival game project and will be shifting some of the people working on it to one of several promising new projects Blizzard has in the early stages of development.

Spencer said Microsoft was laying off 1,900 people starting today, with workers continuing to receive notifications in the coming days. The layoffs affect 8.64 percent of Microsoft’s 22,000-employee gaming division.

Another internal memo, written by Matt Booty, Microsoft’s game content and studios president, and seen by The Verge, said the layoffs are hitting “multiple” Blizzard teams, “including development teams, shared service organizations and corporate functions.” In January 2022, after plans for the merger were first announced, Bobby Kotick, then-CEO of Activision Blizzard, reportedly told employees at a meeting that Microsoft was “committed to trying to retain as many of our people as possible.”

Spencer said workers in Microsoft’s Xbox and ZeniMax Media businesses will also be impacted. Microsoft acquired ZeniMax, which owns Bethesda Softworks, for $7.5 billion in a deal that closed in March 2021.

After a bumpy ride with global regulators, Microsoft’s Activision Blizzard purchase closed in October. Booty’s memo said the job cuts announced today “reflect a focus on products and strategies that hold the most promise for Blizzard’s future growth, as well as identified areas of overlap across Blizzard and Microsoft Gaming.”

He claimed that layoffs would “enable Blizzard and Xbox to deliver ambitious games… on more platforms and in more places than ever before,” as well as “sustainable growth.”

Spencer’s memo said:

As we move forward in 2024, the leadership of Microsoft Gaming and Activision Blizzard is committed to aligning on a strategy and an execution plan with a sustainable cost structure that will support the whole of our growing business. Together, we’ve set priorities, identified areas of overlap, and ensured that we’re all aligned on the best opportunities for growth.

Laid-off employees will receive severance as per local employment laws, Spencer added.

Additional departures

Blizzard President Mike Ybarra announced via his X profile today that he is leaving the company. Booty’s memo said Ybarra “decided to leave” since the acquisition was completed. Ybarra was a top executive at Microsoft for over 20 years, including leadership positions at Xbox, before he started working at Blizzard in 2019.

Blizzard’s chief design officer, Allen Adham, is also leaving the company, per Booty’s memo.

The changes at the game studio follow Activision Blizzard CEO Bobby Kotick’s exit on January 1.

Microsoft also laid off 10,000 people, or about 4.5 percent of its reported 221,000-person workforce, last year as it worked to complete its Activision Blizzard buy. Microsoft blamed those job cuts on “macroeconomic conditions and changing customer priorities.”

Today’s job losses also join a string of recently announced tech layoffs, including at IBM, Google, SAP, and eBay and in the gaming community platforms Unity, Twitch, and Discord. However, layoffs following Microsoft’s Activision Blizzard deal were somewhat anticipated due to expected redundancies from the Washington tech giant’s biggest merger ever. This week, Microsoft hit a $3 trillion market cap, becoming the second company to do so (after Apple).

Google’s latest AI video generator can render cute animals in implausible situations

An elephant with a party hat—underwater —

Lumiere generates five-second videos that “portray realistic, diverse and coherent motion.”

Still images of AI-generated video examples provided by Google for its Lumiere video synthesis model.

On Tuesday, Google announced Lumiere, an AI video generator that it calls “a space-time diffusion model for realistic video generation” in the accompanying preprint paper. But let’s not kid ourselves: It does a great job at creating videos of cute animals in ridiculous scenarios, such as using roller skates, driving a car, or playing a piano. Sure, it can do more, but it is perhaps the most advanced text-to-animal AI video generator yet demonstrated.

According to Google, Lumiere utilizes unique architecture to generate a video’s entire temporal duration in one go. Or, as the company put it, “We introduce a Space-Time U-Net architecture that generates the entire temporal duration of the video at once, through a single pass in the model. This is in contrast to existing video models which synthesize distant keyframes followed by temporal super-resolution—an approach that inherently makes global temporal consistency difficult to achieve.”

In layperson terms, Google’s tech is designed to handle both the space (where things are in the video) and time (how things move and change throughout the video) aspects simultaneously. So, instead of making a video by putting together many small parts or frames, it can create the entire video, from start to finish, in one smooth process.
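The difference from keyframe-based pipelines can be made concrete with tensor shapes. In the schematic below (not Google’s code; the shapes follow the 80-frame, 128×128 base figures from the paper), a conventional pipeline produces sparse keyframes and fills the gaps by temporal upsampling, while a single-pass model emits the full temporal extent at once:

```python
import numpy as np

FRAMES, H, W, C = 80, 128, 128, 3  # 5 s at 16 fps, base resolution per the paper

def keyframe_pipeline(num_keyframes=8):
    """Conventional approach: generate sparse keyframes, then temporal
    super-resolution fills the gaps. Global consistency between distant
    keyframes is hard to guarantee."""
    keyframes = np.random.rand(num_keyframes, H, W, C)
    # Naive temporal upsampling: repeat each keyframe to fill 80 frames.
    video = np.repeat(keyframes, FRAMES // num_keyframes, axis=0)
    return video

def single_pass_model():
    """Lumiere-style idea: produce the entire temporal duration in one
    pass, so the model reasons about all 80 frames jointly. (Schematic.)"""
    return np.random.rand(FRAMES, H, W, C)

assert keyframe_pipeline().shape == single_pass_model().shape == (FRAMES, H, W, C)
```

The output shapes match, but only the single-pass approach lets the model treat all 80 frames jointly, which is the temporal-consistency advantage the paper claims.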

The official promotional video accompanying the paper “Lumiere: A Space-Time Diffusion Model for Video Generation,” released by Google.

Lumiere can also do plenty of party tricks, which are laid out quite well with examples on Google’s demo page. For example, it can perform text-to-video generation (turning a written prompt into a video), convert still images into videos, generate videos in specific styles using a reference image, apply consistent video editing using text-based prompts, create cinemagraphs by animating specific regions of an image, and offer video inpainting capabilities (for example, it can change the type of dress a person is wearing).

In the Lumiere research paper, the Google researchers state that the AI model outputs five-second-long 1024×1024 pixel videos, which they describe as “low-resolution.” Despite those limitations, the researchers performed a user study and claim that Lumiere’s outputs were preferred over existing AI video synthesis models.

As for training data, Google doesn’t say where it got the videos it fed into Lumiere, writing, “We train our T2V [text to video] model on a dataset containing 30M videos along with their text caption. [sic] The videos are 80 frames long at 16 fps (5 seconds). The base model is trained at 128×128.”

A block diagram showing components of the Lumiere AI model, provided by Google.

AI-generated video is still in a primitive state, but it’s been progressing in quality over the past two years. In October 2022, we covered Google’s first publicly unveiled video synthesis model, Imagen Video. It could generate short 1280×768 video clips from a written prompt at 24 frames per second, but the results weren’t always coherent. Before that, Meta debuted its AI video generator, Make-A-Video. In June of last year, Runway’s Gen2 video synthesis model enabled the creation of two-second video clips from text prompts, fueling the creation of surrealistic parody commercials. And in November, we covered Stable Video Diffusion, which can generate short clips from still images.

AI companies often demonstrate video generators with cute animals because generating coherent, non-deformed humans is currently difficult—especially since we, as humans (you are human, right?), are adept at noticing any flaws in human bodies or how they move. Just look at AI-generated Will Smith eating spaghetti.

Judging by Google’s examples (and not having used it ourselves), Lumiere appears to surpass these other AI video generation models. But since Google tends to keep its AI research models close to its chest, we’re not sure when, if ever, the public may have a chance to try it for themselves.

As always, whenever we see text-to-video synthesis models getting more capable, we can’t help but think of the future implications for our Internet-connected society, which is centered around sharing media artifacts—and the general presumption that “realistic” video typically represents real objects in real situations captured by a camera. Future video synthesis tools more capable than Lumiere will make deceptive deepfakes trivially easy to create.

To that end, in the “Societal Impact” section of the Lumiere paper, the researchers write, “Our primary goal in this work is to enable novice users to generate visual content in an creative and flexible way. [sic] However, there is a risk of misuse for creating fake or harmful content with our technology, and we believe that it is crucial to develop and apply tools for detecting biases and malicious use cases in order to ensure a safe and fair use.”

Google’s latest AI video generator can render cute animals in implausible situations Read More »


Mass exploitation of Ivanti VPNs is infecting networks around the globe

THIS IS NOT A DRILL —

Orgs that haven’t acted yet should, even if it means suspending VPN services.

Cybercriminals or anonymous hackers use malware on mobile phones to hack personal and business passwords online.

Getty Images

Hackers suspected of working for the Chinese government are mass exploiting a pair of critical vulnerabilities that give them complete control of virtual private network appliances sold by Ivanti, researchers said.

As of Tuesday morning, security company Censys detected 492 Ivanti VPNs that remained infected out of 26,000 devices exposed to the Internet. Nearly a quarter of the compromised VPNs—121—resided in the US. The three countries with the next biggest concentrations were Germany, with 26, South Korea, with 24, and China, with 21.


Microsoft’s cloud service hosted the most infected devices, with 13, followed by cloud environments from Amazon, with 12, and Comcast, with 10.


“We conducted a secondary scan on all Ivanti Connect Secure servers in our dataset and found 412 unique hosts with this backdoor,” Censys researchers wrote. “Additionally, we found 22 distinct ‘variants’ (or unique callback methods), which could indicate multiple attackers or a single attacker evolving their tactics.”

In an email, members of the Censys research team said evidence suggests that the people infecting the devices are motivated by espionage objectives. That theory aligns with reports published recently by security firms Volexity and Mandiant. Volexity researchers said they suspect the threat actor, tracked as UTA0178, is a “Chinese nation-state-level threat actor.” Mandiant, which tracks the attack group as UNC5221, said the hackers are pursuing an “espionage-motivated APT campaign.”

All civilian governmental agencies have been mandated to take corrective action to prevent exploitation. Federal Civilian Executive Branch agencies had until 11:59 pm Monday to comply with the mandate, which was issued Friday by the Cybersecurity and Infrastructure Security Agency. Ivanti has yet to release patches to fix the vulnerabilities. In their absence, Ivanti, CISA, and security companies are urging affected users to follow mitigation and recovery guidance from Ivanti, which includes preventative measures to block exploitation and steps for customers to rebuild and upgrade their systems if they detect exploitation.

“This directive is no surprise, considering the worldwide mass exploitation observed since Ivanti initially revealed the vulnerabilities on January 10,” Censys researchers wrote. “These vulnerabilities are particularly serious given the severity, widespread exposure of these systems, and the complexity of mitigation—especially given the absence of an official patch from the vendor as of the current writing.”

When Ivanti disclosed the vulnerabilities on January 10, the company said it would release patches on a staggered basis starting this week. The company has not issued a public statement since then confirming whether the patches remain on schedule.

VPNs are ideal devices for hackers to infect because these always-on appliances sit at the very edge of the network, where they accept incoming connections. Because the VPNs must communicate with broad parts of the internal network, hackers who compromise the devices can then expand their presence to other areas. When exploited in unison, the vulnerabilities, tracked as CVE-2023-46805 and CVE-2024-21887, allow attackers to remotely execute code on servers. All supported versions of Ivanti Connect Secure—often abbreviated as ICS and formerly known as Pulse Secure—are affected.

The ongoing attacks use the exploits to install a host of malware that acts as a backdoor. The hackers then use the malware to harvest as many credentials as possible belonging to various employees and devices on the infected network and to rifle around the network. Despite the use of this malware, the attackers largely employ an approach known as “living off the land,” which uses legitimate software and tools so they’re harder to detect.

The posts linked above from Volexity and Mandiant provide extensive descriptions of how the malware behaves and methods for detecting infections.

Given the severity of the vulnerabilities and the consequences that follow when they’re exploited, all users of affected products should prioritize mitigation of these vulnerabilities, even if that means temporarily suspending VPN usage.



A “robot” should be chemical, not steel, argues man who coined the word

Dispatch from 1935 —

Čapek: “The world needed mechanical robots, for it believes in machines more than it believes in life.”

In 1921, Czech playwright Karel Čapek and his brother Josef invented the word “robot” in a sci-fi play called R.U.R. (short for Rossum’s Universal Robots). As Evan Ackerman points out in IEEE Spectrum, Čapek wasn’t happy about how the term’s meaning evolved to denote mechanical entities, straying from his original concept of artificial human-like beings based on chemistry.

In a newly translated column called “The Author of the Robots Defends Himself,” published in Lidové Noviny on June 9, 1935, Čapek expresses his frustration about how his original vision for robots was being subverted. His arguments still apply to both modern robotics and AI. In this column, he referred to himself in the third person:

For his robots were not mechanisms. They were not made of sheet metal and cogwheels. They were not a celebration of mechanical engineering. If the author was thinking of any of the marvels of the human spirit during their creation, it was not of technology, but of science. With outright horror, he refuses any responsibility for the thought that machines could take the place of people, or that anything like life, love, or rebellion could ever awaken in their cogwheels. He would regard this somber vision as an unforgivable overvaluation of mechanics or as a severe insult to life.

This recently resurfaced article comes courtesy of a new English translation of Čapek’s play called R.U.R. and the Vision of Artificial Life accompanied by 20 essays on robotics, philosophy, politics, and AI. The editor, Jitka Čejková, a professor at the Chemical Robotics Laboratory in Prague, aligns her research with Čapek’s original vision. She explores “chemical robots”—microparticles resembling living cells—which she calls “liquid robots.”

“An assistant of inventor Captain Richards works on the robot the Captain has invented, which speaks, answers questions, shakes hands, tells the time and sits down when it’s told to.” – September 1928

In Čapek’s 1935 column, he clarifies that his robots were not intended to be mechanical marvels, but organic products of modern chemistry, akin to living matter. Čapek emphasizes that he did not want to glorify mechanical systems but to explore the potential of science, particularly chemistry. He refutes the idea that machines could replace humans or develop emotions and consciousness.

The author of the robots would regard it as an act of scientific bad taste if he had brought something to life with brass cogwheels or created life in the test tube; the way he imagined it, he created only a new foundation for life, which began to behave like living matter, and which could therefore have become a vehicle of life—but a life which remains an unimaginable and incomprehensible mystery. This life will reach its fulfillment only when (with the aid of considerable inaccuracy and mysticism) the robots acquire souls. From which it is evident that the author did not invent his robots with the technological hubris of a mechanical engineer, but with the metaphysical humility of a spiritualist.

The reason for the transition from chemical to mechanical in the public perception of robots isn’t entirely clear (though Čapek does mention a Russian film that went the mechanical route and was likely influential). The early 20th century was a period of rapid industrialization and technological advancement that saw the emergence of complex machinery and electronic automation. That probably shaped how the public and the scientific community pictured autonomous beings, leading them to associate robots with mechanical and electronic devices rather than chemical creations.

The 1935 piece is full of interesting quotes (you can read the whole thing in IEEE Spectrum or here), and we’ve grabbed a few highlights below that you can conveniently share with your robot-loving friends to blow their minds:

  • “He pronounces that his robots were created quite differently—that is, by a chemical path”
  • “He has learned, without any great pleasure, that genuine steel robots have started to appear”
  • “Well then, the author cannot be blamed for what might be called the worldwide humbug over the robots.”
  • “The world needed mechanical robots, for it believes in machines more than it believes in life; it is fascinated more by the marvels of technology than by the miracle of life.”

So it seems, over 100 years later, that we’ve gotten it wrong all along. Čapek’s vision, rooted in chemical synthesis and the philosophical mysteries of life, offers a different narrative from the predominant mechanical and electronic interpretation of robots we know today. But judging from what Čapek wrote, it sounds like he would be firmly against AI takeover scenarios. In fact, Čapek, who died in 1938, probably would have considered them impossible.



OpenWrt, now 20 years old, is crafting its own future-proof reference hardware

It’s time for a new blue box —

There are, as you might expect, a few disagreements about what’s most important.

Linksys WRT54G

Failing an image of the proposed reference hardware by the OpenWrt group, let us gaze upon where this all started: inside a device that tried to quietly use open source software without crediting or releasing it.

Jim Salter

OpenWrt, the open source firmware that sprang from Linksys’ use of open source code in its iconic WRT54G router and subsequent release of its work, is 20 years old this year. To keep the project going, lead developers have proposed creating a “fully upstream supported hardware design,” one that would prevent the need for handling “binary blobs” in modern router hardware and let DIY router enthusiasts forge their own path.

OpenWrt project members, 13 of whom signed off on the hardware proposal, are keeping the “OpenWrt One” simple while including “some nice features we believe all OpenWrt supported platforms should have,” among them “almost unbrickable” low-level firmware, an on-board real-time clock with a battery backup, and USB-PD power. The price should be under $100, with the schematics and code publicly available.

But OpenWrt will not be producing or selling these boards, “for a ton of reasons.” The group is looking to the Banana Pi makers to distribute a fitting device, with every device producing a donation to the Software Freedom Conservancy earmarked for OpenWrt. That money could then be used for hosting expenses, or “maybe an OpenWrt summit.”

The OpenWrt team preemptively answers some questions about its design choices. There are two flash chips on the board to allow for both a main loader and a write-protected recovery. There’s no USB 3.0 because all the USB and PCIe buses are shared on the board. And the emphasis on a battery-backed RTC exists because “we believe there are many things a Wi-Fi … device should have on-board by default.”

But forum members have more questions, some of them beyond the scope of what OpenWrt is promising. Some want to see a device that resembles the blue boxes of old, with four or five Ethernet ports built in. Others ask about the lack of PoE support, or of USB 3.0 for network-attached drives. Some wonder why the proposed device includes NVMe storage. And quite a few ask why the device pairs a 1Gbps port with a 2.5Gbps one, given that anyone with Internet faster than 1Gbps will be throttled, since the 2.5Gbps port will likely be used for wireless output.

There is no expected release date, though it’s noted that it’s the “first” community-driven reference hardware.

OpenWrt, which has existed in parallel with the DD-WRT project that sprang from the same firmware moment, powers a number of custom-made routers. It and other open source router firmware faced an uncertain future in the mid-2010s, when Federal Communications Commission rules, or at least manufacturers’ interpretation of them, made them seem potentially illegal. Because open firmware often allowed for pushing wireless radios beyond their licensed radio frequency parameters, firms like TP-Link blocked them, while Linksys (at that point owned by Belkin) continued to allow them. In 2020, OpenWrt patched a code-execution exploit due to unencrypted update channels.



Microsoft network breached through password-spraying by Russian-state hackers


Getty Images

Russian state hackers exploited a weak password to compromise Microsoft’s corporate network and accessed emails and documents belonging to senior executives and employees working on security and legal teams, Microsoft said late Friday.

The attack, which Microsoft attributed to a Kremlin-backed hacking group it tracks as Midnight Blizzard, is at least the second time in as many years that a failure to follow basic security hygiene has resulted in a breach with the potential to harm customers. One paragraph in Friday’s disclosure, filed with the Securities and Exchange Commission, was gobsmacking:

Beginning in late November 2023, the threat actor used a password spray attack to compromise a legacy non-production test tenant account and gain a foothold, and then used the account’s permissions to access a very small percentage of Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions, and exfiltrated some emails and attached documents. The investigation indicates they were initially targeting email accounts for information related to Midnight Blizzard itself. We are in the process of notifying employees whose email was accessed.

Microsoft didn’t detect the breach until January 12, exactly a week before Friday’s disclosure. Microsoft’s account raises the prospect that the Russian hackers had uninterrupted access to the accounts for as long as two months.

A translation of the 93 words quoted above: an account inside Microsoft’s network was protected by a weak password. The Russian adversary group guessed it by peppering the account with previously compromised or commonly used passwords until it finally landed on the right one. The threat actor then accessed the account, indicating that either 2FA wasn’t employed or the protection was somehow bypassed.
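The guessing technique Microsoft describes is known as password spraying. A minimal sketch of the mechanics (ours, not Microsoft’s; all account names, passwords, and the lockout threshold are hypothetical) shows why it evades per-account lockout counters: the attacker tries one common password across many accounts before moving to the next, so no single account racks up enough failures to lock.

```python
# Illustrative sketch of password spraying against a toy directory.
# Everything here is hypothetical; it only demonstrates the attempt ordering.

LOCKOUT_THRESHOLD = 5  # accounts lock after this many failed attempts


def spray(accounts, common_passwords, check):
    """Try each candidate password across ALL accounts before moving on,
    so no single account accumulates enough failures to trigger a lockout."""
    failures = {account: 0 for account in accounts}
    hits = []
    for password in common_passwords:
        for account in accounts:
            if failures[account] >= LOCKOUT_THRESHOLD:
                continue  # locked out; a sprayer rarely gets this far
            if check(account, password):
                hits.append((account, password))
            else:
                failures[account] += 1
    return hits


# Toy directory: one account uses a weak, commonly guessed password.
directory = {"alice": "S3cret!xK9", "bob": "Winter2023!", "carol": "t8#Qp0zL"}
found = spray(directory, ["Password1", "Winter2023!", "Welcome1"],
              lambda account, password: directory[account] == password)
print(found)  # [('bob', 'Winter2023!')]
```

Because each account sees only a handful of attempts, spraying slips past lockout policies that count failures per account, which is why defenders instead correlate failed logins across many accounts.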

Furthermore, this “legacy non-production test tenant account” was somehow configured so that Midnight Blizzard could pivot and gain access to some of the company’s most senior and sensitive employee accounts.

As Steve Bellovin, a computer science professor and affiliate law professor at Columbia University with decades of experience in cybersecurity, wrote on Mastodon:

A lot of fascinating implications here. A successful password spray attack suggests no 2FA and either reused or weak passwords. Access to email accounts belonging to “senior leadership… cybersecurity, and legal” teams using just the permissions of a “test tenant account” suggests that someone gave that test account amazing privileges. Why? Why wasn’t it removed when the test was over? I also note that it took Microsoft about seven weeks to detect the attack.

While Microsoft said that it wasn’t aware of any evidence that Midnight Blizzard gained access to customer environments, production systems, source code, or AI systems, some researchers voiced doubts, particularly about whether the Microsoft 365 service might be or have been susceptible to similar attack techniques. One of the researchers was Kevin Beaumont, who has had a long cybersecurity career that has included a stint working for Microsoft. On LinkedIn, he wrote:

Microsoft staff use Microsoft 365 for email. SEC filings and blogs with no details on Friday night are great.. but they’re going to have to be followed with actual detail. The age of Microsoft doing tents, incident code words, CELA’ing things and pretending MSTIC sees everything (threat actors have Macs too) are over — they need to do radical technical and cultural transformation to retain trust.

CELA is short for Corporate, External, and Legal Affairs, a group inside Microsoft that helps draft disclosures. MSTIC stands for the Microsoft Threat Intelligence Center.



$40 billion worth of crypto crime enabled by stablecoins since 2022

illustration of cryptocurrency breaking through brick wall

Anjali Nair; Getty Images

Stablecoins, cryptocurrencies pegged to a stable value like the US dollar, were created with the promise of bringing the frictionless, border-crossing fluidity of bitcoin to a form of digital money with far less volatility. That combination has proved to be wildly popular, rocketing the total value of stablecoin transactions since 2022 past even that of Bitcoin itself.

It turns out, however, that as stablecoins have become popular among legitimate users over the past two years, they have been even more popular among a different kind of user: those exploiting them for billions of dollars in international sanctions evasion and scams.

As part of its annual crime report, cryptocurrency-tracing firm Chainalysis today released new numbers on the disproportionate use of stablecoins for both of those massive categories of illicit crypto transactions over the last year. By analyzing blockchains, Chainalysis determined that stablecoins were used in fully 70 percent of crypto scam transactions in 2023, 83 percent of crypto payments to sanctioned countries like Iran and Russia, and 84 percent of crypto payments to specifically sanctioned individuals and companies. Those numbers far outstrip stablecoins’ growing overall use—including for legitimate purposes—which accounted for 59 percent of all cryptocurrency transaction volume in 2023.
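A quick back-of-the-envelope check (ours, using only the percentages quoted above) makes the disproportion concrete: divide each illicit category’s stablecoin share by stablecoins’ 59 percent share of all 2023 transaction volume.

```python
# Compare each illicit category's stablecoin share (figures quoted in the
# article) to stablecoins' 59 percent share of all 2023 crypto volume.
baseline = 0.59

stablecoin_share = {
    "scam transactions": 0.70,
    "payments to sanctioned countries": 0.83,
    "payments to sanctioned entities": 0.84,
}

for category, share in stablecoin_share.items():
    # A ratio above 1.0 means stablecoins are over-represented in the category.
    print(f"{category}: {share:.0%} stablecoin, {share / baseline:.2f}x the baseline")
```

By this measure, stablecoins show up in sanctions-related payments at roughly 1.4 times their share of overall crypto activity.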

In total, Chainalysis measured $40 billion in illicit stablecoin transactions in 2022 and 2023 combined. The largest single category of that stablecoin-enabled crime was sanctions evasion. In fact, across all cryptocurrencies, sanctions evasion accounted for more than half of the $24.2 billion in criminal transactions Chainalysis observed in 2023, with stablecoins representing the vast majority of those transactions.

The attraction of stablecoins for both sanctioned people and countries, argues Andrew Fierman, Chainalysis’ head of sanctions strategy, is that it allows targets of sanctions to circumvent any attempt to deny them a stable currency like the US dollar. “Whether it’s an individual located in Iran or a bad guy trying to launder money—either way, there’s a benefit to the stability of the US dollar that people are looking to obtain,” Fierman says. “If you’re in a jurisdiction where you don’t have access to the US dollar due to sanctions, stablecoins become an interesting play.”

As examples, Fierman points to Nobitex, the largest cryptocurrency exchange operating in the sanctioned country of Iran, as well as Garantex, a notorious exchange based in Russia that has been specifically sanctioned for its widespread criminal use. Stablecoin usage on Nobitex outstrips bitcoin by a 9:1 ratio, and on Garantex by a 5:1 ratio, Chainalysis found. That’s a stark difference from the roughly 1:1 ratio between stablecoins and bitcoins on a few nonsanctioned mainstream exchanges that Chainalysis checked for comparison.

Chainalysis’ chart showing the growth in stablecoins as a fraction of the value of total illicit crypto transactions over time.

Chainalysis
