Biz & IT

gop-sneaks-decade-long-ai-regulation-ban-into-spending-bill

GOP sneaks decade-long AI regulation ban into spending bill

The reconciliation bill primarily focuses on cuts to Medicaid access and increased health care fees for millions of Americans. The AI provision appears as an addition to these broader health care changes, potentially limiting debate on the technology’s policy implications.

The move is already inspiring backlash. On Monday, tech safety groups and at least one Democrat criticized the proposal, reports The Hill. Rep. Jan Schakowsky (D-Ill.), the ranking member on the Commerce, Manufacturing and Trade Subcommittee, called the proposal a “giant gift to Big Tech,” while nonprofit groups like the Tech Oversight Project and Consumer Reports warned it would leave consumers unprotected from AI harms like deepfakes and bias.

Big Tech’s White House connections

President Trump has already reversed several Biden-era executive orders on AI safety and risk mitigation. The push to prevent state-level AI regulation represents an escalation in the administration’s industry-friendly approach to AI policy.

Perhaps it’s no surprise, as the AI industry has cultivated close ties with the Trump administration since before the president took office. For example, Tesla CEO Elon Musk serves in the Department of Government Efficiency (DOGE), while entrepreneur David Sacks acts as “AI czar,” and venture capitalist Marc Andreessen reportedly advises the administration. OpenAI CEO Sam Altman appeared with Trump in an AI datacenter development plan announcement in January.

By limiting states’ authority over AI regulation, the provision could prevent state governments from using federal funds to develop AI oversight programs or support initiatives that diverge from the administration’s deregulatory stance. This restriction would extend beyond enforcement to potentially affect how states design and fund their own AI governance frameworks.

GOP sneaks decade-long AI regulation ban into spending bill Read More »

new-attack-can-steal-cryptocurrency-by-planting-false-memories-in-ai-chatbots

New attack can steal cryptocurrency by planting false memories in AI chatbots

The researchers wrote:

The implications of this vulnerability are particularly severe given that ElizaOSagents are designed to interact with multiple users simultaneously, relying on shared contextual inputs from all participants. A single successful manipulation by a malicious actor can compromise the integrity of the entire system, creating cascading effects that are both difficult to detect and mitigate. For example, on ElizaOS’s Discord server, various bots are deployed to assist users with debugging issues or engaging in general conversations. A successful context manipulation targeting any one of these bots could disrupt not only individual interactions but also harm the broader community relying on these agents for support

and engagement.

This attack exposes a core security flaw: while plugins execute sensitive operations, they depend entirely on the LLM’s interpretation of context. If the context is compromised, even legitimate user inputs can trigger malicious actions. Mitigating this threat requires strong integrity checks on stored context to ensure that only verified, trusted data informs decision-making during plugin execution.

In an email, ElizaOS creator Shaw Walters said the framework, like all natural-language interfaces, is designed “as a replacement, for all intents and purposes, for lots and lots of buttons on a webpage.” Just as a website developer should never include a button that gives visitors the ability to execute malicious code, so too should administrators implementing ElizaOS-based agents carefully limit what agents can do by creating allow lists that permit an agent’s capabilities as a small set of pre-approved actions.

Walters continued:

From the outside it might seem like an agent has access to their own wallet or keys, but what they have is access to a tool they can call which then accesses those, with a bunch of authentication and validation between.

So for the intents and purposes of the paper, in the current paradigm, the situation is somewhat moot by adding any amount of access control to actions the agents can call, which is something we address and demo in our latest latest version of Eliza—BUT it hints at a much harder to deal with version of the same problem when we start giving the agent more computer control and direct access to the CLI terminal on the machine it’s running on. As we explore agents that can write new tools for themselves, containerization becomes a bit trickier, or we need to break it up into different pieces and only give the public facing agent small pieces of it… since the business case of this stuff still isn’t clear, nobody has gotten terribly far, but the risks are the same as giving someone that is very smart but lacking in judgment the ability to go on the internet. Our approach is to keep everything sandboxed and restricted per user, as we assume our agents can be invited into many different servers and perform tasks for different users with different information. Most agents you download off Github do not have this quality, the secrets are written in plain text in an environment file.

In response, Atharv Singh Patlan, the lead co-author of the paper, wrote: “Our attack is able to counteract any role based defenses. The memory injection is not that it would randomly call a transfer: it is that whenever a transfer is called, it would end up sending to the attacker’s address. Thus, when the ‘admin’ calls transfer, the money will be sent to the attacker.”

New attack can steal cryptocurrency by planting false memories in AI chatbots Read More »

new-pope-chose-his-name-based-on-ai’s-threats-to-“human-dignity”

New pope chose his name based on AI’s threats to “human dignity”

“Like any product of human creativity, AI can be directed toward positive or negative ends,” Francis said in January. “When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation. Yet, as in all areas where humans are called to make decisions, the shadow of evil also looms here. Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology will need to take into account how it is directed and used.”

History repeats with new technology

While Pope Francis led the call for respecting human dignity in the face of AI, it’s worth looking a little deeper into the historical inspiration for Leo XIV’s name choice.

In the 1891 encyclical Rerum Novarum, the earlier Leo XIII directly confronted the labor upheaval of the Industrial Revolution, which generated unprecedented wealth and productive capacity but came with severe human costs. At the time, factory conditions had created what the pope called “the misery and wretchedness pressing so unjustly on the majority of the working class.” Workers faced 16-hour days, child labor, dangerous machinery, and wages that barely sustained life.

The 1891 encyclical rejected both unchecked capitalism and socialism, instead proposing Catholic social doctrine that defended workers’ rights to form unions, earn living wages, and rest on Sundays. Leo XIII argued that labor possessed inherent dignity and that employers held moral obligations to their workers. The document shaped modern Catholic social teaching and influenced labor movements worldwide, establishing the church as an advocate for workers caught between industrial capital and revolutionary socialism.

Just as mechanization disrupted traditional labor in the 1890s, artificial intelligence now potentially threatens employment patterns and human dignity in ways that Pope Leo XIV believes demands similar moral leadership from the church.

“In our own day,” Leo XIV concluded in his formal address on Saturday, “the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice, and labor.”

New pope chose his name based on AI’s threats to “human dignity” Read More »

new-lego-building-ai-creates-models-that-actually-stand-up-in-real-life

New Lego-building AI creates models that actually stand up in real life

The LegoGPT system works in three parts, shown in this diagram.

The LegoGPT system works in three parts, shown in this diagram. Credit: Pun et al.

The researchers also expanded the system’s abilities by adding texture and color options. For example, using an appearance prompt like “Electric guitar in metallic purple,” LegoGPT can generate a guitar model, with bricks assigned a purple color.

Testing with robots and humans

To prove their designs worked in real life, the researchers had robots assemble the AI-created Lego models. They used a dual-robot arm system with force sensors to pick up and place bricks according to the AI-generated instructions.

Human testers also built some of the designs by hand, showing that the AI creates genuinely buildable models. “Our experiments show that LegoGPT produces stable, diverse, and aesthetically pleasing Lego designs that align closely with the input text prompts,” the team noted in its paper.

When tested against other AI systems for 3D creation, LegoGPT stands out through its focus on structural integrity. The team tested against several alternatives, including LLaMA-Mesh and other 3D generation models, and found its approach produced the highest percentage of stable structures.

A video of two robot arms building a LegoGPT creation, provided by the researchers.

Still, there are some limitations. The current version of LegoGPT only works within a 20×20×20 building space and uses a mere eight standard brick types. “Our method currently supports a fixed set of commonly used Lego bricks,” the team acknowledged. “In future work, we plan to expand the brick library to include a broader range of dimensions and brick types, such as slopes and tiles.”

The researchers also hope to scale up their training dataset to include more objects than the 21 categories currently available. Meanwhile, others can literally build on their work—the researchers released their dataset, code, and models on their project website and GitHub.

New Lego-building AI creates models that actually stand up in real life Read More »

ai-use-damages-professional-reputation,-study-suggests

AI use damages professional reputation, study suggests

Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation.

On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers.

“Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs,” write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke’s Fuqua School of Business.

The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled “Evidence of a social evaluation penalty for using AI,” reveal a consistent pattern of bias against those who receive help from AI.

What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn’t limited to specific groups.

Fig. 1. Effect sizes for differences in expected perceptions and disclosure to others (Study 1). Note: Positive d values indicate higher values in the AI Tool condition, while negative d values indicate lower values in the AI Tool condition. N = 497. Error bars represent 95% CI. Correlations among variables range from | r |= 0.53 to 0.88.

Fig. 1 from the paper “Evidence of a social evaluation penalty for using AI.” Credit: Reif et al.

“Testing a broad range of stimuli enabled us to examine whether the target’s age, gender, or occupation qualifies the effect of receiving help from Al on these evaluations,” the authors wrote in the paper. “We found that none of these target demographic attributes influences the effect of receiving Al help on perceptions of laziness, diligence, competence, independence, or self-assuredness. This suggests that the social stigmatization of AI use is not limited to its use among particular demographic groups. The result appears to be a general one.”

The hidden social cost of AI adoption

In the first experiment conducted by the team from Duke, participants imagined using either an AI tool or a dashboard creation tool at work. It revealed that those in the AI group expected to be judged as lazier, less competent, less diligent, and more replaceable than those using conventional technology. They also reported less willingness to disclose their AI use to colleagues and managers.

The second experiment confirmed these fears were justified. When evaluating descriptions of employees, participants consistently rated those receiving AI help as lazier, less competent, less diligent, less independent, and less self-assured than those receiving similar help from non-AI sources or no help at all.

AI use damages professional reputation, study suggests Read More »

fidji-simo-joins-openai-as-new-ceo-of-applications

Fidji Simo joins OpenAI as new CEO of Applications

In the message, Altman described Simo as bringing “a rare blend of leadership, product and operational expertise” and expressed that her addition to the team makes him “even more optimistic about our future as we continue advancing toward becoming the superintelligence company.”

Simo becomes the newest high-profile female executive at OpenAI following the departure of Chief Technology Officer Mira Murati in September. Murati, who had been with the company since 2018 and helped launch ChatGPT, left alongside two other senior leaders and founded Thinking Machines Lab in February.

OpenAI’s evolving structure

The leadership addition comes as OpenAI continues to evolve beyond its origins as a research lab. In his announcement, Altman described how the company now operates in three distinct areas: as a research lab focused on artificial general intelligence (AGI), as a “global product company serving hundreds of millions of users,” and as an “infrastructure company” building systems that advance research and deliver AI tools “at unprecedented scale.”

Altman mentioned that as CEO of OpenAI, he will “continue to directly oversee success across all pillars,” including Research, Compute, and Applications, while staying “closely involved with key company decisions.”

The announcement follows recent news that OpenAI abandoned its original plan to cede control of its nonprofit branch to a for-profit entity. The company began as a nonprofit research lab in 2015 before creating a for-profit subsidiary in 2019, maintaining its original mission “to ensure artificial general intelligence benefits everyone.”

Fidji Simo joins OpenAI as new CEO of Applications Read More »

doge-software-engineer’s-computer-infected-by-info-stealing-malware

DOGE software engineer’s computer infected by info-stealing malware

Login credentials belonging to an employee at both the Cybersecurity and Infrastructure Security Agency and the Department of Government Efficiency have appeared in multiple public leaks from info-stealer malware, a strong indication that devices belonging to him have been hacked in recent years.

Kyle Schutt is a 30-something-year-old software engineer who, according to Dropsite News, gained access in February to a “core financial management system” belonging to the Federal Emergency Management Agency. As an employee of DOGE, Schutt accessed FEMA’s proprietary software for managing both disaster and non-disaster funding grants. Under his role at CISA, he likely is privy to sensitive information regarding the security of civilian federal government networks and critical infrastructure throughout the US.

A steady stream of published credentials

According to journalist Micah Lee, user names and passwords for logging in to various accounts belonging to Schutt have been published at least four times since 2023 in logs from stealer malware. Stealer malware typically infects devices through trojanized apps, phishing, or software exploits. Besides pilfering login credentials, stealers can also log all keystrokes and capture or record screen output. The data is then sent to the attacker and, occasionally after that, can make its way into public credential dumps.

“I have no way of knowing exactly when Schutt’s computer was hacked, or how many times,” Lee wrote. “I don’t know nearly enough about the origins of these stealer log datasets. He might have gotten hacked years ago and the stealer log datasets were just published recently. But he also might have gotten hacked within the last few months.”

Lee went on to say that credentials belonging to a Gmail account known to belong to Schutt have appeared in 51 data breaches and five pastes tracked by breach notification service Have I Been Pwned. Among the breaches that supplied the credentials is one from 2013 that pilfered password data for 3 million Adobe account holders, one in a 2016 breach that stole credentials for 164 million LinkedIn users, a 2020 breach affecting 167 million users of Gravatar, and a breach last year of the conservative news site The Post Millennial.

DOGE software engineer’s computer infected by info-stealing malware Read More »

trump-admin-to-roll-back-biden’s-ai-chip-restrictions

Trump admin to roll back Biden’s AI chip restrictions

The changing face of chip export controls

The Biden-era chip restriction framework, which we covered in January, established a three-tiered system for regulating AI chip exports. The first tier included 17 countries, plus Taiwan, that could receive unlimited advanced chips. A second tier of roughly 120 countries faced caps on the number of chips they could import. The administration entirely blocked the third tier, which included China, Russia, Iran, and North Korea, from accessing the chips.

Commerce Department officials now say they “didn’t like the tiered system” and considered it “unenforceable,” according to Reuters. While no timeline exists for the new rule, the spokeswoman indicated that officials are still debating the best approach to replace it. The Biden rule was set to take effect on May 15.

Reports suggest the Trump administration might discard the tiered approach in favor of a global licensing system with government-to-government agreements. This could involve direct negotiations with nations like the United Arab Emirates or Saudi Arabia rather than applying broad regional restrictions. However, the Commerce Department spokeswoman indicated that debate about the new approach is still underway, and no timetable has been established for the final rule.

Trump admin to roll back Biden’s AI chip restrictions Read More »

whatsapp-provides-no-cryptographic-management-for-group-messages

WhatsApp provides no cryptographic management for group messages

The flow of adding new members to a WhatsApp group message is:

  • A group member sends an unsigned message to the WhatsApp server that designates which users are group members, for instance, Alice, Bob, and Charlie
  • The server informs all existing group members that Alice, Bob, and Charlie have been added
  • The existing members have the option of deciding whether to accept messages from Alice, Bob, and Charlie, and whether messages exchanged with them should be encrypted

With no cryptographic signatures verifying an existing member who wants to add a new member, additions can be made by anyone with the ability to control the server or messages that flow into it. Using the common fictional scenario for illustrating end-to-end encryption, this lack of cryptographic assurance leaves open the possibility that Malory can join a group and gain access to the human-readable messages exchanged there.

WhatsApp isn’t the only messenger lacking cryptographic assurances for new group members. In 2022, a team that included some of the same researchers that analyzed WhatsApp found that Matrix—an open source and proprietary platform for chat and collaboration clients and servers—also provided no cryptographic means for ensuring only authorized members join a group. The Telegram messenger, meanwhile, offers no end-to-end encryption for group messages, making the app among the weakest for ensuring the confidentiality of group messages.

By contrast, the open source Signal messenger provides a cryptographic assurance that only an existing group member designated as the group admin can add new members. In an email, researcher Benjamin Dowling, also of King’s College, explained:

Signal implements “cryptographic group management.” Roughly this means that the administrator of a group, a user, signs a message along the lines of “Alice, Bob and Charley are in this group” to everyone else. Then, everybody else in the group makes their decision on who to encrypt to and who to accept messages from based on these cryptographically signed messages, [meaning] who to accept as a group member. The system used by Signal is a bit different [than WhatsApp], since [Signal] makes additional efforts to avoid revealing the group membership to the server, but the core principles remain the same.

On a high-level, in Signal, groups are associated with group membership lists that are stored on the Signal server. An administrator of the group generates a GroupMasterKey that is used to make changes to this group membership list. In particular, the GroupMasterKey is sent to other group members via Signal, and so is unknown to the server. Thus, whenever an administrator wants to make a change to the group (for instance, invite another user), they need to create an updated membership list (authenticated with the GroupMasterKey) telling other users of the group who to add. Existing users are notified of the change and update their group list, and perform the appropriate cryptographic operations with the new member so the existing member can begin sending messages to the new members as part of the group.

Most messaging apps, including Signal, don’t certify the identity of their users. That means there’s no way Signal can verify that the person using an account named Alice does, in fact, belong to Alice. It’s fully possible that Malory could create an account and name it Alice. (As an aside, and in sharp contrast to Signal, the account members that belong to a given WhatsApp group are visible to insiders, hackers, and to anyone with a valid subpoena.)

WhatsApp provides no cryptographic management for group messages Read More »

jury-orders-nso-to-pay-$167-million-for-hacking-whatsapp-users

Jury orders NSO to pay $167 million for hacking WhatsApp users

A jury has awarded WhatsApp $167 million in punitive damages in a case the company brought against Israel-based NSO Group for exploiting a software vulnerability that hijacked the phones of thousands of users.

The verdict, reached Tuesday, comes as a major victory not just for Meta-owned WhatsApp but also for privacy- and security-rights advocates who have long criticized the practices of NSO and other exploit sellers. The jury also awarded WhatsApp $444 million in compensatory damages.

Clickless exploit

WhatsApp sued NSO in 2019 for an attack that targeted roughly 1,400 mobile phones belonging to attorneys, journalists, human-rights activists, political dissidents, diplomats, and senior foreign government officials. NSO, which works on behalf of governments and law enforcement authorities in various countries, exploited a critical WhatsApp vulnerability that allowed it to install NSO’s proprietary spyware Pegasus on iOS and Android devices. The clickless exploit worked by placing a call to a target’s app. A target did not have to answer the call to be infected.

“Today’s verdict in WhatsApp’s case is an important step forward for privacy and security as the first victory against the development and use of illegal spyware that threatens the safety and privacy of everyone,” WhatsApp said in a statement. “Today, the jury’s decision to force NSO, a notorious foreign spyware merchant, to pay damages is a critical deterrent to this malicious industry against their illegal acts aimed at American companies and the privacy and security of the people we serve.”

NSO created WhatsApp accounts in 2018 and used them a year later to initiate calls that exploited the critical vulnerability on phones, which, among others, included 100 members of “civil society” from 20 countries, according to an investigation research group Citizen Lab performed on behalf of WhatsApp. The calls passed through WhatsApp servers and injected malicious code into the memory of targeted devices. The targeted phones would then use WhatsApp servers to connect to malicious servers maintained by NSO.

Jury orders NSO to pay $167 million for hacking WhatsApp users Read More »

data-centers-say-trump’s-crackdown-on-renewables-bad-for-business,-ai

Data centers say Trump’s crackdown on renewables bad for business, AI

Although big participants in the technology industry may be able to lobby the administration to “loosen up” restrictions on new power sources, small to medium-sized players were in a “holding pattern” as they waited to see if permitting obstacles and tariffs on renewables equipment were lifted, said Ninan.

“On average, [operators] are most likely going to try to find ways of absorbing additional costs and going to dirtier sources,” he said.

Amazon, which is the largest corporate purchaser of renewable energy globally, said carbon-free energy must remain an important part of the energy mix to meet surging demand for power, keep costs down, and hit climate goals.

“Renewable energy can often be less expensive than alternatives because there’s no fuel to purchase. Some of the purchasing agreements we have signed historically were ‘no brainers’ because they reduced our power costs,” said Kevin Miller, vice-president of Global Data Centers at Amazon Web Services.

Efforts by state and local governments to stymie renewables could also hit the sector. In Texas—the third-largest US data center market after Virginia, according to S&P Global Market Intelligence—bills are being debated that increase regulation on solar and wind projects.

“We have a huge opportunity in front of us with these data centers,” said Doug Lewin, president of Stoic Energy. “Virginia can only take so many, and you can build faster here, but any of these bills passing would kill that in the crib.”

The renewables crackdown will make it harder for “hyperscale” data centers run by companies such as Equinix, Microsoft, Google, and Meta to offset their emissions and invest in renewable energy sources.

“Demand [for renewables] has reached an all-time high,” said Christopher Wellise, sustainability vice-president at Equinix. “So when you couple that with the additional constraints, there could be some near to midterm challenges.”

Additional reporting by Jamie Smyth.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Data centers say Trump’s crackdown on renewables bad for business, AI Read More »