
US executive branch agencies will use ChatGPT Enterprise for just $1 per agency

OpenAI announced an agreement to give more than 2 million workers across the US federal executive branch access to ChatGPT and related tools at practically no cost: just $1 per agency for one year.

The deal was announced just one day after the US General Services Administration (GSA) signed a blanket deal to allow OpenAI and rivals like Google and Anthropic to supply tools to federal workers.

The workers will have access to ChatGPT Enterprise, an account tier that includes frontier models and cutting-edge features with relatively high token limits, along with a more robust data privacy commitment than general ChatGPT consumers get. ChatGPT Enterprise has been trialed over the past several months at corporations and other large organizations.

The workers will also have unlimited access to advanced features like Deep Research and Advanced Voice Mode for a 60-day period. After the one-year trial period, the agencies are under no obligation to renew.

A limited deployment of ChatGPT for federal workers already took place earlier this summer through a pilot program with the US Department of Defense.

In a blog post, OpenAI heralded this announcement as an act of public service:

This effort delivers on a core pillar of the Trump Administration’s AI Action Plan by making powerful AI tools available across the federal government so that workers can spend less time on red tape and paperwork, and more time doing what they came to public service to do: serve the American people.

The AI Action Plan aims to expand AI-focused data centers in the United States while bringing AI tools to federal workers, ostensibly to improve efficiency.



States take the lead in AI regulation as federal government steers clear

AI in health care

In the first half of 2025, 34 states introduced over 250 AI-related health bills. The bills generally fall into four categories: disclosure requirements, consumer protection, insurers’ use of AI, and clinicians’ use of AI.

Bills about transparency define requirements for the information that AI system developers and the organizations that deploy those systems must disclose.

Consumer protection bills aim to keep AI systems from unfairly discriminating against some people and ensure that users of the systems have a way to contest decisions made using the technology.

Bills covering insurers provide oversight of the payers’ use of AI to make decisions about health care approvals and payments. And bills about clinical uses of AI regulate use of the technology in diagnosing and treating patients.

Facial recognition and surveillance

In the US, a long-standing legal doctrine that applies to privacy protection issues, including facial surveillance, is the protection of individual autonomy against interference from the government. In this context, facial recognition technologies pose significant privacy challenges as well as risks from potential bias.

Facial recognition software, commonly used in predictive policing and national security, has exhibited biases against people of color and is consequently often considered a threat to civil liberties. A pathbreaking study by computer scientists Joy Buolamwini and Timnit Gebru found that facial recognition software poses significant challenges for Black people and other historically disadvantaged minorities: it was less likely to correctly identify darker faces.

Bias also creeps into the data used to train these algorithms, for example when the teams that guide the development of such facial recognition software lack diversity.

By the end of 2024, 15 US states had enacted laws to limit the potential harms from facial recognition. Common elements of these state-level regulations include requirements that vendors publish bias test reports and their data management practices, as well as mandates for human review in uses of the technology.
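To make concrete what a bias test report measures, here is a minimal Python sketch of an accuracy audit disaggregated by subgroup, the kind of breakdown Buolamwini and Gebru reported. Everything in it is illustrative: the subgroup labels, the IDs, and the match results are invented for the example, not drawn from any real audit.

```python
# Minimal sketch of a disaggregated accuracy audit: compute how often
# a face-matching system identified people correctly, per subgroup.
# All data below is invented for illustration.
from collections import defaultdict

def accuracy_by_subgroup(records):
    """records: iterable of (subgroup, predicted_id, true_id) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for subgroup, predicted, actual in records:
        total[subgroup] += 1
        if predicted == actual:
            correct[subgroup] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical match results, grouped by skin type as in the
# Buolamwini/Gebru study; the values are made up.
records = [
    ("lighter", "id_1", "id_1"), ("lighter", "id_2", "id_2"),
    ("lighter", "id_3", "id_3"), ("lighter", "id_4", "id_9"),
    ("darker", "id_5", "id_5"), ("darker", "id_6", "id_8"),
    ("darker", "id_7", "id_2"), ("darker", "id_8", "id_8"),
]

for group, acc in accuracy_by_subgroup(records).items():
    print(f"{group}: {acc:.0%} correctly identified")
```

A vendor's bias test report would surface exactly this kind of per-group gap (here, 75% versus 50% on invented data), ideally measured over much larger samples and more subgroups.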
