

Texas politicians warn Smithsonian it must not lobby to retain its space shuttle

(Oddly, Cornyn and Weber’s letter to Roberts described the law as requiring Duffy “to transfer a space vehicle involved in the Commercial Crew Program” rather than choosing a destination NASA center related to the same, as the bill actually reads. Taken as written, if that was indeed their intent, Discovery and the other retired shuttles would be exempt, as the winged orbiters were never part of that program. A request for clarification sent to both Congress members’ offices was not immediately answered.)


Sen. John Cornyn (R-TX, at right) sits in front of a model of Space Shuttle Discovery at Space Center Houston, where Texas officials want to move the real orbiter. Credit: collectSPACE.com

In the letter, Cornyn and Weber cited the Anti-Lobbying Act as restricting the use of funds provided by the federal government to “influence members of the public to pressure Congress regarding legislation or appropriations matters.”

“As the Smithsonian Institution receives annual appropriations from Congress, it is subject to the restrictions imposed by this statute,” they wrote.

The money that Congress allocates to the Smithsonian accounts for about two-thirds of the Institution’s annual budget, primarily covering federal staff salaries, collections care, facilities maintenance, and the construction and revitalization of the buildings that house the Smithsonian’s 21 museums and other centers.

Pols want Smithsonian to stay mum

As evidence of the Smithsonian’s alleged wrongdoing, Cornyn and Weber cited a July 11 article by Zach Vasile for Flying Magazine, which ran under the headline “Smithsonian Pushing Back on Plans to Relocate Space Shuttle.” Vasile quoted from a message the Institution sent to Congress saying that there was no precedent for removing an object from its collection to send it elsewhere.

The Texas officials wrote that the anti-lobbying restrictions apply to “staff time or public relations resources” and claimed that the Smithsonian’s actions did not fall under the law’s exemptions, including “public speeches, incidental expenditures for public education or communications, or activities unrelated to legislation or appropriations.”

Cornyn and Weber urged Roberts, as the head of the Smithsonian’s Board of Regents, to “conduct a comprehensive internal review” as it applied to how the institution responded to the One Big Beautiful Bill Act.

“Should the review reveal that appropriated funds were used in a manner inconsistent with the prohibitions outlined in the Anti-Lobbying Act, we respectfully request that immediate and appropriate corrective measures be implemented to ensure the Institution’s full compliance with all applicable statutory and ethical obligations,” Cornyn and Weber wrote.



Lawmakers writing NASA’s budget want a cheaper upper stage for the SLS rocket


Eliminating the Block 1B upgrade now would save NASA at least $500 million per year.

Artist’s illustration of the Boeing-developed Exploration Upper Stage, with four hydrogen-fueled RL10 engines. Credit: NASA

Not surprisingly, Congress is pushing back against the Trump administration’s proposal to cancel the Space Launch System, the behemoth rocket NASA has developed to propel astronauts back to the Moon.

Spending bills making their way through both houses of Congress reject the White House’s plan to wind down the SLS rocket after two more launches, but the text of a draft budget recently released by the House Appropriations Committee suggests an openness to making some major changes to the program.

The next SLS flight, called Artemis II, is scheduled to lift off early next year to send a crew of four astronauts around the far side of the Moon. Artemis III will follow a few years later on a mission to attempt a crewed lunar landing at the Moon’s south pole. These missions follow Artemis I, a successful unpiloted test flight in 2022.

After Artemis III, the official policy of the Trump administration is to terminate the SLS program, along with the Orion crew capsule designed to launch on top of the rocket. The White House also proposed canceling NASA’s Gateway, a mini-space station to be placed in orbit around the Moon. NASA would instead procure commercial launches and commercial spacecraft to ferry astronauts between the Earth and the Moon, while focusing the agency’s long-term gaze toward Mars.

CYA EUS?

House and Senate appropriations bills would preserve SLS, Orion, and the Gateway. However, the House version of NASA’s budget has an interesting paragraph directing NASA to explore cheaper, faster options for a new SLS upper stage.

NASA has tasked Boeing, which also builds SLS core stages, with developing an Exploration Upper Stage to debut on the Artemis IV mission, the fourth flight of the Space Launch System. This new upper stage would have large propellant tanks and carry four engines instead of the single engine used on the rocket’s interim upper stage, which NASA is using for the first three SLS flights.

The House version of NASA’s fiscal year 2026 budget raises questions about the long-term future of the Exploration Upper Stage. In one section of the bill, House lawmakers would direct NASA to “evaluate alternatives to the current Exploration Upper Stage (EUS) design for SLS.” The committee members wrote the evaluation should focus on reducing development and production costs, shortening the schedule, and maintaining the SLS rocket’s lift capability.

“NASA should also evaluate how alternative designs could support the long-term evolution of SLS and broader exploration goals beyond low-Earth orbit,” the lawmakers wrote. “NASA is directed to assess various propulsion systems, stage configurations, infrastructure compatibility, commercial and international collaboration opportunities, and the cost and schedule impacts of each alternative.”

The SLS rocket is expensive, projected to cost at least $2.5 billion per launch, not counting development costs or expenses related to the Orion spacecraft and the ground systems required to launch it at Kennedy Space Center in Florida. Those figures bring the total cost of an Artemis mission using SLS and Orion to more than $4 billion, according to NASA’s inspector general.

NASA’s Block 1B version of the SLS rocket will be substantially larger than Block 1. Credit: NASA

The EUS is likewise an expensive undertaking. Last year, NASA’s inspector general reported that the new upper stage’s development costs had ballooned from $962 million to $2.8 billion, and the Boeing-led project had been delayed more than six years. The version of the SLS rocket with the EUS, known as Block 1B, is supposed to deliver a 40 percent increase in performance over the Block 1 configuration used on the first three Space Launch System flights. Overall, NASA’s inspector general projected Block 1B’s development costs to total $5.7 billion.

Eliminating the Block 1B upgrade now would save NASA at least $500 million per year, and perhaps more if NASA could also end work on a costly mobile launch tower specifically designed to support SLS Block 1B missions.

NASA can’t go back to the interim upper stage, which is based on the design of the upper stage that flew on United Launch Alliance’s (ULA’s) now-retired Delta IV Heavy rocket. ULA has shut down its Delta production line, so there’s no way to build any more. What ULA does have is a new high-energy upper stage called Centaur V. This upper stage is sized for ULA’s new Vulcan rocket, with more capability than the interim upper stage but with lower performance than the larger EUS.

A season of compromise, maybe

Ars’ Eric Berger wrote last year about the possibility of flying the Centaur V upper stage on SLS missions.

Incorporating the Centaur V wouldn’t maintain the SLS rocket’s lift capability, as the House committee calls for in its appropriations bill. The primary reason for improving the rocket’s performance is to give SLS Block 1B enough oomph to carry “co-manifested” payloads, meaning it can launch an Orion crew capsule and equipment for NASA’s Gateway lunar space station on a single flight. The lunar Gateway is also teed up for cancellation in Trump’s budget proposal, but both congressional appropriations bills would save it, too. If the Gateway escapes cancellation, there are ways to launch its modules on commercial rockets.

Blue Origin also has an upper stage that could conceivably fly on the Space Launch System. But the second stage for Blue Origin’s New Glenn rocket would be a more challenging match for SLS for several reasons, chiefly its 7-meter (23-foot) diameter—too wide to be a drop-in replacement for the interim upper stage used on Block 1. ULA’s Centaur V is much closer in size to the existing upper stage.

The House budget bill has passed a key subcommittee vote but won’t receive a vote from the full appropriations committee until after Congress’s August recess. A markup of the bill by the House Appropriations Committee scheduled for Thursday was postponed after Speaker Mike Johnson announced an early start to the recess this week.

Ars reported last week on the broad strokes of how the House and Senate appropriations bills would affect NASA. Since then, members of the House Appropriations Committee released the text of the report attached to their version of the NASA budget. The report, which includes the paragraph on the Exploration Upper Stage, provides policy guidance and more detailed direction on where NASA should spend its money.

The House’s draft budget includes $2.5 billion for the Space Launch System, close to this year’s funding level and $500 million more than the Trump administration’s request for the next fiscal year, which begins October 1. The budget would continue development of SLS Block 1B and the Exploration Upper Stage while NASA completes a six-month study of alternatives.

The report attached to the Senate appropriations bill for NASA has no specific instructions regarding the Exploration Upper Stage. But like the House bill, the Senate’s draft budget directs NASA to continue ordering spares and long-lead parts for SLS and Orion missions beyond Artemis III. Both versions of the NASA budget require the agency to continue with SLS and Orion until a suitable commercial, human-rated rocket and crew vehicle are proven ready for service.

In a further indication of Congress’ position on the SLS and Orion programs, lawmakers set aside more than $4 billion for the procurement of SLS rockets for the Artemis IV and Artemis V missions in the reconciliation bill signed into law by President Donald Trump earlier this month.

Congress must pass a series of federal appropriations bills by October 1, when funding for the current fiscal year runs out. If Congress doesn’t act by then, it could pass a continuing resolution to maintain funding at levels close to this year’s budget or face a government shutdown.

Lawmakers will reconvene in Washington, DC, in early September in hopes of finishing work on the fiscal year 2026 budget. The section of the budget that includes NASA still must go through a markup hearing by the House Appropriations Committee and pass floor votes in the House and Senate. Then the two chambers will have to come to a compromise on the differences in their appropriations bill. Only then can the budget be put to another vote in each chamber and go to the White House for Trump’s signature.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



As White House talks about impounding NASA funding, Congress takes the threat seriously

This year, given the recent action on the budget measures, it is possible that Congress could pass Appropriations legislation for most of the federal government, including NASA, before October 1.

Certainly there is motivation to do so, because the White House and its Office of Management and Budget, led by Russ Vought, have indicated that in the absence of Appropriations legislation they plan to take measures to implement the President’s Budget Request, which sets significantly lower spending levels for NASA and other federal agencies.

For example, as Ars reported earlier this month, the principal investigators of NASA science missions that the White House seeks to kill have been told to create termination plans that could be implemented within three months, beginning as soon as October 1.

Whether there is a continuing resolution or a shutdown, then, the White House appears likely to go to court to implement its spending priorities at federal agencies, including NASA.

Congress acknowledges the threat

This week, the ranking members of the House committees with oversight over NASA publicly raised the alarm about this in a letter to Sean Duffy, the Secretary of Transportation who was recently also named interim administrator of NASA.

NASA appears to be acting in accordance with a fringe, extremist ideology emanating from the White House Office of Management and Budget that asserts a right to impound funds appropriated by Congress for the sake of executive branch priorities. Moreover, it now appears that the agency intends to implement funding cuts that were never enacted by Congress in order to “align” the agency’s present-day budget with the Trump Administration’s slash-and-burn proposed budget for the next fiscal year, with seemingly no concern for the devastation that will be caused by mass layoffs, widespread program terminations, and the possible closure of critical centers and facilities. These decisions are wrong, and they are not yours to make.

The letter reminds Duffy that Congress sets the budget, and federal agencies work toward those budget levels. However, the legislators say, NASA is moving ahead with funding freezes for various programs and reducing employees across the agency. Approximately 2,700 employees have left the agency since the beginning of the Trump Administration.



Congress moves to reject bulk of White House’s proposed NASA cuts

Fewer robots, more humans

The House version of NASA’s fiscal year 2026 budget includes $9.7 billion for exploration programs, a roughly 25 percent boost over NASA’s exploration budget for 2025, and 17 percent more than the Trump administration’s request in May. The text of the House bill released publicly doesn’t include any language explicitly rejecting the White House’s plan to terminate the SLS and Orion programs after two more missions.

Instead, it directs NASA to submit a five-year budget profile for SLS, Orion, and associated ground systems to “ensure a crewed launch as early as possible.” A five-year planning budget seems to imply that the House committee wants SLS and Orion to stick around. The White House budget forecast zeros out funding for both programs after 2028.

The House also seeks to provide more than $4.1 billion for NASA’s space operations account, a slight cut from 2025 but well above the White House’s number. Space operations covers programs like the International Space Station, NASA’s Commercial Crew Program, and funding for new privately owned space stations to replace the ISS.

Many of NASA’s space technology programs would also be salvaged in the House budget, which allocates $913 million for tech development, a reduction from the 2025 budget but still an increase over the Trump administration’s request.

The House bill’s cuts to science and space technology, though more modest than those proposed by the White House, would still likely result in cancellations and delays for some of NASA’s robotic space missions.

Rep. Grace Meng (D-NY), the senior Democrat on the House subcommittee responsible for writing NASA’s budget, called out the bill’s cut to the agency’s science portfolio.

“As other countries are racing forward in space exploration and climate science, this bill would cause the US to fall behind by cutting NASA’s account by over $1.3 billion,” she said Tuesday.

Lawmakers reported the Senate spending bill to the full Senate Appropriations Committee last week by voice vote. Members of the House subcommittee advanced their bill to the full committee Tuesday afternoon by a vote of 9-6.

The budget bills will next be sent to the full appropriations committees of each chamber for a vote and an opportunity for amendments, before moving on to the floor for a vote by all members.

It’s still early in the annual appropriations process, and a final budget bill is likely months away from passing both houses of Congress and heading to President Donald Trump’s desk for signature. There’s no guarantee Trump will sign any congressional budget bill, or that Congress will finish the appropriations process before this year’s budget runs out on September 30.



Congress Asks Better Questions

Back in May I did a dramatization of a key and highly painful Senate hearing. Now, we are back for a House committee meeting. It was entitled ‘Algorithms and Authoritarians: Why U.S. AI Must Lead’ and indeed a majority of talk was very much about that, with constant invocations of the glory of democratic AI and the need to win.

The majority of talk was this orchestrated rhetoric that assumes the conclusion that what matters is ‘democracy versus authoritarianism’ and whether we ‘win,’ often (but not always) translating that as market share without any actual mechanistic model of any of it.

However, there were also some very good signs, some excellent questions, signs that there is an awareness setting in. As far as Congressional discussions of real AGI issues go, this was in part one of them. That’s unusual.

(And as always there were a few on random other high horses, that’s how this works.)

Partly because I was working from YouTube rather than a transcript, instead of doing a dramatization I will first be highlighting some other coverage of the event to skip to some of the best quotes, then doing a more general summary and commentary.

Most of you should likely read the first section or two, and then stop. I did find it enlightening to go through the whole thing, but most people don’t need to do that.

Here is the full video of last week’s congressional hearing, here is a write-up by Shakeel Hashim with some quotes.

Also from the hearing, here’s Congressman Nathaniel Moran (R-Texas) asking a good question about strategic surprise arising from automated R&D and getting a real answer. Still way too much obsession with ‘beat China’, but this is at least progress. And here’s Tokuda (D-HI):

Peter Wildeford: Ranking Member Raja Krishnamoorthi (D-IL) opened by literally playing a clip from The Matrix, warning about a “rogue AI army that has broken loose from human control.”

Not Matrix as a loose metaphor, but speaking of a ‘machine uprising’ as a literal thing that could happen and is worth taking seriously by Congress.

The hearing was entitled “Algorithms and Authoritarians: Why U.S. AI Must Lead”. But what was supposed to be a routine House hearing about US-China competition became the most AGI-serious Congressional discussion in history.

Rep. Neal Dunn (R-FL) asked about an Anthropic paper where Claude “attempted to blackmail the chief engineer” in a test scenario and another paper about AI “sleeper agents” that could act normally for months before activating. While Jack Clark, a witness and Head of Policy at Anthropic, attempted to reassure by saying safety testing might mitigate the risks, Dunn’s response was perfect — “I’m not sure I feel a lot better, but thank you for your answer.”

Rep. Nathaniel Moran (R-TX) got to the heart of what makes modern AI different:

Instead of a programmer writing each rule a system will follow, the system itself effectively writes the rules […] AI systems will soon have the capability to conduct their own research and development.

That was a good illustration of both sides of what we saw.

This was also a central case of why Anthropic and Jack Clark are so frustrating.

Anthropic should indeed be emphasizing the need for testing, and Clark does this, but we shouldn’t be ‘attempting to reassure’ anyone based on that. Anthropic knows it is worse than you know, and hides this information thinking this is a good strategic move.

Throughout the hearing, Jack Clark said many very helpful things, and often said them quite well. He also constantly pulled back from the brink and declined various opportunities to inform people of important things, and emphasized lesser concerns and otherwise played it quiet.

Peter Wildeford:

The hearing revealed we face three interlocking challenges:

  1. Commercial competition: The traditional great power race with China for economic and military advantage through AI

  2. Existential safety: The risk that any nation developing superintelligence could lose control — what Beall calls a race of “humanity against time”

  3. Social disruption: Mass technological unemployment as AI makes humans “not just unemployed, but unemployable”

I can accept that framing. The full talk about humans being unemployable comes at the very end. Until then, there is talk several times about jobs and societal disruption, but it tries to live in the Sam Altman style fantasy where not much changes. Finally, at the end, Mark Beall gets an opportunity to actually Say The Thing. He doesn’t miss.

It is a good thing I knew there was better ahead, because oh boy did things start out filled with despair.

As our first speaker, after urging us to ban AI therapist bots because one sort of encouraged a kid to murder his parents ‘so they could be together,’ Representative Krishnamoorthi goes on to show a clip of Chinese robot dogs, then to say we must ban Chinese and Russian AI models so we don’t send them our data (no one tell him about self-hosting), and then plays ‘a clip from The Matrix’ that is not even from The Matrix, claiming that the army of Mr. Smiths is ‘a rogue AI army that has broken loose from human control.’

I could not even. Congress often lives in the ultimate cringe random half-right associative Gell-Mann Amnesia world. But that still can get you to realize some rather obvious true things, and luckily that was indeed the worst of it. Even from Krishnamoorthi, this kind of thinking can indeed point towards important things.

Mr. Krishnamoorthi: OpenAI’s chief scientist wanted to quote unquote build a bunker before we release AGI, as you can see on the visual here. Rather than building bunkers, however, we should be building safer AI. Whether it’s American AI or Chinese AI, it should not be released until we know it’s safe. That’s why I’m working on a new bill, the AGI Safety Act, which will require AGI to be aligned with human values and require it to comply with laws that apply to humans. That is just common sense.

I mean yes that is common sense. Yes, rhetoric from minutes prior (and after) aside, we should be building ‘safer AGI’ and if we can’t do that we shouldn’t be building AGI at all.

It’s a real shame that no one has any idea how to ensure that AGI is aligned with human values, or how to get it to comply with laws that apply to humans. Maybe we should get to work on that.

And then we get another excellent point.

Mr. Krishnamoorthi: I’d like to conclude with something else that’s common sense: not shooting ourselves in the foot. 70% of America’s AI researchers are foreign born or foreign educated. Jack Clark, our eminent witness today, is an immigrant. We cannot be deporting the people we depend on to build AI. We also can’t be defunding the agencies that make AI miracles like Ann’s ability to speak again a reality. Federal grants from agencies like NSF are what allow scientists across America to make miracles happen. AI is the defining technology of our lifetimes; to do AI right and prevent nightmares, we need.

Yes, at a bare minimum not deporting our existing AI researchers and cutting off existing related research programs does seem like the least you could do? I’d also like to welcome a lot more talent, but somehow this is where we are.

We then get Dr. Mahnken’s opening statement, which emphasizes that we are in a battle for technical dominance, America is good and free and our AI will be empowering and innovative whereas China is bad and low trust and a fast follower. He also emphasizes the need for diffusion in key areas.

Of course, if you are facing a fast follower, you should think about what does and doesn’t help them follow, and also you can’t panic every time they fast follow you and respond with ‘go faster or they’ll take the lead!’ as they then fast follow your new faster pace. Nor would you want to hand out your top technology for free.

Next up is Mr. Beall. He frames the situation as two races. I like this a lot. First, we have the traditional battle for economic, military and geopolitical advantage in mundane terms played with new pieces.

Many only see this game, or pretend only this game exists. This is a classic, well-understood type of game. You absolutely want to fight for profits and military strength and economic growth and so on in mundane terms. We all agree on things like the need to greatly expand American energy production (although the BBB does not seem to share this opinion) and speed adaptation in government.

I still think that even under this framework the obsession with ‘market share’ especially of chip sales (instead of chip ownership and utilization) makes absolutely no sense and would make no sense even if that question was in play, as does the obsession with the number of tokens models serve as opposed to looking at productivity, revenue and profits. There’s so much rhetoric behind metrics that don’t matter.

The second race is the race to artificial superintelligence (ASI) or to AGI. This is the race that counts, and even if we get there first (and even more likely if China gets there first) the default result is that everyone loses.

He asks for the ‘three Ps,’ protect our capabilities, promote American technology abroad and prepare by getting it into the hands of those that need it and gathering the necessary information. He buys into this new centrality of the ‘American AI tech stack’ line that’s going around, despite the emphasis on superintelligence, but he does warn that AGI may come soon and we need to urgently gather information about that so we can make informed choices, and even suggests narrow dialogue with China on potential mitigations of certain risks and verification measures, while continuing to compete with China otherwise.

Third up we have Jack Clark of Anthropic, he opens like this.

Jack Clark: America can win the race to build powerful AI and winning the race is a necessary but not sufficient achievement. We have to get safety right.

When I discuss powerful AI, I’m talking about AI systems that represent a major advancement beyond today’s capabilities. A useful conceptual framework is to think of this as something like a country of geniuses in a data center, and I believe that technology could be buildable by late 2026 or early 2027.

America is well positioned to build this technology but we need to deal with its risks.

He then goes on to talk about how American AI will be democratic and Chinese AI will be authoritarian and America must prevail, as we are now required to say by law, Shibboleth. He talks about misuse risk and CBRN risks and notes DeepSeek poses these as well, and then mentions the blackmail findings, and calls for tighter export controls and stronger federal ability to test AI models, and broader deployment within government.

I get what Clark is trying to do here, and the dilemma he is facing. I appreciate talking about safety up front, and warning about the future pace of progress, but I still feel like he is holding back key information that needs to be shared if you want people to understand the real situation.

Instead, we still have 100 minutes that touch on this in places but mostly are about mundane economic or national security questions, plus some model misbehavior.

Now we return to Representative Krishnamoorthi, true master of screen time, who shows Claude refusing to write a blog post promoting eating disorders, then DeepSeek being happy to help straight up and gets Clark to agree that DeepSeek does not do safety interventions beyond CCP protocols and that this is unacceptable, then reiterates his bill to not let the government use DeepSeek, citing that they store data on Chinese servers. I mean yes obviously don’t use their hosted version for government purposes, but does he not know how open source works, I wonder?

He pivots to chip smuggling and the risk of DeepSeek using our chips. Clark is happy to once again violently agree. I wonder whether this is a waste or a good use of time, since none of it is new, but yes, obviously what matters is who is using the chip, not who made it, and selling our chips to China (at least at current market prices) is foolish. Krishnamoorthi points out that Nvidia’s sales are growing like gangbusters despite export controls, and Clark points out that every AI company keeps using more compute than expected.

Then there’s a cool question, essentially asking about truesight and ability to infer missing information when given context, before finishing by asking about recent misalignment results:

Representative Krishnamoorthi: If someone enters their diary into Claude for a year and then asks Claude to guess what they did not write down, Claude is able to accurately predict what they left out, isn’t that right?

Jack Clark: Sometimes that’s accurate, yes. These systems are increasingly advanced and are able to make subtle predictions like this, which is why we need to ensure that our own US intelligence services use this technology and know how to get the most out of it.

Representative Moolenaar then starts with a focus on chip smuggling and diffusion, getting Beall to affirm smuggling is a big deal then asking Clark about how this is potentially preventing American technological infrastructure diffusion elsewhere. There is an obvious direct conflict, you need to ensure the compute is not diverted or misused at scale. Comparisons are made to nuclear materials.

Then he asks Clark, as an immigrant, about how to welcome immigrants especially from authoritarian states to help our AI work, and what safeguards we would need. Great question. Clark suggests starting with university-level STEM immigration, the earlier the better. I agree, but it would be good to have a more complete answer here about containing information risks. It is a real issue.

Representative Carson is up next and asks about information warfare. Clark affirms AI can do this and says we need tools to fight against it.

Representative Lahood asks about the moratorium that was recently removed from the BBB, warning about the ‘patchwork of states.’ Clark says we need a federal framework, but that without one powerful AI is coming soon and you’d just be creating a vacuum, which would be flooded if something went wrong. Later Clark, in response to another question, emphasizes that the timeline is short and we need to be open to options.

Representative Dunn asks about the blackmail findings and whether he should be worried about AIs using his bank information against him. Clark says no, because we publish the research, and that we should encourage more of this and also closely study Chinese models. I agree with that call, but it doesn’t actually explain why you shouldn’t worry (for now, anyway). Dunn then asks about the finding that you can put a sleeper agent into an AI; Clark says testing for such things would likely take them a month.

Dunn then asks Mahnken what the major strategic missteps Congress might make in an AGI world would be. He splits his answer into insufficient export controls and overregulation; it seems he thinks there is nothing else to worry about when it comes to AGI.

Here’s one that isn’t being noticed enough:

Mr. Moulton (56:50): The concern is China, and so we have to somehow get to an international framework, a Geneva Conventions-like agreement, that has a chance at least at limiting what our adversaries might do with AI at the extremes.

He then asks Beall what should be included in that. Beall starts off with strategic missile-related systems and directive 3000.09 on lethal autonomous systems. Then he moves to superintelligence, but time runs out before he can explain what he wants.

Representative Johnson notes the members are scared and that ‘losing this race’ could ‘trigger a global crisis,’ and asks about dangers of data centers outside America, which Beall notes of course are that we won’t ultimately own the chips or AI, so we should redouble our efforts to build domestically even if we have to accept some overseas buildout for energy reasons.

Johnson asks about the tradeoff between safety and speed, seeing them in conflict. Jack points out that, at current margins, they’re not.

Jack Clark: We all buy cars because we know that if they get dinged we’re not going to suffer in them, because they have airbags and they have seat belts. You’ve grown the size of the car market by innovating on safety technology, and American firms compete on safety technology to sell to consumers.

The same will be true of AI. So far, we do not see there being a trade-off here; we see that making more reliable, trustworthy technology ultimately helps you grow the size of the market and grows the attractiveness of American platforms vis-à-vis China. So I would constructively push back on this and put it to you that there’s an amazing opportunity here to use safety as a way to grow the existing American dominance in the market.

Those who set up the ‘slow down’ and safety versus speed framework must of course take the L on how that (in hindsight inevitably) went down. Certainly there are still sometimes tradeoffs here on some margins, on some questions, especially when you are the ‘fun police’ towards your users, or you delay releases for verification. Later down the road, there will be far more real tradeoffs that occur at various points.

But also, yes, for now the tradeoffs are a lot like those in cars, in that improving the safety and security of the models helps them be a lot more useful, something you can trust and that businesses especially will want to use. At this point, Anthropic’s security focus is a strategic advantage.

Johnson wants to believe Clark, but is skeptical and asks Mahnken, who says too much emphasis on safety could indeed slow us down (which, as phrased, is obviously true), and that he’s worried we won’t go fast enough and that there’s no parallel conversation in the PRC.

Representative Torres asks Clark how close China is to matching ASML and TSMC. Clark says they are multiple years behind. Torres then goes full poisoned banana race:

Torres: The first country to reach ASI will likely emerge as the superpower of the 21st century, the superpower who will set the rules for the rest of the world. Mr. Clark, what do you make of the Manhattan Project framing?

Clark says yes in terms of doing it here, but no because it’s being done by private actors, and they agree we desperately need more energy.

Hinson says Chinese labs aren’t doing healthy competition, they’re stealing our tech, then praises the relaxation of the Biden diffusion rules that prevent China from stealing our tech, and asks what requirements we should attach to diffusion deals, and everyone talks arms race and market share. Sigh.

In case you were wondering where that was coming from, well, here we go:

Hinson: Members of your key team at Anthropic have held very influential roles in this space, both at Open Philanthropy and in the previous administration, with the Biden administration as well.

Can you speak to how you manage, you know, obviously we’ve got a lot of viewpoints, but how you manage potential areas of conflict of interest in advancing this tech and ensuring that everybody’s really on that same page with helping to shape this national AI policy that we’re talking about, the competition on the global stage for this technology.

You see, if you’re trying to not die, that’s a conflict of interest, and your role must have been super important, never mind all that lobbying by major tech corporations. Whereas if you want American policy to focus on your own market share, that’s good old-fashioned patriotism, that must be it.

Jack Clark: Thank you for the question. We have a simple goal: win the race and make technology that can be relied on. All of the work that we do at our company starts from looking at that and then just trying to work out the best way to get there, and we work with people from a variety of backgrounds and skills, and our goal is to just have the best, most substantive answer that we can bring to hearings.

No, ma’am, we too are only trying to win the race and maximize corporate profits and keep our fellow patriots informed, it is fine. Anthropic doesn’t care about everyone not dying or anything, that would be terrible. Again, I get the strategic bind here, but I continue to find this deeply disappointing, and I don’t think it is a good play.

She then asks Beall about DeepSeek’s ability to quickly copy our tech and potential future espionage threats. Beall reminds her that export controls work with a lag and notes DeepSeek was a wakeup call (although one I once again note was blown out of proportion for various reasons, but we’re stuck with it). Beall recommends the Remote Access Security Act, and then he says we have to ‘grapple with the open source issue.’ Which is that if you open the model they can copy it. Well, there is that.

Representative Brown pulls out They Took Our Jobs, asking how to ensure people (like those in her district, Ohio’s 11th) don’t get left behind by automation and benefit instead, and calling for investing in the American workforce. So Clark goes into the usual speeches, encouraging diffusion and adjusting regulation, and acts as if Dario hadn’t predicted the automation of half of white-collar entry-level jobs within five years.

Representative Nunn notes (along with various other race-related things) the commissioning of executives from four top AI teams as lieutenant colonels, which I and Patrick McKenzie both noticed but which has gotten little attention. He then brings up a Chinese startup called Zhipu (currently valued around $20 billion) as some sort of global threat.

Nunn: A new AI group out of Beijing called Zhipu is an AI anomaly that is now facing off against the likes of OpenAI, and their entire intent is to lock in Chinese systems and standards into emerging markets before the West, so this is clearly a large-scale attempt by the Chinese to box the United States out. Now, as a counterintelligence officer who was on the front line in fighting against Huawei’s takeover of the United States through something called Huawei America.

That is indeed how a number of Congresspeople talk these days, including this sudden paranoia about some mysterious ‘lock in’ mechanism for API calls or self-hosted open models that no one has ever been able to explain to me. He does then ask an actual good question:

Nunn: Is the US currently prepared for an AI-accelerated cyber attack, a zero-day attack, or a larger threat that faces us today?

Mahnken does some China bad, US good and worries the Chinese will be deluded into thinking AI will let them do things they can’t do and they might start a war? Which is such a bizarre thing to worry about and also not an answer? Are we prepared? I assume mostly no.

Nunn then pushes his HR 2152 for government AI diffusion.

He asks Clark how government and business can cooperate. Clark points to the deployment side and the development of safety standards as a way to establish trust and sell globally.

Representative Tokuda starts out complaining about us gutting our institutions, Clark of course endorses investing more in NIST and other such institutions. Tokuda asks about industry responsibility, including for investment in related infrastructure, Clark basically says he works on that and for broader impact questions get back to him in 3-4 years to talk more.

Then she gives us the remarkable quote above about superintelligence (at 1:30:20); the full quote is even stronger, but she doesn’t leave enough time for an answer.

I am very grateful for the statement, even with no time left to respond. There is something so weird about asking two other questions first, then getting to ASI.

Representative Moran asks Clark, what’s the most important thing to win this race? Clark chooses power, followed by compute and then government infrastructure, and suggests working backwards from the goal of 50 GW in 2027. Mahnken is asked next and suggests trying to slow down the Chinese.

Moran notices that AI is not like older programming, that it effectively will write its own rules and programming and will soon do its own research and asks what’s up with that. Clark says more research is urgently needed, and points out you wouldn’t want an AI that can blackmail you designing its successor. I’m torn on whether that cuts to the heart of the question in a useful way or not here.

Moran then asks: what is the ‘red line’ on AI that the Chinese cannot be allowed to cross? Beall confirms AI systems are grown, not built, that it is alchemy, and that automated R&D is the red line and a really big deal; we need to be up to speed on that.

Representative Conner notes NIST’s safety testing is voluntary and asks if there should be some minimum third party verification required, if only to verify the company’s own standards. All right, Clark, he served it up for you, here’s the ball, what have you got?

Clark: This question illustrates the challenge we have about weighing safety versus, you know, moving ahead as quickly as possible. We need to first figure out what we want to hold to that standard of testing.

Today the voluntary agreements rest on CBRN testing and some forms of cyberattack testing. Once we have standards that we’re confident of, I think you can take a look at the question of whether voluntary is sufficient or you need something else.

But my sense is it’s too early, and we first need to design those tests and really agree on those before figuring out what the next step would be, and who would design those tests: is it the AI institute, or is it the private sector who comes up with what those tests should be? Today these tests are done highly collaboratively between the US private sector, which you mentioned, and parts of the US government, including those in the intelligence and defense community. I think bringing those people together,

So that we have the nation’s best experts on this, and standards and tests that we all agree on, is the first step that we can take to get us to everything else. [Asked by when this needs to be done:] It would be ideal to have this within a year. The timelines that I’ve spoken about in this hearing are that powerful AI arrives at the end of 2026 or early 2027. Before then we would ideally have standard tests for the national security properties that we deeply care about.

I’m sorry, I think the word you were looking for was ‘yes’? What the hell? This is super frustrating. I mean as worded how is this even a question? You don’t need to know the final exact testing requirements before you start to move towards such a regime. There are so many different ways this answer is a missed opportunity.

The last question goes back to They Took Our Jobs, and Clark basically can only say we can gather data, and there are areas that won’t be impacted soon by AI, again pretending his CEO Dario Amodei hadn’t warned of a jobs ‘bloodbath.’ Beall steps up and says the actual damn thing (within the jobs context), which is that we face a potential future where humans are not only unemployed but unemployable, and we have to have those conversations in advance.

And we end on this not so reassuring note:

Mark Beall: When I hear folks in industry claim things about universal basic income and this sort of digital utopia, I, you know, I study history. I worry that that sort of thing leads to one place, and that place is the Gulag.

That is quite the bold warning, and an excellent place to end the hearing. It is not the way I would have put it, but yes, the idea of most or all of humanity being entirely disempowered and unproductive except for our little status games, existing off of gifted resources, property rights, rule of law, and some form of goodwill, and hoping all of this holds up, does not seem like a plan that is likely to end well. At least, not for those humans. No, having ‘solved the alignment problem’ does not on its own get you out of this in any way; solving the alignment problem is the price to try at all.

And that is indeed one kind of thing we need to think about now.

Is this where I wanted the conversation to be in 2025? Oh, hell no.

It’s a start.


Congress Asks Better Questions


Congress passes bill to jumpstart new nuclear power tech

A nuclear reactor and two cooling towers on a body of water, with a late-evening glow in the sky.

Earlier this week, the US Senate passed what’s being called the ADVANCE Act, for Accelerating Deployment of Versatile, Advanced Nuclear for Clean Energy. Among a number of other changes, the bill would attempt to streamline permitting for newer reactor technology and offer cash incentives for the first companies that build new plants that rely on one of a handful of different technologies. It enjoyed broad bipartisan support both in the House and Senate and now heads to President Biden for his signature.

Given Biden’s penchant for promoting his bipartisan credentials, it’s likely to be signed into law. But the biggest hurdles nuclear power faces are all economic, rather than regulatory, and the bill provides very little in the way of direct funding that could help overcome those barriers.

Incentives

For reasons that will be clear only to congressional staffers, the Senate version of the bill was attached to an amendment to the Federal Fire Prevention and Control Act. Nevertheless, it passed by a margin of 88-2, indicating widespread (and potentially veto-proof) support. Having passed the House already, there’s nothing left but the president’s signature.

The bill’s language focuses on the Nuclear Regulatory Commission (NRC) and its role in licensing nuclear reactor technology. The NRC is directed to develop a variety of reports for Congress—so, so many reports, focusing on everything from nuclear waste to fusion power—that could potentially inform future legislation. But the meat of the bill has two distinct focuses: streamlining regulation and providing some incentives for new technology.

The incentives are one of the more interesting features of the bill. They’re primarily focused on advanced nuclear technology, which is defined extremely broadly by an earlier statute as providing any of the following:

    • (A) additional inherent safety features
    • (B) significantly lower levelized cost of electricity
    • (C) lower waste yields
    • (D) greater fuel utilization
    • (E) enhanced reliability
    • (F) increased proliferation resistance
    • (G) increased thermal efficiency
    • (H) ability to integrate into electric and nonelectric applications

Normally, the work of the NRC in licensing is covered via application fees paid by the company seeking the license. But the NRC is instructed to lower its licensing fees for anyone developing advanced nuclear technologies. And there’s a “prize” incentive where the first company to get across the line with any of a handful of specific technologies will have all these fees refunded to it.

Winners will be awarded when they have met any of the following requirements: the first advanced reactor design that receives a license from the NRC; the first to be loaded with fuel for operation; the first to use isotopes derived from spent fuel; the first to build a facility where the reactor is integrated into a system that stores energy; the first to build a facility where the reactor provides electricity or processes heat for industrial applications.

The first award will likely go to NuScale, which is developing a small, modular reactor design and has gotten pretty far along in the licensing process. Its first planned installation, however, has been cancelled due to rising costs, so there’s no guarantee that the company will be first to fuel a reactor. TerraPower, a company backed by Bill Gates, is fairly far along in the design of a reactor facility that will come with integrated storage, and so may be considered a frontrunner there.

For the remaining two prizes, there aren’t frontrunners for very different reasons. Nearly every company building small modular nuclear reactors promotes them as a potential source of process heat. By contrast, reprocessing spent fuel has been hugely expensive in any country where it has been tried, so it’s unlikely that prize will ever be given out.



Sharing deepfake porn could lead to lengthy prison time under proposed law

Fake nudes, real harms —

Teen “shouting for change” after fake nude images spread at NJ high school.


The US seems to be getting serious about criminalizing deepfake pornography after teen boys at a New Jersey high school used AI image generators to create and share non-consensual fake nude images of female classmates last October.

On Tuesday, Rep. Joseph Morelle (D-NY) announced that he has re-introduced the “Preventing Deepfakes of Intimate Images Act,” which seeks to “prohibit the non-consensual disclosure of digitally altered intimate images.” Under the proposed law, anyone sharing deepfake pornography without an individual’s consent risks damages that could go as high as $150,000 and imprisonment of up to 10 years if sharing the images facilitates violence or impacts the proceedings of a government agency.

The hope is that steep penalties will deter companies and individuals from allowing the disturbing images to be spread. The bill creates a criminal offense for sharing deepfake pornography “with the intent to harass, annoy, threaten, alarm, or cause substantial harm to the finances or reputation of the depicted individual” or with “reckless disregard” or “actual knowledge” that images will harm the individual depicted. It also provides a path for victims to sue offenders in civil court.

Rep. Tom Kean (R-NJ), who co-sponsored the bill, said that “proper guardrails and transparency are essential for fostering a sense of responsibility among AI companies and individuals using AI.”

“Try to imagine the horror of receiving intimate images looking exactly like you—or your daughter, or your wife, or your sister—and you can’t prove it’s not,” Morelle said. “Deepfake pornography is sexual exploitation, it’s abusive, and I’m astounded it is not already a federal crime.”

Joining Morelle in pushing to criminalize deepfake pornography were Dorota and Francesca Mani, who have spent the past two months meeting with lawmakers, The Wall Street Journal reported. The mother and daughter experienced the horror Morelle described firsthand when the New Jersey high school confirmed that 14-year-old Francesca was among the students targeted last year.

“What happened to me and my classmates was not cool, and there’s no way I’m just going to shrug and let it slide,” Francesca said. “I’m here, standing up and shouting for change, fighting for laws, so no one else has to feel as lost and powerless as I did on October 20th.”

Morelle’s office told Ars that “advocacy from partners like the Mani family” is “critical to bringing attention to this issue” and getting the proposed law “to the floor for a vote.”

Morelle introduced the law in December 2022, but it failed to pass that year or in 2023. He’s re-introducing the law in 2024 after seemingly gaining more support during a House Oversight subcommittee hearing on “Advances in Deepfake Technology” last November.

At that hearing, many lawmakers warned of the dangers of AI-generated deepfakes, citing a study from the Dutch AI company Sensity, which found that 96 percent of deepfakes online are deepfake porn—the majority of which targets women.

But lawmakers also made clear that it’s currently hard to detect AI-generated images and distinguish them from real images.

According to a hearing transcript posted by the nonprofit news organization Tech Policy Press, David Doermann—currently interim chair of the University at Buffalo’s computer science and engineering department and former program manager at the Defense Advanced Research Projects Agency (DARPA)—told lawmakers that DARPA was already working on advanced deepfake detection tools but still had more work to do.

To support laws like Morelle’s, lawmakers have called for more funding for DARPA and the National Science Foundation to aid in ongoing efforts to create effective detection tools. At the same time, President Joe Biden—through a sweeping AI executive order—has pushed for solutions like watermarking deepfakes. Biden’s executive order also instructed the Department of Commerce to establish “standards and best practices for detecting AI-generated content and authenticating official content.”

Morelle is working to push his law through in 2024, warning that deepfake pornography is already affecting a “generation of young women like Francesca,” who are “ready to stand up against systemic oppression and stand in their power.”

Until the federal government figures out how to best prevent the sharing of AI-generated deepfakes, Francesca and her mom plan to keep pushing for change.

“Our voices are our secret weapon, and our words are like power-ups in Fortnite,” Francesca said. “My mom and I are advocating to create a world where being safe isn’t just a hope; it’s a reality for everyone.”
