While chipmakers wait for more clarity, Lutnick has suggested that Trump—who campaigned on killing the CHIPS Act—has found a way to salvage the legislation that Joe Biden viewed as his lasting legacy. It seems possible that the plan arose after Trump realized how hard it would be to ax the legislation completely, with grants already finalized (but most not disbursed).
“The Biden administration literally was giving Intel money for free and giving TSMC money for free, and all these companies just giving the money for free, and Donald Trump turned it into saying, ‘Hey, we want equity for the money. If we’re going to give you the money, we want a piece of the action for the American taxpayer,'” Lutnick said.
“It’s not governance, we’re just converting what was a grant under Biden into equity for the Trump administration, for the American people,” Lutnick told CNBC.
Further, US firms could benefit from any such arrangements. For Intel, the “highly unusual” deal that Trump is mulling could help the struggling chipmaker compete with its biggest rivals, including Nvidia, Samsung, and TSMC, the BBC noted.
Vincent Fernando, founder of the investment consultancy Zero One, told the BBC that taking a stake in Intel “makes sense, given the company’s key role in producing semiconductors in the US,” which is a major Trump priority.
But as Intel likely explores the potential downsides of accepting such a deal, other companies applying for federal grants may already be alarmed by Trump’s move. Fernando suggested that Trump’s deals to take an ownership stake in US firms—which economics professor Kevin J. Fox said previously occurred only during the global financial crisis—could add “uncertainty for any company who is already part of a federal grant program or considering one.”
Fox also agreed that the Intel deal could deter other companies from accepting federal grants, while possibly making it harder for Intel to run its business “effectively.”
Trump’s attacks on Intel CEO may stem from beef with Biden
Lip-Bu Tan, chief executive officer of Intel Corp., departs following a meeting at the White House. President Donald Trump said Tan had an “amazing story” after the meeting.
Donald Trump has been meddling with Intel, which now apparently includes mulling “the possibility of the US government taking a financial stake in the troubled chip maker,” The Wall Street Journal reported.
Trump and Intel CEO Lip-Bu Tan weighed the option during a meeting on Monday at the White House, people familiar with the matter told WSJ. These talks have only just begun—with Intel branding them a rumor—and sources told the WSJ that Trump has yet to iron out how the potential arrangement might work.
The WSJ’s report comes after Trump called for Tan to “resign immediately” last week. Trump’s demand was seemingly spurred by a letter that Republican senator Tom Cotton sent to Intel, accusing Tan of having “concerning” ties to the Chinese Communist Party.
Cotton accused Tan of controlling “dozens of Chinese companies” and holding a stake in “hundreds of Chinese advanced-manufacturing and chip firms,” at least eight of which “reportedly have ties to the Chinese People’s Liberation Army.”
Further, before joining Intel, Tan was CEO of Cadence Design Systems, which recently “pleaded guilty to illegally selling its products to a Chinese military university and transferring its technology to an associated Chinese semiconductor company without obtaining license.”
“These illegal activities occurred under Mr. Tan’s tenure,” Cotton pointed out.
He demanded answers by August 15 from Intel on whether it weighed Tan’s alleged Cadence conflicts of interest against the company’s requirements to comply with US national security laws after accepting $8 billion in CHIPS Act funding—the largest grant awarded during Joe Biden’s term. The senator also asked Intel if Tan was required to make any divestments to meet CHIPS Act obligations and whether Tan has ever disclosed to the US government any ties to the Chinese government.
Neither Intel nor Cotton’s office responded to Ars’ request to comment on the letter or confirm whether Intel has responded.
But Tan has claimed that there is “a lot of misinformation” about his career and portfolio, the South China Morning Post reported. Born in Malaysia, Tan has been a US citizen for 40 years after finishing postgraduate studies in nuclear engineering at the Massachusetts Institute of Technology.
In an op-ed, SCMP reporter Alex Lo suggested that Tan’s investments—which include stakes in China’s largest sanctioned chipmaker, SMIC, as well as “several” companies on US trade blacklists, SCMP separately reported—seem no different than other US executives and firms with substantial investments in Chinese firms.
“Cotton accused [Tan] of having extensive investments in China,” Lo wrote. “Well, name me a Wall Street or Silicon Valley titan in the past quarter of a century who didn’t have investment or business in China. Elon Musk? Apple? BlackRock?”
He also noted that “numerous news reports” indicated that “Cadence staff in China hid the dodgy sales from the company’s compliance officers and bosses at the US headquarters,” which Intel may explain to Cotton if a response comes later today.
Any red flags that Intel’s response may raise seem likely to heighten Trump’s scrutiny as he looks to make what Reuters called yet another “unprecedented intervention” by a president in a US firm’s business. Previously, Trump surprised the tech industry by threatening the first-ever tariffs aimed at a US company (Apple); more recently, he struck an unusual deal with Nvidia and AMD that gives the US a 15 percent cut of the firms’ revenue from China chip sales.
However, Trump was seemingly impressed by Tan after some face-time this week. Trump came out of their meeting professing that Tan has an “amazing story,” Bloomberg reported, noting that any agreement between Trump and Tan “would likely help Intel build out” its planned $28 billion chip complex in Ohio.
Those chip fabs—boosted by CHIPS Act funding—were supposed to put Intel on track to launch operations by 2030, but delays have set that back by five years, Bloomberg reported. That almost certainly scrambles another timeline that Biden’s Commerce Secretary Gina Raimondo had suggested would ensure that “20 percent of the world’s most advanced chips are made in the US by the end of the decade.”
Why Intel may be into Trump’s deal
At one point, Intel was the undisputed leader in chip manufacturing, Bloomberg noted, but its value plummeted from $288 billion in 2020 to $104 billion today. The chipmaker has been struggling for a while—falling behind as Nvidia grew to dominate the AI chip industry—and 2024 was its “first unprofitable year since 1986,” Reuters reported. As the dismal year wound down, Intel’s longtime CEO Pat Gelsinger retired.
Gelsinger, a longtime Intel veteran, acknowledged the “challenging year.” Now Tan is expected to turn the company around. To do that, he may need to deprioritize the manufacturing process that Gelsinger pushed, which Tan suspects contributed to Intel being viewed as an outdated firm, anonymous insiders told Reuters. Sources suggest he’s planning to pivot Intel to focus more on “a next-generation chipmaking process where Intel expects to have advantages over Taiwan’s TSMC,” which currently dominates chip manufacturing and even counts Intel as a customer, Reuters reported. As it stands now, TSMC “produces about a third of Intel’s supply,” SCMP reported.
This pivot is supposedly how Tan expects Intel can eventually poach TSMC’s biggest customers like Apple and Nvidia, Reuters noted.
Intel has so far claimed that any discussions of Tan’s supposed plans amount to nothing but speculation. But if Tan did go that route, one source told Reuters that Intel would likely have to take a write-off that industry analysts estimate could trigger losses “of hundreds of millions, if not billions, of dollars.”
Perhaps facing that hurdle, Tan might be open to agreeing to the US purchasing a financial stake in the company while he rights the ship.
Trump/Intel deal reminiscent of TikTok deal
Any deal would certainly deepen the government’s involvement in the US chip industry, which is widely viewed as critical to US national security.
While unusual, the deal does seem somewhat reminiscent of the TikTok buyout that the Trump administration has been trying to iron out since he took office. Through that deal, the US would acquire enough ownership divested from China-linked entities to supposedly appease national security concerns, but China has so far been hesitant to sign off on any of Trump’s proposals.
Last month, Trump admitted that he wasn’t confident that he could sell China on the TikTok deal, which TikTok suggested would have resulted in a glitchier version of the app for American users. More recently, Trump’s commerce secretary threatened to shut down TikTok if China refuses to approve the current version of the deal.
Perhaps the terms of a US deal with Intel could require Tan to divest certain holdings that the US fears compromise the CEO. Under the terms of the CHIPS Act grant, Intel is already required to be “a responsible steward of American taxpayer dollars and to comply with applicable security regulations,” Cotton reminded the company in his letter.
But social media users in Malaysia and Singapore have accused Cotton of a “usual case of racism” in attacking Intel’s CEO, SCMP reported. They noted that Cotton “was the same person who repeatedly accused TikTok CEO Shou Zi Chew of ties with the Chinese Communist Party despite his insistence of being a Singaporean,” SCMP reported.
“Now it’s the Intel’s CEO’s turn on the chopping block for being [ethnic] Chinese,” a Facebook user, Michael Ong, said.
Tensions were so high that there was even a social media push for Tan to “call on Trump’s bluff and resign, saying ‘Intel is the next Nokia’ and that Chinese firms would gladly take him instead,” SCMP reported.
So far, Tan has not criticized the Trump administration for questioning his background, but he did issue a statement yesterday, seemingly appealing to Trump by emphasizing his US patriotism.
“I love this country and am profoundly grateful for the opportunities it has given me,” Tan said. “I also love this company. Leading Intel at this critical moment is not just a job—it’s a privilege.”
Trump’s Intel attacks rooted in Biden beef?
In his op-ed, SCMP’s Lo suggested that “Intel itself makes a good punching bag” as the biggest recipient of CHIPS Act funding. The CHIPS Act was supposed to be Biden’s lasting legacy in the US, and Trump has resolved to dismantle it, criticizing supposed handouts to tech firms that Trump prefers to strong-arm into US manufacturing instead through unpredictable tariff regimes.
“The attack on Intel is also an attack on Trump’s predecessor, Biden, whom he likes to blame for everything, even though the industrial policies of both administrations and their tech war against China are similar,” Lo wrote.
At least one lawmaker is ready to join critics who question whether Trump’s trade war is truly motivated by national security concerns. On Friday, US representative Raja Krishnamoorthi (D-Ill.) sent a letter to Trump “expressing concern” over Trump allowing Nvidia to resume exports of its H20 chips to China.
“Trump’s reckless policy on AI chip exports sells out US security to Beijing,” Krishnamoorthi warned.
“Allowing even downgraded versions of cutting-edge AI hardware to flow” to the People’s Republic of China (PRC) “risks accelerating Beijing’s capabilities and eroding our technological edge,” Krishnamoorthi wrote. Further, “the PRC can build the largest AI supercomputers in the world by purchasing a moderately larger number of downgraded Blackwell chips—and achieve the same capability to train frontier AI models and deploy them at scale for national security purposes.”
Krishnamoorthi asked Trump to send responses by August 22 to four questions. Perhaps most urgently, he wants Trump to explain what specific legal authority would allow the US government to “extract revenue sharing as a condition for the issuance of export licenses” and what exactly he intends to do with those funds.
Trump was also asked to confirm if the president followed protocols established by Congress to ensure proper export licensing through the agreement. Finally, Krishnamoorthi demanded to know if Congress was ever “informed or consulted at any point during the negotiation or development of this reported revenue-sharing agreement with NVIDIA and AMD.”
“The American people deserve transparency,” Krishnamoorthi wrote. “Our export control regime must be based on genuine security considerations, not creative taxation schemes disguised as national security policy.”
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
The Biden-era chip restriction framework, which we covered in January, established a three-tiered system for regulating AI chip exports. The first tier included 17 countries, plus Taiwan, that could receive unlimited advanced chips. A second tier of roughly 120 countries faced caps on the number of chips they could import. The administration entirely blocked the third tier, which included China, Russia, Iran, and North Korea, from accessing the chips.
Commerce Department officials now say they “didn’t like the tiered system” and considered it “unenforceable,” according to Reuters. While no timeline exists for the new rule, the spokeswoman indicated that officials are still debating the best approach to replace it. The Biden rule was set to take effect on May 15.
Reports suggest the Trump administration might discard the tiered approach in favor of a global licensing system built on government-to-government agreements. That could involve direct negotiations with nations like the United Arab Emirates or Saudi Arabia rather than applying broad regional restrictions.
“The key here is not whether Broadband Internet Service Providers utilize telecommunications; it is instead whether they do so while offering to consumers the capability to do more,” Griffin wrote, concluding that “they do.”
“The FCC exceeded its statutory authority,” Griffin wrote, at one point accusing the FCC of arguing for a reading of the statute “that is too sweeping.”
The three-judge panel ordered a stay of the FCC’s order imposing net neutrality rules—known as the Safeguarding and Securing the Open Internet Order.
In a statement, FCC chair Jessica Rosenworcel suggested that Congress would likely be the only path to safeguarding net neutrality moving forward. Experts have noted that net neutrality is critical to enabling new applications, services, and content, warning that without clear rules, the next Amazon or YouTube could be throttled before it can get off the ground.
“Consumers across the country have told us again and again that they want an Internet that is fast, open, and fair,” Rosenworcel said. “With this decision it is clear that Congress now needs to heed their call, take up the charge for net neutrality, and put open Internet principles in federal law.”
Under Joe Biden’s direction, the US Trade Representative (USTR) launched a probe Monday into China’s plans to globally dominate markets for legacy chips—alleging that China’s unfair trade practices threaten US national security and could thwart US efforts to build up a domestic semiconductor supply chain.
Unlike the most advanced chips used to power artificial intelligence that are currently in short supply, these legacy chips rely on older manufacturing processes and are more ubiquitous in mass-market products. They’re used in tech for cars, military vehicles, medical devices, smartphones, home appliances, space projects, and much more.
China apparently “plans to build more than 60 percent of the world’s new legacy chip capacity over the next decade,” and Commerce Secretary Gina Raimondo said evidence showed this was “discouraging investment elsewhere and constituted unfair competition,” Reuters reported.
Most purchasers of common goods, including government agencies, don’t even realize they’re using Chinese chips, and the probe is meant to fix that by flagging everywhere Chinese chips are found in the US. Raimondo said she was “fairly alarmed” that research showed “two-thirds of US products using chips had Chinese legacy chips in them, and half of US companies did not know the origin of their chips including some in the defense industry.”
To deter harms from any of China’s alleged anticompetitive behavior, the USTR plans to spend a year investigating all of China’s acts, policies, and practices that could be helping China achieve global dominance in the foundational semiconductor market.
The agency will start by probing “China’s manufacturing of foundational semiconductors (also known as legacy or mature node semiconductors),” the press release said, “including to the extent that they are incorporated as components into downstream products for critical industries like defense, automotive, medical devices, aerospace, telecommunications, and power generation and the electrical grid.”
Additionally, the probe will assess China’s potential impact on “silicon carbide substrates (or other wafers used as inputs into semiconductor fabrication)” to ensure China isn’t burdening or restricting US commerce.
Some officials were frustrated that Biden didn’t launch the probe sooner, the Financial Times reported. It will ultimately be up to Donald Trump’s administration to complete the investigation, but Biden and Trump have long been aligned on US-China trade strategies, so Trump is not necessarily expected to meddle with the probe. Reuters noted that the probe could set Trump up to pursue his campaign promise of imposing a 60 percent tariff on all goods from China, but FT pointed out that Trump could also plan to use tariffs as a “bargaining chip” in his own trade negotiations.
How do you get an AI model to confess what’s inside?
Credit: Aurich Lawson | Getty Images
Since ChatGPT became an instant hit roughly two years ago, tech companies around the world have rushed to release AI products while the public is still in awe of AI’s seemingly radical potential to enhance their daily lives.
But at the same time, governments globally have warned it can be hard to predict how rapidly popularizing AI can harm society. Novel uses could suddenly debut and displace workers, fuel disinformation, stifle competition, or threaten national security—and those are just some of the obvious potential harms.
While governments scramble to establish systems to detect harmful applications—ideally before AI models are deployed—some of the earliest lawsuits over ChatGPT show just how hard it is for the public to crack open an AI model and find evidence of harms once a model is released into the wild. That task is seemingly only made harder by an increasingly thirsty AI industry intent on shielding models from competitors to maximize profits from emerging capabilities.
The less the public knows, the harder and more expensive it seemingly is to hold companies accountable for irresponsible AI releases. This fall, ChatGPT-maker OpenAI was even accused of trying to profit off discovery by seeking to charge litigants retail prices to inspect AI models alleged to have caused harms.
Under the inspection protocol proposed in that case, the NYT could hire an expert to review highly confidential OpenAI technical materials “on a secure computer in a secured room without Internet access or network access to other computers at a secure location” of OpenAI’s choosing. In this closed-off arena, the expert would have limited time and limited queries to try to get the AI model to confess what’s inside.
The NYT seemingly had few concerns about the inspection process itself but balked at OpenAI’s proposed protocol capping the number of queries its expert could make through an application programming interface at $15,000 worth of retail credits. Once litigants hit that cap, OpenAI suggested, the parties would split the costs of remaining queries, with OpenAI charging the NYT and co-plaintiffs half-retail prices to finish the rest of their discovery.
In September, the NYT told the court that the parties had reached an “impasse” over this protocol, alleging that “OpenAI seeks to hide its infringement by professing an undue—yet unquantified—’expense.'” According to the NYT, plaintiffs would need $800,000 worth of retail credits to seek the evidence they need to prove their case, but there’s allegedly no way it would actually cost OpenAI that much.
“OpenAI has refused to state what its actual costs would be, and instead improperly focuses on what it charges its customers for retail services as part of its (for profit) business,” the NYT claimed in a court filing.
In its defense, OpenAI has said that setting the initial cap is necessary to reduce the burden on OpenAI and prevent a NYT fishing expedition. The ChatGPT maker alleged that plaintiffs “are requesting hundreds of thousands of dollars of credits to run an arbitrary and unsubstantiated—and likely unnecessary—number of searches on OpenAI’s models, all at OpenAI’s expense.”
How this court debate resolves could have implications for future cases in which the public seeks to inspect models alleged to cause harms. If the court agrees that OpenAI can charge retail prices for model inspection, that precedent could deter lawsuits from plaintiffs who can’t afford to pay an AI expert or commercial inspection prices.
Lucas Hansen, co-founder of CivAI—a company that seeks to enhance public awareness of what AI can actually do—told Ars that probably a lot of inspection can be done on public models. But often, public models are fine-tuned, perhaps censoring certain queries and making it harder to find information that a model was trained on—which is the goal of NYT’s suit. By gaining API access to original models instead, litigants could have an easier time finding evidence to prove alleged harms.
It’s unclear exactly what it costs OpenAI to provide that level of access. Hansen told Ars that the cost of training and experimenting with models “dwarfs” the cost of running them to provide full-capability solutions. Developers have noted in forums that the costs of API queries quickly add up, with one claiming OpenAI’s pricing is “killing the motivation to work with the APIs.”
The NYT’s lawyers and OpenAI declined to comment on the ongoing litigation.
A recent Gallup survey suggests that Americans are more trusting of AI than ever but still twice as likely to believe AI does “more harm than good” than that the benefits outweigh the harms. Hansen’s CivAI creates demos and interactive software for education campaigns helping the public to understand firsthand the real dangers of AI. He told Ars that while it’s hard for outsiders to trust a study from “some random organization doing really technical work” to expose harms, CivAI provides a controlled way for people to see for themselves how AI systems can be misused.
“It’s easier for people to trust the results, because they can do it themselves,” Hansen told Ars.
Hansen also advises lawmakers grappling with AI risks. In February, CivAI joined the Artificial Intelligence Safety Institute Consortium—a group including Fortune 500 companies, government agencies, nonprofits, and academic research teams that help to advise the US AI Safety Institute (AISI). But so far, Hansen said, CivAI has not been very active in that consortium beyond scheduling a talk to share demos.
The AISI is supposed to protect the US from risky AI models by conducting safety testing to detect harms before models are deployed. Testing should “address risks to human rights, civil rights, and civil liberties, such as those related to privacy, discrimination and bias, freedom of expression, and the safety of individuals and groups,” President Joe Biden said in a national security memo last month, urging that safety testing was critical to support unrivaled AI innovation.
“For the United States to benefit maximally from AI, Americans must know when they can trust systems to perform safely and reliably,” Biden said.
But the AISI’s safety testing is voluntary, and while companies like OpenAI and Anthropic have agreed to the voluntary testing, not every company has. Hansen is worried that AISI is under-resourced and under-budgeted to achieve its broad goals of safeguarding America from untold AI harms.
“The AI Safety Institute predicted that they’ll need about $50 million in funding, and that was before the National Security memo, and it does not seem like they’re going to be getting that at all,” Hansen told Ars.
The AISI was probably never going to be funded well enough to detect and deter all AI harms, but with its future unclear, even the limited safety testing the US had planned could be stalled at a time when the AI industry continues moving full speed ahead.
That could largely leave the public at the mercy of AI companies’ internal safety testing. As frontier models from big companies will likely remain under society’s microscope, OpenAI has promised to increase investments in safety testing and help establish industry-leading safety standards.
According to OpenAI, that effort includes making models safer over time, less prone to producing harmful outputs, even with jailbreaks. But OpenAI has a lot of work to do in that area, as Hansen told Ars that he has a “standard jailbreak” for OpenAI’s most popular release, ChatGPT, “that almost always works” to produce harmful outputs.
The AISI did not respond to Ars’ request to comment.
NYT “nowhere near done” inspecting OpenAI models
For the public, who often become guinea pigs when AI acts unpredictably, risks remain, as the NYT case suggests that the costs of fighting AI companies could go up while technical hiccups could delay resolutions. Last week, an OpenAI filing showed that NYT’s attempts to inspect pre-training data in a “very, very tightly controlled environment” like the one recommended for model inspection were allegedly continuously disrupted.
“The process has not gone smoothly, and they are running into a variety of obstacles to, and obstructions of, their review,” the court filing describing NYT’s position said. “These severe and repeated technical issues have made it impossible to effectively and efficiently search across OpenAI’s training datasets in order to ascertain the full scope of OpenAI’s infringement. In the first week of the inspection alone, Plaintiffs experienced nearly a dozen disruptions to the inspection environment, which resulted in many hours when News Plaintiffs had no access to the training datasets and no ability to run continuous searches.”
OpenAI was additionally accused of refusing to install software the litigants needed and randomly shutting down ongoing searches. Frustrated after more than 27 days of inspecting data and getting “nowhere near done,” the NYT keeps pushing the court to order OpenAI to provide the data instead. In response, OpenAI said plaintiffs’ concerns were either “resolved” or discussions remained “ongoing,” suggesting there was no need for the court to intervene.
So far, the NYT claims that it has found millions of plaintiffs’ works in the ChatGPT pre-training data but has been unable to confirm the full extent of the alleged infringement due to the technical difficulties. Meanwhile, costs keep accruing in every direction.
“While News Plaintiffs continue to bear the burden and expense of examining the training datasets, their requests with respect to the inspection environment would be significantly reduced if OpenAI admitted that they trained their models on all, or the vast majority, of News Plaintiffs’ copyrighted content,” the court filing said.
That’s not the only uncertainty at play. Just last week, House Speaker Mike Johnson—a staunch Trump supporter—said that Republicans “probably will” repeal the bipartisan CHIPS and Science Act, which is a Biden initiative to spur domestic semiconductor chip production, among other aims. Trump has previously spoken out against the bill. After getting some pushback on his comments from Democrats, Johnson said he would like to “streamline” the CHIPS Act instead, according to The Associated Press.
Then there’s the Elon Musk factor. The tech billionaire spent tens of millions through a political action committee supporting Trump’s campaign and has been angling for regulatory influence in the new administration. His AI company, xAI, which makes the Grok-2 language model, stands alongside his other ventures—Tesla, SpaceX, Starlink, Neuralink, and X (formerly Twitter)—as businesses that could see regulatory changes in his favor under a new administration.
What might take its place
If Trump strips away federal regulation of AI, state governments may step in to fill any federal regulatory gaps. For example, in March, Tennessee enacted protections against AI voice cloning, and in May, Colorado created a tiered system for AI deployment oversight. In September, California passed multiple AI safety bills, one requiring companies to publish details about their AI training methods and a contentious anti-deepfake bill aimed at protecting the likenesses of actors.
So far, it’s unclear what Trump’s policies on AI might represent besides “deregulate whenever possible.” During his campaign, Trump promised to support AI development centered on “free speech and human flourishing,” though he provided few specifics. He has called AI “very dangerous” and spoken about its high energy requirements.
Trump allies at the America First Policy Institute have previously stated they want to “Make America First in AI” with a new Trump executive order, which still only exists as a speculative draft, to reduce regulations on AI and promote a series of “Manhattan Projects” to advance military AI capabilities.
During his previous administration, Trump signed AI executive orders that focused on research institutes and directing federal agencies to prioritize AI development while mandating that federal agencies “protect civil liberties, privacy, and American values.”
But with a different AI environment these days in the wake of ChatGPT and media-reality-warping image synthesis models, those earlier orders likely don’t point the way to future positions on the topic. For more details, we’ll have to wait and see what unfolds.
Google isn’t alone in eyeing nuclear power as an energy source for massive data centers. In September, Ars reported on a plan from Microsoft to reopen the Three Mile Island nuclear power plant in Pennsylvania to fulfill some of its power needs. And the US government is getting in on the nuclear act as well: the bipartisan ADVANCE Act, signed in July, aims to jump-start new nuclear power technology.
AI is driving demand for nuclear
In some ways, it would be an interesting twist if demand for training and running power-hungry AI models, which are often criticized as wasteful, ends up kick-starting a nuclear power renaissance that helps wean the US off fossil fuels and eventually reduces the impact of global climate change. These days, almost every Big Tech corporate position could be seen as an optics play designed to increase shareholder value, but this may be one of the rare times when the needs of giant corporations accidentally align with the needs of the planet.
Even from a cynical angle, the partnership between Google and Kairos Power represents a step toward the development of next-generation nuclear power as an ostensibly clean energy source (especially when compared to coal-fired power plants). As the world sees increasing energy demands, collaborations like this one, along with adopting solutions like solar and wind power, may play a key role in reducing greenhouse gas emissions.
Despite that potential upside, some experts are deeply skeptical of the Google-Kairos deal, suggesting that this recent rush to nuclear may result in Big Tech ownership of clean power generation. Dr. Sasha Luccioni, Climate and AI Lead at Hugging Face, wrote on X, “One step closer to a world of private nuclear power plants controlled by Big Tech to power the generative AI boom. Instead of rethinking the way we build and deploy these systems in the first place.”
Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.
Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.
The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again at a photo of him with E. Jean Carroll, a writer who successfully sued him for sexual assault; the photo contradicts his claim of never having met her.
Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term “liar’s dividend” in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.
The rise of deepfakes, the persistence of doubt
Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.
Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be non-synthesized organic media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping out the face of a performer with the face of someone else who wasn’t part of the original recording.
In the 20th century, one could argue that part of our trust in media produced by others stemmed from the expense, time, and skill it took to produce documentary images and films. Even texts required a great deal of time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. It will also affect our political discourse, legal systems, and even our shared understanding of historical events, all of which rely on that media to function, because we rely on others to get information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.
In April, a panel of federal judges highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. Ultimately, the judges decided to postpone making any AI-related rule changes, but their meeting shows that the subject is already being considered by American judges.
US President Joe Biden (C) speaks during a tour of the TSMC Semiconductor Manufacturing Facility in Phoenix, Arizona, on December 6, 2022.
In the hopes of dodging a significant projected worker shortage in the next few years, the Biden administration will finally start funding workforce development projects to support America’s ambitions to become the world’s leading chipmaker through historic CHIPS and Science Act investments.
The Workforce Partner Alliance (WFPA) will be established through the CHIPS Act’s first round of funding focused on workers, officials confirmed in a press release. The program is designed to “focus on closing workforce and skills gaps in the US for researchers, engineers, and technicians across semiconductor design, manufacturing, and production,” a program requirements page said.
Bloomberg reported that the US risks a technician shortage reaching 90,000 by 2030. This differs slightly from Natcast’s forecast, which found that out of “238,000 jobs the industry is projected to create by 2030,” the semiconductor industry “will be unable to fill more than 67,000.”
Whatever the actual industry demand turns out to be, tens of thousands of jobs are projected to need filling just as the country hopes to produce more chips than ever, and the Biden administration is hoping to quickly train enough workers to fill openings for “researchers, engineers, and technicians across semiconductor design, manufacturing, and production,” a WFPA site said.
To do this, a “wide range of workforce solution providers” are encouraged to submit “high-impact” WFPA project proposals that can be completed within two years, with total budgets of between $500,000 and $2 million per award, the press release said.
Examples of “evidence-based workforce development strategies and methodologies that may be considered for this program” include registered apprenticeship and pre-apprenticeship programs, colleges or universities offering semiconductor industry-relevant degrees, programs combining on-the-job training with effective education or mentorship, and “experiential learning opportunities such as co-ops, externships, internships, or capstone projects.” While programs supporting construction activities will not be considered, programs designed to “reduce barriers” to entry in the semiconductor industry can use funding to support workers’ training, such as for providing childcare or transportation for workers.
“Making investments in the US semiconductor workforce is an opportunity to serve underserved communities, to connect individuals to good-paying sustainable jobs across the country, and to develop a robust workforce ecosystem that supports an industry essential to the national and economic security of the US,” Natcast said.
Between four and 10 projects will be selected, providing opportunities for “established programs with a track record of success seeking to scale,” as well as for newer programs “that meet a previously unaddressed need, opportunity, or theory of change” to be launched or substantially expanded.
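Taken together, the award count and per-award budget range in the press release imply rough bounds on total program funding. The sketch below uses only the figures quoted above; it is illustrative arithmetic, not an official Natcast total:

```python
# Rough bounds on total WFPA funding, derived only from the figures in
# Natcast's press release: 4 to 10 awards at $500,000 to $2 million each.
MIN_AWARDS, MAX_AWARDS = 4, 10
MIN_BUDGET, MAX_BUDGET = 500_000, 2_000_000

lowest_total = MIN_AWARDS * MIN_BUDGET    # fewest awards at the smallest budget
highest_total = MAX_AWARDS * MAX_BUDGET   # most awards at the largest budget
print(f"${lowest_total:,} to ${highest_total:,}")  # $2,000,000 to $20,000,000
```

In other words, the first round of worker-focused CHIPS funding tops out in the low tens of millions of dollars.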
The deadline to apply for funding is July 26, which gives applicants less than one month to get their proposals together. Applicants must have a presence in the US but can include for-profit organizations, accredited education institutions, training programs, state and local government agencies, and nonprofit organizations, Natcast’s eligibility requirements said.
Natcast—the nonprofit entity created to operate the National Semiconductor Technology Center Consortium—will manage the WFPA. An FAQ will be provided soon, Natcast said, but in the meantime, the nonprofit is giving a brief window to submit questions about the program. Curious applicants can send questions to wfpa2024@natcast.org until 11:59 pm ET on July 9.
Awardees will be notified by early fall, Natcast said.
Planning the future of US chip workforce
In Natcast’s press release, Deirdre Hanford, Natcast’s CEO, said that the WFPA will “accelerate progress in the US semiconductor industry by tackling its most critical challenges, including the need for a highly skilled workforce that can meet the evolving demands of the industry.”
And the senior manager of Natcast’s workforce development programs, Michael Barnes, said that the WFPA will be critical to accelerating the industry’s growth in the US.
“It is imperative that we develop a domestic semiconductor workforce ecosystem that can support the industry’s anticipated growth and strengthen American national security, economic prosperity, and global competitiveness,” Barnes said.
On Friday, the US Department of Homeland Security announced the formation of an Artificial Intelligence Safety and Security Board that consists of 22 members pulled from the tech industry, government, academia, and civil rights organizations. But given the nebulous nature of the term “AI,” which can apply to a broad spectrum of computer technology, it’s unclear if this group will even be able to agree on what exactly they are safeguarding us from.
President Biden directed DHS Secretary Alejandro Mayorkas to establish the board, which will meet for the first time in early May and subsequently on a quarterly basis.
The fundamental assumption posed by the board’s existence, and reflected in Biden’s AI executive order from October, is that AI is an inherently risky technology and that American citizens and businesses need to be protected from its misuse. Along those lines, the goal of the group is to help guard against foreign adversaries using AI to disrupt US infrastructure; develop recommendations to ensure the safe adoption of AI tech into transportation, energy, and Internet services; foster cross-sector collaboration between government and businesses; and create a forum where AI leaders can share information on AI security risks with the DHS.
It’s worth noting that the ill-defined nature of the term “Artificial Intelligence” does the new board no favors regarding scope and focus. AI can mean many different things: It can power a chatbot, fly an airplane, control the ghosts in Pac-Man, regulate the temperature of a nuclear reactor, or play a great game of chess. It can be all those things and more, and since many of those applications of AI work very differently, there’s no guarantee any two people on the board will be thinking about the same type of AI.
This confusion is reflected in the quotes provided by the DHS press release from new board members, some of whom are already talking about different types of AI. While OpenAI, Microsoft, and Anthropic are monetizing generative AI systems like ChatGPT based on large language models (LLMs), Ed Bastian, the CEO of Delta Air Lines, refers to entirely different classes of machine learning when he says, “By driving innovative tools like crew resourcing and turbulence prediction, AI is already making significant contributions to the reliability of our nation’s air travel system.”
So, defining the scope of what AI exactly means—and which applications of AI are new or dangerous—might be one of the key challenges for the new board.
A roundtable of Big Tech CEOs attracts criticism
For the inaugural meeting of the AI Safety and Security Board, the DHS selected a tech industry-heavy group, populated with CEOs of four major AI vendors (Sam Altman of OpenAI, Satya Nadella of Microsoft, Sundar Pichai of Alphabet, and Dario Amodei of Anthropic), CEO Jensen Huang of top AI chipmaker Nvidia, and representatives from other major tech companies like IBM, Adobe, Amazon, Cisco, and AMD. There are also reps from big aerospace and aviation: Northrop Grumman and Delta Air Lines.
Upon reading the announcement, some critics took issue with the board’s composition. On LinkedIn, Timnit Gebru, founder of The Distributed AI Research Institute (DAIR), criticized OpenAI’s presence on the board in particular, writing, “I’ve now seen the full list and it is hilarious. Foxes guarding the hen house is an understatement.”
On Wednesday, the US House of Representatives passed a bill with a vote of 352–65 that could block TikTok in the US. Fifteen Republicans and 50 Democrats voted in opposition, and one Democrat voted present, CNN reported.
TikTok is not happy. A spokesperson told Ars, “This process was secret and the bill was jammed through for one reason: it’s a ban. We are hopeful that the Senate will consider the facts, listen to their constituents, and realize the impact on the economy, 7 million small businesses, and the 170 million Americans who use our service.”
Under the law—which still must pass the Senate, a more significant hurdle, where less consensus is expected and a companion bill has not yet been introduced—app stores and hosting services would face steep consequences if they provide access to apps controlled by US foreign rivals. That includes providing updates or maintenance to US users who already have the app on their devices.
Violations subject app stores and hosting services to fines of $5,000 for each individual US user “determined to have accessed, maintained, or updated a foreign adversary-controlled application.” With 170 million Americans currently on TikTok, that could add up quickly to eye-popping fines.
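As a rough sketch of the scale involved, multiplying the bill’s per-user penalty by TikTok’s stated US user base gives the theoretical ceiling. This is an illustrative upper bound assuming every user counted toward a violation, not a prediction of actual enforcement:

```python
# Back-of-the-envelope ceiling on fine exposure under the proposed bill.
# Assumes the $5,000 penalty is applied once per US TikTok user -- an
# illustrative maximum, not a forecast of how enforcement would work.
PENALTY_PER_USER = 5_000        # dollars, per the bill text
US_TIKTOK_USERS = 170_000_000   # TikTok's stated US user count

max_exposure = PENALTY_PER_USER * US_TIKTOK_USERS
print(f"${max_exposure:,}")  # $850,000,000,000
```

That works out to $850 billion, which is why even partial enforcement would be ruinous for an app store or hosting provider.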
If the bill becomes law, app stores and hosting services would have 180 days to limit access to foreign adversary-controlled apps. The bill specifically names TikTok and ByteDance as restricted apps, making it clear that lawmakers intend to quash the alleged “national security threat” that TikTok poses in the US.
House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.), a proponent of the bill, has said that “foreign adversaries like China pose the greatest national security threat of our time. With applications like TikTok, these countries are able to target, surveil, and manipulate Americans.” The proposed bill “ends this practice by banning applications controlled by foreign adversaries of the United States that pose a clear national security risk.”
McMorris Rodgers has also made it clear that “our goal is to get this legislation onto the president’s desk.” Joe Biden has indicated he will sign the bill into law, leaving the Senate as the final hurdle to clear. Senators told CNN that they were waiting to see what happened in the House before seeking a path forward in the Senate that would respect TikTok users’ civil liberties.
Attempts to ban TikTok have historically not fared well in the US, with a recent ban in Montana being reversed by a federal judge last December. Judge Donald Molloy granted TikTok’s request for a preliminary injunction, denouncing Montana’s ban as an unconstitutional infringement of Montana-based TikTok users’ rights.
More recently, the American Civil Liberties Union (ACLU) has slammed House lawmakers for rushing the bill through Congress, accusing lawmakers of attempting to stifle free speech. ACLU senior policy counsel Jenna Leventoff said in a press release that lawmakers were “once again attempting to trade our First Amendment rights for cheap political points during an election year.”
“Just because the bill sponsors claim that banning TikTok isn’t about suppressing speech, there’s no denying that it would do just that,” Leventoff said.