He also accused the European Commission of “protectionism” and an “anti-American” attitude.
“If Europe has its own satellite constellation then great, I think the more the better. But more broadly, I think Europe is caught a little bit between the US and China. And it’s sort of time for choosing,” he said.
The European Commission said it had “always enforced and would continue to enforce laws fairly and without discrimination to all companies operating in the EU, in full compliance with global rules.”
Shares in European satellite providers such as Eutelsat and SES soared in recent weeks despite the companies’ heavy debts, in response to the commission saying that Brussels “should fund Ukrainian [military] access to services that can be provided by EU-based commercial providers.”
Industry experts warned that despite the positivity, no single European network could yet compete with Starlink’s offering.
Carr said that European telecoms companies Nokia and Ericsson should move more of their manufacturing to the US as both face being hit with Trump’s import tariffs.
The two companies are the largest vendors of mobile network infrastructure equipment in the US. Carr said there had been a historic “mistake” in US industrial policy, which meant there was no significant American company competing in the telecom vendor market.
“I don’t love that current situation we’re in,” he said.
Carr added that he would “look at” granting the companies faster regulatory clearances on new technology if they moved to the US.
Last month, Ericsson chief executive Börje Ekholm told the FT the company would consider expanding manufacturing in the US depending on how potential tariffs affected it. The Swedish telecoms equipment maker first opened an American factory in Lewisville, Texas, in 2020.
“We’ve been ramping up [production in the US] already. Do we need bigger changes? We will have to see,” Ekholm added.
Nokia said that the US was the company’s “second home.”
“Around 90 percent of all US communications utilizes Nokia equipment at some point. We have five manufacturing sites and five R&D hubs in the US including Nokia Bell Labs,” they added.
Officials blame changing requirements for much of the delay and cost growth. NASA managers dramatically changed their plans for the Gateway program in 2020, when they decided to launch the PPE and HALO on the same rocket, prompting major changes to their designs.
Jared Isaacman, Trump’s nominee for NASA administrator, declined to commit to the Gateway program during a confirmation hearing before the Senate Commerce Committee on April 9. Sen. Ted Cruz (R-Texas), the committee’s chairman, pressed Isaacman on the Lunar Gateway. Cruz is one of the Gateway program’s biggest backers in Congress since it is managed by Johnson Space Center in Texas. If it goes ahead, Gateway would guarantee numerous jobs at NASA’s mission control in Houston throughout its 15-year lifetime.
“That’s an area that if I’m confirmed, I would love to roll up my sleeves and further understand what’s working right?” Isaacman replied to Cruz. “What are the opportunities the Gateway presents to us? And where are some of the challenges, because I think the Gateway is a component of many programs that are over budget and behind schedule.”
The pressure shell for the Habitation and Logistics Outpost (HALO) module arrived in Gilbert, Arizona, last week for internal outfitting. Credit: NASA/Josh Valcarcel
Checking in with Gateway
Nevertheless, the Gateway program achieved a milestone one week before Isaacman’s confirmation hearing. The metallic pressure shell for the HALO module was shipped from its factory in Italy to Arizona. The HALO module is only partially complete, and it lacks life support systems and other hardware it needs to operate in space.
Over the next couple of years, Northrop Grumman will outfit the habitat with those components and connect it with the Power and Propulsion Element under construction at Maxar Technologies in Silicon Valley. This stage of spacecraft assembly, along with prelaunch testing, often uncovers problems that can drive up costs and trigger more delays.
Ars recently spoke with Jon Olansen, a bio-mechanical engineer and veteran space shuttle flight controller who now manages the Gateway program at Johnson Space Center. A transcript of our conversation with Olansen is below. It is lightly edited for clarity and brevity.
Ars: The HALO module has arrived in Arizona from Italy. What’s next?
Olansen: This HALO module went through significant effort from the primary and secondary structure perspective out at Thales Alenia Space in Italy. That was most of their focus in getting the vehicle ready to ship to Arizona. Now that it’s in Arizona, Northrop is setting it up in their facility there in Gilbert to be able to do all of the outfitting of the systems we need to actually execute the missions we want to do, keep the crew safe, and enable the science that we’re looking to do. So, if you consider your standard spacecraft, you’re going to have all of your command-and-control capabilities, your avionics systems, your computers, your network management, all of the things you need to control the vehicle. You’re going to have your power distribution capabilities. HALO attaches to the Power and Propulsion Element, and it provides the primary power distribution capability for the entire station. So that’ll all be part of HALO. You’ll have your standard thermal systems for active cooling. You’ll have the vehicle environmental control systems that will need to be installed, [along with] some of the other crew systems that you can think of, from lighting, restraint, mobility aids, all the different types of crew systems. Then, of course, all of our science aspects. So we have payload lockers, both internally, as well as payload sites external that we’ll have available, so pretty much all the different systems that you would need for a human-rated spacecraft.
Ars: What’s the latest status of the Power and Propulsion Element?
Olansen: PPE is fairly well along in their assembly and integration activities. The central cylinder has been integrated with the propulsion tanks… Their propulsion module is in good shape. They’re working on the avionics shelves associated with that spacecraft. So, with both vehicles, we’re really trying to get the assembly done in the next year or so, so we can get into integrated spacecraft testing at that point in time.
Ars: What’s in the critical path in getting to the launch pad?
Olansen: The assembly and integration activity is really the key for us. It’s to get to the full vehicle level test. All the different activities that we’re working on across the vehicles are making substantive progress. So, it’s a matter of bringing them all in and doing the assembly and integration in the appropriate sequences, so that we get the vehicles put together the way we need them and get to the point where we can actually power up the vehicles and do all the testing we need to do. Obviously, software is a key part of that development activity, once we power on the vehicles, making sure we can do all the control work that we need to do for those vehicles.
[There are] a couple of key pieces I will mention along those lines. On the PPE side, we have the electrical propulsion system. The thrusters associated with that system are being delivered. Those will go through acceptance testing at the Glenn Research Center [in Ohio] and then be integrated on the spacecraft out at Maxar; so that work is ongoing as we speak. Out at ESA, ESA is providing the HALO lunar communication system. That’ll be delivered later this year. That’ll be installed on HALO as part of its integrated test and checkout and then launch on HALO. That provides the full communication capability down to the lunar surface for us, where PPE provides the communication capability back to Earth. So, those are key components that we’re looking to get delivered later this year.
Jon Olansen, manager of NASA’s Gateway program at Johnson Space Center in Houston. Credit: NASA/Andrew Carlsen
Ars: What’s the status of the electric propulsion thrusters for the PPE?
Olansen: The first one has actually been delivered already, so we’ll have the opportunity to go through, like I said, the acceptance testing for those. The other flight units are right on the heels of the first one that was delivered. They’ll make it through their acceptance testing, then get delivered to Maxar, like I said, for integration into PPE. So, that work is already in progress. [The Power and Propulsion Element will have three xenon-fueled 12-kilowatt Hall thrusters produced by Aerojet Rocketdyne, and four smaller 6-kilowatt thrusters.]
Ars: The Government Accountability Office (GAO) outlined concerns last year about keeping the mass of Gateway within the capability of its rocket. Has there been any progress on that issue? Will you need to remove components from the HALO module and launch them on a future mission? Will you narrow your launch windows to only launch on the most fuel-efficient trajectories?
Olansen: We’re working the plan. Now that we’re launching the two vehicles together, we’re working mass management. Mass management is always an issue with spacecraft development, so it’s no different for us. All of the things you described are all knobs that are in the trade space as we proceed, but fundamentally, we’re working to design the optimal spacecraft that we can, first. So, that’s the key part. As we get all the components delivered, we can measure mass across all of those components, understand what our integrated mass looks like, and we have several different options to make sure that we’re able to execute the mission we need to execute. All of those will be balanced over time based on the impacts that are there. There’s not a need for a lot of those decisions to happen today. Those that are needed from a design perspective, we’ve already made. Those that are needed from enabling future decisions, we’ve already made all of those. So, really, what we’re working through is being able to, at the appropriate time, make decisions necessary to fly the vehicle the way we need to, to get out to NRHO [Near Rectilinear Halo Orbit, an elliptical orbit around the Moon], and then be able to execute the Artemis missions in the future.
Ars: The GAO also discussed a problem with Gateway’s controllability with something as massive as Starship docked to it. What’s the latest status of that problem?
Olansen: There are a number of different risks that we work through as a program, as you’d expect. We continue to look at all possibilities and work through them with due diligence. That’s our job, to be able to do that on a daily basis. With the stack controllability [issue], where that came from for GAO, we were early in the assessments of what the potential impacts could be from visiting vehicles, not just any one [vehicle] but any visiting vehicle. We’re a smaller space station than ISS, so making sure we understand the implications of thruster firings as vehicles approach the station, and the implications associated with those, is where that stack controllability conversation came from.
The bus that Maxar typically designs doesn’t have to generally deal with docking. Part of what we’ve been doing is working through ways that we can use the capabilities that are already built into that spacecraft differently to provide us the control authority we need when we have visiting vehicles, as well as working with the visiting vehicles and their design to make sure that they’re minimizing the impact on the station. So, the combination of those two has largely, over the past year since that report came out, improved where we are from a stack controllability perspective. We still have forward work to close out all of the different potential cases that are there. We’ll continue to work through those. That’s standard forward work, but we’ve been able to make some updates, some software updates, some management updates, and logic updates that really allow us to control the stack effectively and have the right amount of control authority for the dockings and undockings that we will need to execute for the missions.
“It’s making our work unsafe, and it’s unsanitary for any workplace,” but especially an active laboratory full of fire-reactive chemicals and bacteria, one Montlake researcher said.
Press officers at NOAA, the Commerce Department, and the White House did not respond to requests for comment.
Montlake employees were informed last week that a contract for safety services — which includes the staff who move laboratory waste off-campus to designated disposal sites — would lapse after April 9, leaving just one person responsible for this task. Hazardous waste “pickups from labs may be delayed,” employees were warned in a recent email.
The building maintenance team’s contract expired Wednesday, which decimated the staff that had handled plumbing, HVAC, and the elevators. Other contracts lapsed in late March, leaving the Seattle lab with zero janitorial staff and a skeleton crew of IT specialists.
During a big staff meeting at Montlake on Wednesday, lab leaders said they had no updates on when the contracts might be renewed, one researcher said. They also acknowledged it was unfair that everyone would need to pitch in on janitorial duties on top of their actual jobs.
Nick Tolimieri, a union representative for Montlake employees, said the problem is “all part of the large-scale bullying program” to push out federal workers. It seems like every Friday “we get some kind of message that makes you unable to sleep for the entire weekend,” he said. Now, with these lapsed contracts, it’s getting “more and more petty.”
The problems, large and small, at Montlake provide a case study of the chaos that’s engulfed federal workers across many agencies as the Trump administration has fired staff, dumped contracts, and eliminated long-time operational support. Yesterday, hundreds of NOAA workers who had been fired in February, then briefly reinstated, were fired again.
Dolphins are generally regarded as some of the smartest creatures on the planet. Research has shown they can cooperate, teach each other new skills, and even recognize themselves in a mirror. For decades, scientists have attempted to make sense of the complex collection of whistles and clicks dolphins use to communicate. Researchers might make a little headway on that front soon with the help of Google’s open AI model and some Pixel phones.
Google has been finding ways to work generative AI into everything else it does, so why not its collaboration with the Wild Dolphin Project (WDP)? This group has been studying dolphins since 1985 using a non-invasive approach to track a specific community of Atlantic spotted dolphins. The WDP creates video and audio recordings of dolphins, along with correlating notes on their behaviors.
One of the WDP’s main goals is to analyze the way dolphins vocalize and how that can affect their social interactions. With decades of underwater recordings, researchers have managed to connect some basic activities to specific sounds. For example, Atlantic spotted dolphins have signature whistles that appear to be used like names, allowing two specific individuals to find each other. They also consistently produce “squawk” sound patterns during fights.
WDP researchers believe that understanding the structure and patterns of dolphin vocalizations is necessary to determine if their communication rises to the level of a language. “We do not know if animals have words,” says WDP’s Denise Herzing.
An overview of DolphinGemma
The ultimate goal is to speak dolphin, if indeed there is such a language. The pursuit of this goal has led WDP to create a massive, meticulously labeled data set, which Google says is perfect for analysis with generative AI.
Meet DolphinGemma
The large language models (LLMs) that have become unavoidable in consumer tech are essentially predicting patterns. You provide them with an input, and the models predict the next token over and over until they have an output. When a model has been trained effectively, that output can sound like it was created by a person. Google and WDP hope it’s possible to do something similar with DolphinGemma for marine mammals.
DolphinGemma is based on Google’s Gemma open AI models, which are themselves built on the same foundation as the company’s commercial Gemini models. The dolphin communication model uses a Google-developed audio technology called SoundStream to tokenize dolphin vocalizations, allowing the sounds to be fed into the model as they’re recorded.
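DolphinGemma’s internals aren’t public beyond what Google has described, but the predict-the-next-token loop mentioned above can be sketched generically. In the toy Python below, toy_model and its 16-token vocabulary are made-up stand-ins, not the real SoundStream tokenizer or Gemma weights; the sketch only illustrates the autoregressive pattern of appending one predicted token at a time.

```python
import random

def toy_model(tokens: list[int]) -> list[float]:
    """Return a fake probability distribution over a 16-token vocabulary."""
    random.seed(sum(tokens))                  # deterministic for a given context
    weights = [random.random() for _ in range(16)]
    total = sum(weights)
    return [w / total for w in weights]

def generate(prompt: list[int], steps: int = 8) -> list[int]:
    """Greedily append the most likely next token, one step at a time."""
    tokens = list(prompt)
    for _ in range(steps):
        probs = toy_model(tokens)
        next_token = max(range(len(probs)), key=probs.__getitem__)
        tokens.append(next_token)
    return tokens

print(generate([3, 7, 1]))   # the prompt plus eight predicted tokens
```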
In our new 3-part series, we remember the people and ideas that made the Internet.
Credit: Collage by Aurich Lawson
In a very real sense, the Internet, this marvelous worldwide digital communications network that you’re using right now, was created because one man was annoyed at having too many computer terminals in his office.
The year was 1966. Robert Taylor was the director of the Advanced Research Projects Agency’s Information Processing Techniques Office. The agency was created in 1958 by President Eisenhower in response to the launch of Sputnik. So Taylor was in the Pentagon, a great place for acronyms like ARPA and IPTO. He had three massive terminals crammed into a room next to his office. Each one was connected to a different mainframe computer. They all worked slightly differently, and it was frustrating to remember multiple procedures to log in and retrieve information.
Author’s re-creation of Bob Taylor’s office with three teletypes. Credit: Rama & Musée Bolo (Wikipedia/Creative Commons), steve lodefink (Wikipedia/Creative Commons), The Computer Museum @ System Source
In those days, computers took up entire rooms, and users accessed them through teletype terminals—electric typewriters hooked up to either a serial cable or a modem and a phone line. ARPA was funding multiple research projects across the United States, but users of these different systems had no way to share their resources with each other. Wouldn’t it be great if there was a network that connected all these computers?
The dream is given form
Taylor’s predecessor, Joseph “J.C.R.” Licklider, had released a memo in 1963 that whimsically described an “Intergalactic Computer Network” that would allow users of different computers to collaborate and share information. The idea was mostly aspirational, and Licklider wasn’t able to turn it into a real project. But Taylor knew that he could.
In a 1998 interview, Taylor explained: “In most government funding, there are committees that decide who gets what and who does what. In ARPA, that was not the way it worked. The person who was responsible for the office that was concerned with that particular technology—in my case, computer technology—was the person who made the decision about what to fund and what to do and what not to do. The decision to start the ARPANET was mine, with very little or no red tape.”
Taylor marched into the office of his boss, Charles Herzfeld. He described how a network could save ARPA time and money by allowing different institutions to share resources. He suggested starting with a small network of four computers as a proof of concept.
“Is it going to be hard to do?” Herzfeld asked.
“Oh no. We already know how to do it,” Taylor replied.
“Great idea,” Herzfeld said. “Get it going. You’ve got a million dollars more in your budget right now. Go.”
Taylor wasn’t lying—at least, not completely. At the time, there were multiple people around the world thinking about computer networking. Paul Baran, working for RAND, published a paper in 1964 describing how a distributed military networking system could be made resilient even if some nodes were destroyed in a nuclear attack. Over in the UK, Donald Davies independently came up with a similar concept (minus the nukes) and invented a term for the way these types of networks would communicate. He called it “packet switching.”
On a regular phone network, after some circuit switching, a caller and answerer would be connected via a dedicated wire. They had exclusive use of that wire until the call was completed. Computers, by contrast, communicated in short bursts separated by long stretches of silence, so it would be a waste for two of them to tie up a whole line for extended periods. But how could many computers talk at the same time without their messages getting mixed up?
Packet switching was the answer. Messages were divided into multiple snippets. The order and destination were included with each message packet. The network could then route the packets in any way that made sense. At the destination, all the appropriate packets were put into the correct order and reassembled. It was like moving a house across the country: It was more efficient to send all the parts in separate trucks, each taking its own route to avoid congestion.
A simplified diagram of how packet switching works. Credit: Jeremy Reimer
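For readers who want to see the idea in miniature, here is a toy Python sketch of packet switching. The packet format, sizes, and destination name are invented for illustration; real networks add headers, checksums, and per-hop routing decisions that are omitted here.

```python
import random

def split_into_packets(message: str, size: int, dest: str) -> list[dict]:
    """Chop a message into numbered packets, each tagged with its destination."""
    count = (len(message) + size - 1) // size
    return [{"dest": dest, "seq": i, "data": message[i * size:(i + 1) * size]}
            for i in range(count)]

def reassemble(packets: list[dict]) -> str:
    """Put packets back in order by sequence number and rejoin the data."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = split_into_packets("LOGIN PLEASE", size=4, dest="ucla-host")
random.shuffle(packets)        # simulate packets taking different routes
print(reassemble(packets))     # prints "LOGIN PLEASE"
```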
By the end of 1966, Taylor had hired a program director, Larry Roberts. Roberts sketched a diagram of a possible network on a napkin and met with his team to propose a design. One problem was that each computer on the network would need to use a big chunk of its resources to manage the packets. In a meeting, Wes Clark passed a note to Roberts saying, “You have the network inside-out.” Clark’s alternative plan was to ship a bunch of smaller computers to connect to each host. These dedicated machines would do all the hard work of creating, moving, and reassembling packets.
With the design complete, Roberts sent out a request for proposals for constructing the ARPANET. All they had to do now was pick the winning bid, and the project could begin.
BB&N and the IMPs
IBM, Control Data Corporation, and AT&T were among the first to respond to the request. They all turned it down. Their reasons were the same: None of these giant companies believed the network could be built. IBM and CDC thought the dedicated computers would be too expensive, but AT&T flat-out said that packet switching wouldn’t work on its phone network.
In late 1968, ARPA announced a winner for the bid: Bolt Beranek and Newman. It seemed like an odd choice. BB&N had started as a consulting firm that calculated acoustics for theaters. But the need for calculations led to the creation of a computing division, and its first manager had been none other than J.C.R. Licklider. In fact, some BB&N employees had been working on a plan to build a network even before the ARPA bid was sent out. Robert Kahn led the team that drafted BB&N’s proposal.
Their plan was to create a network of “Interface Message Processors,” or IMPs, out of Honeywell 516 computers. They were ruggedized versions of the DDP-516 16-bit minicomputer. Each had 24 kilobytes of core memory and no mass storage other than a paper tape reader, and each cost $80,000 (about $700,000 today). In comparison, an IBM 360 mainframe cost between $7 million and $12 million at the time.
An original IMP, the world’s first router. It was the size of a large refrigerator. Credit: Steve Jurvetson (CC BY 2.0)
The 516’s rugged appearance appealed to BB&N, which didn’t want a bunch of university students tampering with its IMPs. The computer came with no operating system, nor did it really have enough RAM for one. The software to control the IMPs was written on bare metal using the 516’s assembly language. One of the developers was Will Crowther, who went on to create the first computer adventure game.
One other hurdle remained before the IMPs could be put to use: The Honeywell design was missing certain components needed to handle input and output. BB&N employees were dismayed that the first 516, which they named IMP-0, didn’t have working versions of the hardware additions they had requested.
It fell on Ben Barker, a brilliant undergrad student interning at BB&N, to manually fix the machine. Barker was the best choice, even though he had slight palsy in his hands. After several stressful 16-hour days wrapping and unwrapping wires, all the changes were complete and working. IMP-0 was ready.
In the meantime, Steve Crocker at the University of California, Los Angeles, was working on a set of software specifications for the host computers. It wouldn’t matter if the IMPs were perfect at sending and receiving messages if the computers themselves didn’t know what to do with them. Because the host computers were part of important academic research, Crocker didn’t want to seem like he was a dictator telling people what to do with their machines. So he titled his draft a “Request for Comments,” or RFC.
This one act of politeness forever changed the nature of computing. Every change since has been done as an RFC, and the culture of asking for comments pervades the tech industry even today.
RFC No. 1 proposed two types of host software. The first was the simplest possible interface, in which a computer pretended to be a dumb terminal. This was dubbed a “terminal emulator,” and if you’ve ever done any administration on a server, you’ve probably used one. The second was a more complex protocol that could be used to transfer large files. This became FTP, which is still used today.
A single IMP connected to one computer wasn’t much of a network. So it was very exciting in September 1969 when IMP-1 was delivered to BB&N and then shipped via air freight to UCLA. The first test of the ARPANET was done with simultaneous phone support. The plan was to type “LOGIN” to start a login sequence. This was the exchange:
“Did you get the L?”
“I got the L!”
“Did you get the O?”
“I got the O!”
“Did you get the G?”
“Oh no, the computer crashed!”
It was an inauspicious beginning. The computer on the other end was helpfully filling in the “GIN” part of “LOGIN,” but the terminal emulator wasn’t expecting three characters at once and locked up. It was the first time that autocomplete had ruined someone’s day. The bug was fixed, and the test completed successfully.
IMP-2, IMP-3, and IMP-4 were delivered to the Stanford Research Institute (where Doug Engelbart was keen to expand his vision of connecting people), UC Santa Barbara, and the University of Utah.
Now that the four-node test network was complete, the team at BB&N could work with the researchers at each node to put the ARPANET through its paces. They deliberately created the first ever denial of service attack in January 1970, flooding the network with packets until it screeched to a halt.
The original ARPANET, predecessor of the Internet. Circles are IMPs, and rectangles are computers. Credit: DARPA
Surprisingly, many of the administrators of the early ARPANET nodes weren’t keen to join the network. They didn’t like the idea of anyone else being able to use resources on “their” computers. Taylor reminded them that their hardware and software projects were mostly ARPA-funded, so they couldn’t opt out.
The next month, Stephen Carr, Stephen Crocker, and Vint Cerf released RFC No. 33. It described a Network Control Protocol (NCP) that standardized how the hosts would communicate with each other. After this was adopted, the network was off and running.
J.C.R. Licklider, Bob Taylor, Larry Roberts, Steve Crocker, and Vint Cerf. Credit: US National Library of Medicine, WIRED, Computer Timeline, Steve Crocker, Vint Cerf
The ARPANET grew significantly over the next few years. Important events included the first ever email between two different computers, sent by Ray Tomlinson in July 1972. Another groundbreaking demonstration involved a PDP-10 at Harvard simulating, in real time, an aircraft landing on a carrier. The data was sent over the ARPANET to an MIT-based graphics terminal, and the wireframe graphical view was shipped back to a PDP-1 at Harvard and displayed on a screen. Although it was primitive and slow, it was technically the first gaming stream.
A big moment came in October 1972 at the International Conference on Computer Communication. This was the first time the network had been demonstrated to the public. Interest in the ARPANET was growing, and people were excited. A group of AT&T executives noticed a brief crash and laughed, confident that they were correct in thinking that packet switching would never work. Overall, however, the demonstration was a resounding success.
But the ARPANET was no longer the only network out there.
The rest of the world had not been standing still. In Hawaii, Norman Abramson and Franklin Kuo created ALOHAnet, which connected computers on the islands using radio. It was the first public demonstration of a wireless packet switching network. In the UK, Donald Davies’ team developed the National Physical Laboratory (NPL) network. It seemed like a good idea to start connecting these networks together, but they all used different protocols, packet formats, and transmission rates. In 1972, the heads of several national networking projects created an International Networking Working Group. Cerf was chosen to lead it.
The first attempt to bridge this gap was SATNET, also known as the Atlantic Packet Satellite Network. Using satellite links, it connected the US-based ARPANET with networks in the UK. Unfortunately, SATNET itself used its own set of protocols. In true tech fashion, an attempt to make a universal standard had created one more standard instead.
Robert Kahn asked Vint Cerf to try and fix these problems once and for all. They came up with a new plan called the Transmission Control Protocol, or TCP. The idea was to connect different networks through specialized computers, called “gateways,” that translated and forwarded packets. TCP was like an envelope for packets, making sure they got to the right destination on the correct network. Because some networks were not guaranteed to be reliable, when one computer successfully received a complete and undamaged message, it would send an acknowledgement (ACK) back to the sender. If the ACK wasn’t received in a certain amount of time, the message was retransmitted.
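The acknowledge-and-retransmit behavior described above is essentially a stop-and-wait scheme, sketched below in toy Python. The unreliable_send function simulates a lossy link and is not taken from any real TCP implementation; the point is simply that a missing ACK triggers a resend.

```python
import random

def unreliable_send(packet: str) -> bool:
    """Pretend to send a packet over a lossy link; True means an ACK came back."""
    return random.random() > 0.3          # roughly 30 percent of sends are 'lost'

def send_with_retransmit(packet: str, max_tries: int = 5) -> bool:
    """Keep resending until an ACK arrives or we give up."""
    for attempt in range(1, max_tries + 1):
        if unreliable_send(packet):
            print(f"{packet}: ACK received on attempt {attempt}")
            return True
        print(f"{packet}: no ACK, retransmitting")
    return False

for p in ["packet-0", "packet-1", "packet-2"]:
    send_with_retransmit(p)
```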
In December 1974, Cerf, Yogen Dalal, and Carl Sunshine wrote a complete specification for TCP. Two years later, Cerf and Kahn, along with a dozen others, demonstrated the first three-network system. The demo connected packet radio, the ARPANET, and SATNET, all using TCP. Afterward, Cerf, Jon Postel, and Danny Cohen suggested a small but important change: They should take out all the routing information and put it into a new protocol, called the Internet Protocol (IP). All the remaining stuff, like breaking and reassembling messages, detecting errors, and retransmission, would stay in TCP. Thus, in 1978, the protocol officially became known as, and was forever thereafter, TCP/IP.
A map of the Internet in 1977. White dots are IMPs, and rectangles are host computers. Jagged lines connect to other networks. Credit: The Computer History Museum
If the story of creating the Internet was a movie, the release of TCP/IP would have been the triumphant conclusion. But things weren’t so simple. The world was changing, and the path ahead was murky at best.
At the time, joining the ARPANET required leasing high-speed phone lines for $100,000 per year. This limited it to large universities, research companies, and defense contractors. The situation led the National Science Foundation (NSF) to propose a new network that would be cheaper to operate. Other educational networks arose at around the same time. While it made sense to connect these networks to the growing Internet, there was no guarantee that this would continue. And there were other, larger forces at work.
By the end of the 1970s, computers had improved significantly. The invention of the microprocessor set the stage for smaller, cheaper computers that were just beginning to enter people’s homes. Bulky teletypes were being replaced with sleek, TV-like terminals. The first commercial online service, CompuServe, was released to the public in 1979. For just $5 per hour, you could connect to a private network, get weather and financial reports, and trade gossip with other users. At first, these systems were completely separate from the Internet. But they grew quickly. By 1987, CompuServe had 380,000 subscribers.
A magazine ad for CompuServe from 1980. Credit: marbleriver
Meanwhile, the adoption of TCP/IP was not guaranteed. At the beginning of the 1980s, the Open Systems Interconnection (OSI) group at the International Organization for Standardization (ISO) decided that what the world needed was more acronyms—and also a new, global, standardized networking model.
The OSI model was first drafted in 1980, but it wasn’t published until 1984. Nevertheless, many European governments, and even the US Department of Defense, planned to transition from TCP/IP to OSI. It seemed like this new standard was inevitable.
The seven-layer OSI model. If you ever thought there were too many layers, you’re not alone. Credit: BlueCat Networks
While the world waited for OSI, the Internet continued to grow and evolve. In 1981, the fourth version of the IP protocol, IPv4, was released. On January 1, 1983, the ARPANET itself fully transitioned to using TCP/IP. This date is sometimes referred to as the “birth of the Internet,” although from a user’s perspective, the network still functioned the same way it had for years.
A map of the Internet from 1982. Ovals are networks, and rectangles are gateways. Hosts are not shown, but number in the hundreds. Note the appearance of modern-looking IPv4 addresses. Credit: Jon Postel
In 1986, the NSFNET came online, running under TCP/IP and connected to the rest of the Internet. It also used a new standard, the Domain Name System (DNS). This system, still in use today, mapped easy-to-remember names to a machine’s numerical IP address. Computer names were assigned “top-level” domains based on their purpose, so you could connect to “frodo.edu” at an educational institution, or “frodo.gov” at a governmental one.
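The name-to-address mapping can be pictured as a simple lookup table, as in the toy sketch below. Real DNS is a distributed, hierarchical database; the host names are borrowed from the example above, and the addresses come from the reserved documentation range, so nothing here reflects real records.

```python
# Toy lookup table; addresses are from the 192.0.2.0/24 documentation range.
DNS_TABLE = {
    "frodo.edu": "192.0.2.10",   # an educational host
    "frodo.gov": "192.0.2.20",   # a governmental host
}

def resolve(name: str) -> str:
    """Return the IP address registered for a name, if any."""
    if name not in DNS_TABLE:
        raise LookupError(f"no record for {name}")
    return DNS_TABLE[name]

print(resolve("frodo.edu"))      # 192.0.2.10
```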
The NSFNET grew rapidly, dwarfing the ARPANET in size. In 1989, the original ARPANET was decommissioned. The IMPs, long since obsolete, were retired. However, all the ARPANET hosts were successfully migrated to other Internet networks. Like a Ship of Theseus, the ARPANET lived on even after every component of it was replaced.
The exponential growth of the ARPANET/Internet during its first two decades. Credit: Jeremy Reimer
Still, the experts and pundits predicted that all of these systems would eventually have to transfer over to the OSI model. The people who had built the Internet were not impressed. In 1987, writing RFC No. 1000, Crocker said, “If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required.”
The Internet pioneers felt they had spent many years refining and improving a working system. But now, OSI had arrived with a bunch of complicated standards and expected everyone to adopt their new design. Vint Cerf had a more pragmatic outlook. In 1982, he left ARPA for a new job at MCI, where he helped build the first commercial email system (MCI Mail) that was connected to the Internet. While at MCI, he contacted researchers at IBM, Digital, and Hewlett-Packard and convinced them to experiment with TCP/IP. Leadership at these companies still officially supported OSI, however.
The debate raged on through the latter half of the 1980s and into the early 1990s. Tired of the endless arguments, Cerf contacted the head of the National Institute of Standards and Technology (NIST) and asked him to write a blue ribbon report comparing OSI and TCP/IP. Meanwhile, while planning a successor to IPv4, the Internet Advisory Board (IAB) was looking at the OSI Connectionless Network Protocol and its 128-bit addressing for inspiration. In an interview with Ars, Vint Cerf explained what happened next.
“It was deliberately misunderstood by firebrands in the IETF [Internet Engineering Task Force] that we are traitors by adopting OSI,” he said. “They raised a gigantic hoo-hah. The IAB was deposed, and the authority in the system flipped. IAB used to be the decision makers, but the fight flips it, and IETF becomes the standard maker.”
To calm everybody down, Cerf performed a striptease at a meeting of the IETF in 1992. He revealed a T-shirt that said “IP ON EVERYTHING.” At the same meeting, David Clark summarized the feelings of the IETF by saying, “We reject kings, presidents, and voting. We believe in rough consensus and running code.”
Vint Cerf strips down to the bare essentials. Credit: Boardwatch and Light Reading
The fate of the Internet
The split design of TCP/IP, which was a small technical choice at the time, had long-lasting political implications. In 2001, David Clark and Marjory Blumenthal wrote a paper that looked back on the Protocol War. They noted that the Internet’s complex functions were performed at the endpoints, while the network itself ran only the IP part and was concerned simply with moving data from place to place. These “end-to-end principles” formed the basis of “… the ‘Internet Philosophy’: freedom of action, user empowerment, end-user responsibility for actions undertaken, and lack of controls ‘in’ the Net that limit or regulate what users can do,” they said.
In other words, the battle between TCP/IP and OSI wasn’t just about two competing sets of acronyms. On the one hand, you had a small group of computer scientists who had spent many years building a relatively open network and wanted to see it continue under their own benevolent guidance. On the other hand, you had a huge collective of powerful organizations that believed they should be in charge of the future of the Internet—and maybe the behavior of everyone on it.
But this seemingly intractable argument, and the ultimate fate of the Internet, were about to be decided, and not by governments, committees, or even the IETF. The world was changed forever by the actions of one man. He was a mild-mannered computer scientist, born in England and working for a physics research institute in Switzerland.
That’s the story covered in the next article in our series.
Agents using debugging tools drastically outperformed those that didn’t, but their success rate still wasn’t high enough. Credit: Microsoft Research
This approach is much more successful than relying on the models as they’re usually used, but when your best case is a 48.4 percent success rate, you’re not ready for primetime. The limitations are likely because the models don’t fully understand how to best use the tools, and because their current training data is not tailored to this use case.
“We believe this is due to the scarcity of data representing sequential decision-making behavior (e.g., debugging traces) in the current LLM training corpus,” the blog post says. “However, the significant performance improvement… validates that this is a promising research direction.”
This initial report is just the start of the efforts, the post claims. The next step is to “fine-tune an info-seeking model specialized in gathering the necessary information to resolve bugs.” If the model is large, the best move to save inference costs may be to “build a smaller info-seeking model that can provide relevant information to the larger one.”
This isn’t the first time we’ve seen outcomes that suggest some of the ambitious ideas about AI agents directly replacing developers are pretty far from reality. There have been numerous studies already showing that even though an AI tool can sometimes create an application that seems acceptable to the user for a narrow task, the models tend to produce code laden with bugs and security vulnerabilities, and they aren’t generally capable of fixing those problems.
This is an early step on the path to AI coding agents, but most researchers agree it remains likely that the best outcome is an agent that saves a human developer a substantial amount of time, not one that can do everything they can do.
After declaring the FTC to be under White House control, Trump fired both Democratic members despite a US law and Supreme Court precedent stating that the president cannot fire commissioners without good cause.
House Commerce Committee leaders said the all-Republican FTC will end the “partisan mismanagement” allegedly seen under the Biden-era FTC and then-Chair Lina Khan. “In the last administration, the FTC abandoned its rich bipartisan tradition and historical mission, in favor of a radical agenda and partisan mismanagement,” said a statement issued by Reps. Brett Guthrie (R-Ky.) and Gus Bilirakis (R-Fla.). “The Commission needs to return to protecting Americans from bad actors and preserving competition in the marketplace.”
Consumer advocacy group Public Knowledge thanked Senate Democrats for voting against Meador. “In order for the FTC to be effective, it needs to have five independent commissioners doing the work,” said Sara Collins, the group’s director of government affairs. “By voting ‘no’ on this confirmation, these senators have shown that it is still important to prioritize protecting consumers and supporting a healthier marketplace over turning a blind eye to President Trump’s unlawful termination of Democratic Commissioners Slaughter and Bedoya.”
Democrats sue Trump
The two Democrats are challenging the firings in a lawsuit that said “it is bedrock, binding precedent that a President cannot remove an FTC Commissioner without cause.” Trump “purported to terminate Plaintiffs as FTC Commissioners, not because they were inefficient, neglectful of their duties, or engaged in malfeasance, but simply because their ‘continued service on the FTC is’ supposedly ‘inconsistent with [his] Administration’s priorities,'” the lawsuit said.
US law says an FTC commissioner “may be removed by the President for inefficiency, neglect of duty, or malfeasance in office.” A 1935 Supreme Court ruling said that “Congress intended to restrict the power of removal to one or more of those causes.”
Slaughter and Bedoya sued Trump in US District Court for the District of Columbia and asked the court to declare “the President’s purported termination of Plaintiffs Slaughter and Bedoya unlawful and that Plaintiffs Slaughter and Bedoya are Commissioners of the Federal Trade Commission.”
The tariff fees are typically paid on a product’s declared value rather than the retail cost. So a $170 price bump could be close to what the company’s US arm will pay to import the Watch 3 in the midst of a trade war. Many technology firms have attempted to stockpile products in the US ahead of tariffs, but it’s possible OnePlus simply couldn’t do that because it had to fix its typo.
Losing its greatest advantage?
Like past OnePlus wearables, the Watch 3 is a chunky, high-power device with a stainless steel case. It sports a massive 1.5-inch OLED screen, the latest Snapdragon W5 wearable processor, 32GB of storage, and 2GB of RAM. It runs Google’s Wear OS for smart features, but it also has a dialed-back power-saving mode that runs separate RTOS software. This robust hardware adds to the manufacturing cost, which also means higher tariffs now. As it currently stands, the Watch 3 is just too expensive given the competition.
OnePlus has managed to piece together a growing ecosystem of devices, including phones, tablets, earbuds, and, yes, smartwatches. With a combination of competitive prices and high-end specs, it successfully established a foothold in the US market, something few Chinese OEMs have accomplished.
The implications go beyond wearables. OnePlus also swings for the fences with its phone hardware, using the best Arm chips and expensive, high-end OLED panels. OnePlus tends to price its phones lower than similar Samsung and Google hardware, so it doesn’t make as much on each phone. If the tariffs stick, that strategy could be unviable.
This week, as part of the process to develop a budget for fiscal-year 2026, the Trump White House shared the draft version of its budget request for NASA with the space agency.
This initial version of the administration’s budget request calls for an approximately 20 percent overall cut to the agency’s budget across the board, effectively $5 billion from an overall topline of about $25 billion. However, the majority of the cuts are concentrated within the agency’s Science Mission Directorate, which oversees all planetary science, Earth science, astrophysics research, and more.
According to the “passback” documents given to NASA officials on Thursday, the space agency’s science programs would receive nearly a 50 percent cut in funding. After the agency received $7.5 billion for science in fiscal-year 2025, the Trump administration has proposed a science topline budget of just $3.9 billion for the coming fiscal year.
Detailing the cuts
Among the proposals were a two-thirds cut to astrophysics, down to $487 million; a greater than two-thirds cut to heliophysics, down to $455 million; a greater than 50 percent cut to Earth science, down to $1.033 billion; and a 30 percent cut to planetary science, down to $1.929 billion.
Although the budget would continue support for ongoing missions such as the Hubble Space Telescope and the James Webb Space Telescope, it would kill the much-anticipated Nancy Grace Roman Space Telescope, an observatory seen as on par with those two world-class instruments that is already fully assembled and on budget for a launch in two years.
“Passback supports continued operation of the Hubble and James Webb Space Telescopes and assumes no funding is provided for other telescopes,” the document states.
Tesla has a lot riding on the swift success of its so-called Full Self-Driving software.
Credit: Kai Eckhardt/picture alliance via Getty Images
Job cuts at the US traffic safety regulator instigated by Elon Musk’s so-called Department of Government Efficiency disproportionately hit staff assessing self-driving risks, hampering oversight of technology on which the world’s richest man has staked the future of Tesla.
Of roughly 30 National Highway Traffic Safety Administration workers dismissed in February as part of Musk’s campaign to shrink the federal workforce, many were in the “office of vehicle automation safety,” people familiar with the situation told the Financial Times.
The cuts are part of mass firings by Doge that have affected at least 20,000 federal employees and raised widespread concern over potential conflicts of interest for Musk given many of the targeted agencies regulate or have contracts with his businesses.
The NHTSA, which has been a thorn in Tesla’s side for years, has eight active investigations into the company after receiving—and publishing—more than 10,000 complaints from members of the public.
Morale at the agency, which has ordered dozens of Tesla recalls and delayed the rollout of the group’s self-driving and driver-assistance software, has plunged following Doge’s opening salvo of job cuts, according to current and former NHTSA staff.
“There is a clear conflict of interest in allowing someone with a business interest influence over appointments and policy at the agency regulating them,” said one former senior NHTSA figure, who was not among the Doge-led layoffs.
Remaining agency employees are now warily watching the experience of other federal regulators that have crossed Musk’s companies.
“Musk has attacked the Federal Aviation Administration and Federal Communications Commission to benefit SpaceX,” said another former top official at the regulator. “Why would he spare NHTSA?”
Musk has repeatedly clashed with federal and state authorities. Last year he called for the FAA chief to resign and sharply criticized the FCC for revoking a 2022 deal for his satellite telecommunications company Starlink to provide rural broadband.
The NHTSA said in a statement that safety remained its top priority and that it would enforce the law on any carmaker in line with its rules and investigations. “The agency’s investigations have been and will continue to be independent,” it added.
Musk, Doge, and Tesla did not immediately respond to requests for comment.
The dismissals, instigated by email on Valentine’s Day, affected roughly 4 percent of the agency’s 800 staff and included employees who had been promised promotions as well as newly hired workers, according to seven people familiar with the matter.
Staff working on vehicle automation safety were disproportionately affected, some of the people said, because the division was only formed in 2023 and so comprised many newer hires still on probation.
The email cited poor performance as a reason for the dismissals. However, one senior figure still at the NHTSA rejected the notion that this was the basis for the layoffs. Another said morale was low after “some huge talent losses.”
Doge’s actions could hamper Tesla’s plans, according to one laid-off agency worker, who said the dismissals would “certainly weaken NHTSA’s ability to understand self-driving technologies.”
“This is an office that should be on the cutting edge of how to handle AVs [autonomous vehicles] and figuring out what future rulemaking should look like,” said another former NHTSA employee. “It would be ironic if Doge slowed down Tesla.”
The company has a lot riding on the swift success of its so-called Full Self-Driving software.
Musk has promised customers and investors that Tesla will launch a driverless ride-hailing service in Austin, Texas, by June and start production of a fleet of autonomous “cybercabs” next year.
To do so, Tesla needs an exemption from the NHTSA to operate a non-standard driverless vehicle on American roads because Musk’s cybercabs have neither pedals nor a steering wheel.
“Letting Doge fire those in the autonomous division is sheer madness—we should be lobbying to add people to NHTSA,” said one manager at Tesla. They “need to be developing a national framework for AVs, otherwise Tesla doesn’t have a prayer for scale in FSD or robotaxis.”
The NHTSA’s decision on the cybercab exemption and the future of its proposed AV STEP program to evaluate and oversee driverless and assisted cars will be closely watched considering the high stakes for Tesla.
Current and former NHTSA officials have privately expressed concerns about Musk’s ambitious rollout plans and how he would wield his influence to ensure a speedy launch of the cybercab and unsupervised FSD on US roads.
The government could “speed up the [AV STEP] application process and weaken it in some way so the safety case is less onerous to meet,” one person told the FT.
The future of crash reporting is another area of concern for those at the agency, following reports that the Trump administration may seek to loosen or eliminate disclosure rules.
After a spate of incidents, the NHTSA in 2021 introduced a standing general order that requires carmakers to report within 24 hours any serious accidents involving vehicles equipped with advanced driver assistance or automated driving systems.
Enforcing the order has been a vital tool for the agency to launch investigations into Tesla and other carmakers because there is no federal regulatory framework to govern cars not under human control.
The order was critical to a December 2023 recall of 2 million Teslas for an update that would force drivers to pay attention when the company’s Autopilot driver-assistance software was engaged.
“Crash reporting is vital, the massive Tesla recall on autopilot could not have occurred without it. We got a huge amount of info on crashes and followed up with demands for more data and video,” said one person involved in the recall. “But everything seems to be fair game right now.”
One person familiar with Musk’s thinking said the company felt unfairly penalized by the rules because its sensors and video recording are more advanced than rivals’ so it files more complete data.
“Reporters see that we are reporting more incidents—many of which have nothing to do with autopilot—and have told the wrong story about our safety record,” the person said. “There is a healthy amount of frustration about that dynamic… the idea our bar for safety is lower is just wrong.”
The NHTSA has shown no signs of backing down, overseeing three new recalls of Tesla vehicles since Trump took office, most recently ordering 46,000 Cybertrucks to be checked after discovering an exterior panel was prone to falling off because of faulty glue.
Of its eight active investigations into Tesla vehicles, five concern Musk’s claims about the capabilities of the company’s Autopilot driver-assistance system and its FSD software—central promises of Tesla’s value proposition and the subject of thousands of consumer complaints.
The agency has received an average of 20 complaints per month about FSD since the software launched, according to an FT analysis of more than 10,000 complaints.
A sharp rise in complaints about so-called “phantom braking” at the start of 2022 triggered one of the investigations. In one complaint, about a mid-October 2024 incident, a Tesla Model 3 in FSD suddenly stopped in front of a car that would have crashed into it had the Tesla driver not taken back control of the vehicle and accelerated.
“Software is so far from being ready to be safely used,” the Model 3 driver said in the complaint.
While multiple Tesla tech updates in the past two years have reduced complaints about braking glitches, other software issues persist. The FT analysis, which used artificial intelligence to categorize complaints, shows errors connected to driver-assist tools such as FSD and Autopilot still make up a large share of complaints made against the company in the past year.
In February, the driver of a 2024 Cybertruck reported that FSD disengaged without warning, causing the vehicle to suddenly accelerate and nearly collide head-on with another car. The owner said they contacted Tesla service but the vehicle was neither inspected nor repaired.
Former Apple executive Jonathan Morrison has been nominated by Trump as the NHTSA’s next administrator and must find a way to navigate the agency through the perceived conflicts of interest with Musk, without being accused of stifling AV innovation.
“Elon has done a lot of really interesting things with tech that were thought to be impossible,” said one former top NHTSA official.
“What concerns me is that Tesla is not known for taking a slow and methodical approach; they move fast and break things, and people are at risk because of that. There have been preventable deaths, so it’s an immediate concern for us.”
WASHINGTON, DC—Over the course of a nearly three-hour committee hearing Wednesday, the nominee to lead NASA for the Trump administration faced difficult questions from US senators who sought commitments to specific projects.
However, maneuvering like a pilot with more than 7,000 hours in jets and ex-military aircraft, entrepreneur and private astronaut Jared Isaacman dodged most of their questions and would not be pinned down. His basic message to members of the Senate Committee on Commerce, Science, and Transportation was that NASA is an exceptional agency that does the impossible, but that it also faces some challenges. NASA, he said, receives an “extraordinary” budget, and he vowed to put taxpayer dollars to efficient use in exploring the universe and retaining the nation’s lead over geopolitical competitors in space.
“I have lived the American dream, and I owe this nation a great debt,” said Isaacman, who founded his first business at 16 in his parents’ basement and would go on to found an online payments company, Shift4, that would make him a billionaire. Isaacman is also an avid pilot who self-funded and led two private missions to orbit on Crew Dragon. Leading NASA would be “the privilege of a lifetime,” he said.
The hearing took place in the Russell Senate Office Building next to the US Capitol on Wednesday morning, in an expansive room with marbled columns and three large chandeliers. There was plenty of spaceflight royalty on hand, including the four astronauts who will fly on the Artemis II mission, as well as the six private citizens who flew with Isaacman on his two Dragon missions.
“This may be the most badass assemblage we’ve had at a Senate hearing,” said US Sen. Ted Cruz, R-Texas, chair of the committee, commenting on the astronauts in the room.
Committed to staying at the Moon?
However, when the meeting got down to brass tacks, there were sharp questions for Isaacman.
Cruz opened the hearing by stating his priorities for NASA clearly and explicitly: He is most focused on ensuring the United States does not cede any of its preeminence to China in space, and this starts with low-Earth orbit and the Moon.
“Make no mistake, the Chinese Communist Party has been explicit in its desire to dominate space, putting a fully functional space station in low-Earth orbit and robotic rovers on the far side of the Moon,” he said. “We are not headed for the next space race; it is already here.”
Cruz wanted Isaacman to commit to not just flying human missions to the Moon, but also to a sustained presence on the surface or in cislunar space.
In response, Isaacman said he would see that NASA returns humans to the Moon as quickly as possible, beating China in the process. This includes flying Artemis II around the Moon in 2026, and then landing the Artemis III mission later this decade.
The disagreement came over what to do after this. Isaacman, echoing the Trump administration, said the agency should also press onward, sending humans to Mars as soon as possible. Cruz, however, wanted Isaacman to say NASA would establish a sustained presence at the Moon. The committee has written authorizing legislation to mandate this, Cruz reminded Isaacman.
“If that’s the law, then I am committed to it,” Isaacman said.
NASA astronauts Reid Wiseman, left, Victor Glover, Christina Koch, and CSA (Canadian Space Agency) astronaut Jeremy Hansen watch as Jared Isaacman testifies on Wednesday. Credit: NASA/Bill Ingalls
Cruz also sought Isaacman’s commitment to flying the International Space Station through at least 2030, which is the space agency’s current date for retiring the orbital laboratory. Isaacman said that seemed reasonable and added that NASA should squeeze every possible bit of research out of it until then. However, when Cruz pressed Isaacman about the Lunar Gateway, a space station NASA is developing to fly in an elliptical orbit around the Moon, Isaacman would not be drawn in. He replied that he would work with Congress and space agency officials to determine which programs are working and which ones are not.
The Gateway is a program championed by Cruz since it is managed by Johnson Space Center in Texas. Parochial interests aside, many stakeholders in the space community question the value of the Gateway to NASA’s exploration plans.
Ten centers and the future of SLS
One of the most tense interactions came between Isaacman and Sen. Maria Cantwell, D-Wash., who wanted commitments from Isaacman that he would not close any of NASA’s 10 field centers, and also that the space agency would fly the Artemis II and Artemis III missions on the Space Launch System rocket.
Regarding field centers, there has been discussion about making the space agency more efficient by closing some of them. This is a politically sensitive topic, and naturally, politicians from states where those centers are located are protective of them. At the same time, there is a general recognition that it would be more cost-effective for NASA to consolidate its operations as part of modernization.
Isaacman did not answer Cantwell’s question about field centers directly. Rather, he said he had not been fully briefed on the administration’s plans for NASA’s structure. “Senator, there’s only so much I can be briefed on in advance of a hearing,” he said. In response to further prodding, Isaacman said, “I fully expect to roll up my sleeves” when it came to ideas to restructure NASA.
Cantwell and other senators pressed Isaacman on plans to use NASA’s Space Launch System rocket as part of the overall plan to get astronauts to the lunar surface. Isaacman sounded as if he were on board with flying Artemis II as envisioned (no surprise, then, that its crew was in the audience) and said he wanted to get the Artemis III crew to the lunar surface as quickly as possible. But he questioned why it has taken NASA so long, and at such great expense, to get its deep space human exploration plans moving.
He noted, correctly, that presidential administrations dating back to 1989 have been releasing plans for sending humans to the Moon or Mars, and that significantly more than $100 billion has been spent on various projects over nearly four decades. For all of that, Isaacman and his private Polaris Dawn crewmates, who flew last year, remain the humans who have traveled farthest from Earth since the Apollo Program.
“Why is it taking us so long, and why is it costing us so much to go to the Moon?” he asked.
In one notable exchange, Isaacman said NASA’s current architecture for the Artemis lunar plans, based on the SLS rocket and Orion spacecraft, is probably not the ideal “long-term” solution to NASA’s deep space transportation plans. The smart reading of this is that Isaacman may be willing to fly the Artemis II and Artemis III missions as conceived, given that much of the hardware is already built. But everything that comes after this, including SLS rocket upgrades and the Lunar Gateway, could be on the chopping block. Ars wrote more about why this is a reasonable path forward last September.
Untangling a relationship with SpaceX
Some of the most intelligent questions came from US Sen. Andy Kim, D-New Jersey. During his time allotment, Kim also pressed Isaacman on the question of a sustained presence on the Moon. Isaacman responded that it was critical for NASA to get astronauts on the Moon, along with robotic missions, to determine the “economic, scientific, and national security value” of the Moon. With this information, he said, NASA will be better positioned to determine whether and why it should have an enduring presence on the Moon.
Kim then asked what, by that logic, the economic, scientific, and national security value of sending humans to Mars would be. Not answering the question directly, Isaacman reiterated that NASA should pursue Moon and Mars exploration in parallel. NASA will need to become much more efficient to afford that, and some of the senators appeared skeptical. But Isaacman seems to truly believe this and wants to take a stab at making NASA more cost-effective and “mission focused.”
Throughout the hearing, Isaacman appeared to win the approval of various senators with his repeated remarks that he was committed to NASA’s science programs and that he was eager to help NASA uphold its reputation for making the impossible possible. He also said it is a “fundamental” obligation of the space agency to inspire the next generation of scientists.
A challenging moment came during questioning from Sen. Edward Markey, D-Mass., who expressed his concern about Isaacman’s relationship to SpaceX founder Elon Musk. Isaacman was previously an investor in SpaceX and has paid for two Dragon missions. In a letter written in March, Isaacman explained how he would disentangle his “actual and apparent” conflicts of interest with SpaceX.
However, Markey wanted to know if Isaacman would be pulling levers at NASA for Musk, and for the financial benefit of SpaceX. Markey pressed repeatedly on whether Musk was in the room at Mar-a-Lago late last year when Trump offered Isaacman the position of NASA administrator. Isaacman declined to say, reiterating multiple times that his meeting was with Trump, not anyone else. Asked if he had discussed his plans for NASA with Musk, Isaacman said, “I have not.”
Earlier in the hearing, Isaacman sought to make clear that he was not beholden to Musk in any way.
“My loyalty is to this nation, the space agency, and its world-changing mission,” Isaacman said. Yes, he acknowledged he would talk to contractors for the space agency. It is important to draw on a broad range of perspectives, Isaacman said. But he wanted to make this clear: NASA works for the nation, and the contractors, he added, “work for us.”
A full committee vote on Isaacman is expected after April 15, and if it is successful, the nomination would advance to the full Senate. Isaacman could be confirmed late this month or in May.
As President Donald Trump signed a slew of executive orders Tuesday aimed at keeping coal power alive in the United States, he repeatedly blamed his predecessor, Democrats, and environmental regulations for the industry’s dramatic contraction over the past two decades.
But across the country, state and local officials and electric grid operators have been confronting a factor in coal’s demise that is not easily addressed with the stroke of a pen: its cost.
For example, Maryland’s only remaining coal generating station, Talen Energy’s 1.3-gigawatt Brandon Shores plant, will be staying open beyond its previously planned June 1 shutdown, under a deal that regional grid operator PJM brokered earlier this year with the company, state officials, and the Sierra Club.
Talen decided two years ago to close the plant because it determined that running it was uneconomical. But PJM said the plant was necessary to maintain the reliability of the grid. To keep Brandon Shores open while extra transmission is built to bolster the grid, Maryland ratepayers will be forced to pay close to $1 billion.
“There’s some people who say that Brandon Shores was retiring because of Maryland’s climate policy,” says David Lapp, who leads the Maryland Office of People’s Counsel, which fought the deal on behalf of ratepayers. “But it was purely a decision made by a generation company that’s operating in a free market.”
Cheaper power from natural gas and renewable energy has been driving down use of coal across the United States for roughly 20 years. Coal plants now provide about 15 percent of the nation’s electricity, down from more than 50 percent in 2000.
In some cases, state and local officials have raised concerns over whether the loss of coal plants will make the grid more vulnerable to blackouts. In Utah, for example, the Intermountain Power Agency’s 1,800-megawatt coal facility in the state’s West Desert is the largest US coal plant scheduled to shut down this year, according to the US Energy Information Administration. IPA is going forward with its plan to switch to natural gas plants that can be made cleaner-operating by using hydrogen fuel. But under a new law, IPA will shut down the coal plant in a condition that allows it to be easily restarted, said IPA spokesman John Ward. The Utah legislature voted last month in favor of a new process in which the state will look for new customers and possibly a new operator to keep the coal plant running.