Whether anything ever lived on Mars is unknown. And the present environment, with harsh temperatures, intense radiation, and a sparse atmosphere, isn’t exactly propitious for life. Despite the red planet’s brutality, lichens that inhabit some of the harshest environments on Earth could possibly survive there.
Lichens are symbionts: two organisms in a cooperative relationship, with a fungal component (most lichens are about 90 percent fungus) and a photosynthetic component (algae or cyanobacteria). To see if some species of lichen have what it takes to survive on Mars, a team of researchers led by botanist Kaja Skubała used facilities at the Space Research Center of the Polish Academy of Sciences to expose the lichen species Diploschistes muscorum and Cetraria aculeata to simulated Mars conditions.
“Our study is the first to demonstrate that the metabolism of the fungal partner in lichen symbiosis was active while being in a Mars-like environment,” the researchers said in a study recently published in IMA Fungus. “X-rays associated with solar flares and SEPs reaching Mars should not affect the potential habitability of lichens on this planet.”
Martian ionizing radiation threatens most forms of life because it can cause damage at the cellular level. It can also interfere with physical, genetic, morphological, and biochemical processes, depending on the organism and the radiation level.
Going to extremes
Lichens have an edge when it comes to survival. They share characteristics with other organisms that can handle high levels of stress, including a low metabolism, not needing much in the way of nutrition, and longevity. Much like tardigrades, lichens can stay in a desiccated state for extended periods until they are rehydrated. Other lichen adaptations to extreme conditions include metabolites that screen out UV rays and melanin pigments that also defend against radiation.
When the cut was made before a critical point in development, the animals failed to close the wound, and the two embryo halves simply spewed cells out into the environment. Cuts made somewhat later, however, showed excellent survival, and the head portion of the embryo could regenerate a tail segment. This tells us that the normal signaling pathways present in the embryo are sufficient to drive the process forward.
The tail of the embryo at this stage, however, doesn’t appear to be capable of rebuilding its head. But the researchers found that they could inhibit Wnt signaling in these posterior fragments, and that was enough to allow a head to develop.
Lacking muscle
One possibility here is that Wnt signaling is widely active in the posterior of the embryo at this point, blocking formation of anterior structures. Alternatively, the researchers hypothesize that the problem is with the muscle cells that normally help organize the formation of a stem-cell-filled blastema, which is needed to kick off the regeneration process. Since the anterior end of the embryo develops earlier, they suggest there may simply not be enough muscle cells in the tail to kick off this process at early stages of development.
To test their hypothesis, they performed a somewhat unusual experiment. They started by cutting off the tails of embryos and saving them for 24 hours. At that point, they cut the front end off the tails, creating a new wound to heal. This time, regeneration proceeded as normal, and the tails grew a new head. This isn’t definitive evidence that muscle cells are what’s missing at early stages, but it does indicate that some key developmental step happens in the tail within the 24-hour window after the first cut.
The results reinforce the idea that regeneration of major body parts requires the re-establishment of the signals that lay out organization of the embryo in development—something that gets complicated if those signals are currently acting to organize the embryo. And it clearly shows that the cells needed to do this reorganization aren’t simply set aside early on in development but instead take some time to appear. All of that information will help clarify the bigger-picture question of how these animals manage such a complex regeneration process.
The Curiosity mission started near the bottom of the crater, at the base of a formation called Aeolis Mons, or Mount Sharp, where NASA expected to find the earliest geological samples. The idea then was to climb up Mount Sharp and collect samples from later and later geological periods at increasing elevations, tracing the history of habitability and the great drying-up of Mars. Along the way, Curiosity finally found the carbon the satellites had missed.
An imperfect cycle
Tutolo’s team focused their attention on four sediment samples Curiosity drilled after climbing over a kilometer up Mount Sharp. The samples were examined with the rover’s Chemistry and Mineralogy instrument, which uses X-ray diffraction to determine their composition. It turned out the samples contained roughly 5 to 10 percent siderite. “It was an iron carbonate, directly analogous to a mineral called calcite found in sedimentary rocks like limestone. The difference is it has iron in its cation site rather than calcium,” Tutolo explained. “We expected that because Mars is much richer in iron—that’s why it is the red planet.”
The siderite found in the samples was also pure, which Tutolo thinks indicates it formed through an evaporation process akin to what we see in evaporating lakes on Earth. This, in turn, was the first evidence we’ve found of the ancient Martian carbon cycle. “Now we have evidence that confirms the models,” Tutolo says. The carbon from the atmosphere was being sequestered in the rocks on Mars just as it is on Earth. The problem was that, unlike on Earth, it couldn’t get out of those rocks.
“On Earth, whenever oceanic plates get subducted into the mantle, all of the limestone that was formed before gets cooked off, and the carbon dioxide gets back to the atmosphere through volcanoes,” Tutolo explains. Mars, on the other hand, has never had efficient plate tectonics. A large portion of carbon that got trapped in Martian rocks stayed in those rocks forever, thinning out the atmosphere. While it’s likely the red planet had its own carbon cycle, it was an imperfect one that eventually turned it into the lifeless desert it is today.
For their study, Gaby et al. organized an on-campus “Speed-Friending” event for 40 female volunteers, consisting of four distinct phases. First, participants had their headshots taken. Next, they looked at pictures of all the other women participating and rated friendship potential based solely on visual cues. Then each woman wore a T-shirt for 12 hours as she went about her daily activities; the worn shirts were then collected and placed in plastic bags. Finally, participants rated the friendship potential of anonymized participants based solely on smelling each T-shirt, followed by a live session during which they interacted with each woman for four minutes and rated their friendship potential. This was followed by a second round of smelling the T-shirts and once again rating friendship potential.
The results: There was a strong correlation between the in-person evaluations of friendship potential and those based solely on smelling the T-shirts, with remarkable consistency. And the ratings made after live interactions accurately predicted changes in the assessments made in the final round of odor-based testing, suggesting a learned response element.
“Everybody showed they had a consistent signature of what they liked,” said co-author Vivian Zayas of Cornell University. “And the consistency was not that, in the group, one person smelled really bad and one person smelled really good. No, it was idiosyncratic. I might like person A over B over C based on scent, and this pattern predicts who I end up liking in the chat. People take a lot in when they’re meeting face to face. But scent—which people are registering at some level, though probably not consciously—forecasts whether you end up liking this person.”
The authors acknowledged that their study was limited to college-aged heterosexual women and that there could be differences in how olfactory and other cues function in other groups: older or younger women, non-American women, men, and so forth. “Future studies might consider a wider age range, investigate individuals at different stages of development, focus on how these cues function in male-male platonic interactions, or examine how scent in daily interactions shapes friendship judgments in other cultures,” they wrote.
For six years, Ziska and a large team of research colleagues in China and the US grew rice in controlled fields, subjecting it to varying levels of carbon dioxide and temperature. They found that when both increased, in line with projections by climate scientists, the amount of arsenic and inorganic arsenic in rice grains also went up.
Arsenic is found naturally in some foods, including fish and shellfish, and in waters and soils.
Inorganic arsenic is found in industrial materials and gets into water—including water used to submerge rice paddies.
Rice is easily overrun by weeds and other plants, but it has one advantage: It grows well in water. So farmers germinate the seeds and, when the seedlings are ready, plant them in wet soil. They then flood their fields, which suppresses weeds but allows the rice to flourish. Rice readily absorbs the water and everything in it—including arsenic, whether naturally occurring or not. Most of the world’s rice is grown this way.
The new research demonstrates that climate change will ramp up those levels.
“What happens in rice, because of complex biogeochemical processes in the soil, when temperatures and CO2 go up, inorganic arsenic also does,” Ziska said. “And it’s this inorganic arsenic that poses the greatest health risk.”
Exposure to inorganic arsenic has been linked to skin, bladder, and lung cancers, as well as heart disease and neurological problems in infants. Research has found that in parts of the world where rice consumption is high, inorganic arsenic increases cancer risk.
Building an observatory on the Moon would be a huge challenge—but it would be worth it.
Credit: Aurich Lawson | Getty Images
There is a signal, born in the earliest days of the cosmos. It’s weak. It’s faint. It can barely register on even the most sensitive of instruments. But it contains a wealth of information about the formation of the first stars, the first galaxies, and the mysteries of the origins of the largest structures in the Universe.
Despite decades of searching for this signal, astronomers have yet to find it. The problem is that our Earth is too noisy, making it nearly impossible to capture this whisper. The solution is to go to the far side of the Moon, using its bulk to shield our sensitive instruments from the cacophony of our planet.
Building telescopes on the far side of the Moon would be the greatest astronomical challenge ever considered by humanity. And it would be worth it.
The science
We have been scanning and mapping the wider cosmos for a century now, ever since Edwin Hubble discovered that the Andromeda “nebula” is actually a galaxy sitting 2.5 million light-years away. Our powerful Earth-based observatories have successfully mapped the detailed locations of millions of galaxies, and upcoming observatories like the Vera C. Rubin Observatory and Nancy Grace Roman Space Telescope will map millions more.
And for all that effort, all that technological might and scientific progress, we have surveyed less than 1 percent of the volume of the observable cosmos.
The vast bulk of the Universe will remain forever unobservable to traditional telescopes. The reason is twofold. First, most galaxies will simply be too dim and too far away. Even the James Webb Space Telescope, which is explicitly designed to observe the first generation of galaxies, has such a limited field of view that it can only capture a handful of targets at a time.
Second, there was a time, within the first few hundred million years after the Big Bang, before stars and galaxies had even formed. Dubbed the “cosmic dark ages,” this time naturally makes for a challenging astronomical target because there weren’t exactly a lot of bright sources to generate light for us to look at.
But there was neutral hydrogen. Most of the Universe is made of hydrogen, making it the most common element in the cosmos. Today, almost all of that hydrogen is ionized, existing in a super-heated plasma state. But before the first stars and galaxies appeared, the cosmic reserves of hydrogen were cool and neutral.
Neutral hydrogen is made of a single proton and a single electron. Each of these particles has a quantum property known as spin (which kind of resembles the familiar, macroscopic property of spin, but it’s not quite the same—though that’s a different article). In its lowest-energy state, the proton and electron will have spins oriented in opposite directions. But sometimes, through pure random quantum chance, the electron will spontaneously flip around. Eventually, the atom relaxes, and the electron flips back to where it belongs. This process releases a small amount of energy in the form of a photon with a wavelength of 21 centimeters.
This quantum transition is exceedingly rare, but with enough neutral hydrogen, you can build a substantial signal. Indeed, observations of 21-cm radiation have been used extensively in astronomy, especially to build maps of cold gas reservoirs within the Milky Way.
So the cosmic dark ages aren’t entirely dark; those clouds of primordial neutral hydrogen are emitting tremendous amounts of 21-cm radiation. But that radiation was emitted in the distant past, well over 13 billion years ago. As it has traveled through the cosmic distances, all those billions of light-years on its way to our eager telescopes, it has experienced the redshift effects of our expanding Universe.
By the time that dark age 21-cm radiation reaches us, it has stretched by a factor of 10, turning the neutral hydrogen signal into radio waves with wavelengths of around 2 meters.
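To put rough numbers on that stretch, here is a minimal Python sketch of the redshift arithmetic. The sample redshifts are illustrative assumptions spanning the dark ages, not values quoted by the proposals discussed below; note how a redshift near 17 would land the signal just below the FM broadcast band.

```python
# Minimal sketch: how cosmic expansion stretches the 21-cm line.
# The redshifts below are assumed sample values spanning the dark ages;
# they are not figures taken from the mission proposals in this article.

C_M_PER_S = 299_792_458          # speed of light, m/s
REST_WAVELENGTH_M = 0.211        # neutral hydrogen hyperfine line, ~21.1 cm

def observed_wavelength_m(z: float) -> float:
    """Wavelength after being stretched by a factor of (1 + z)."""
    return REST_WAVELENGTH_M * (1.0 + z)

for z in (9, 17, 30):            # assumed illustrative redshifts
    wl = observed_wavelength_m(z)
    freq_mhz = C_M_PER_S / wl / 1e6
    print(f"z = {z:2d}: wavelength ~ {wl:4.1f} m, frequency ~ {freq_mhz:5.1f} MHz")
```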
The astronomy
Humans have become rather fond of radio transmissions in the past century. Unfortunately, the peak of this primordial signal from the dark ages sits right below the FM dial of your radio, which pretty much makes it impossible to detect from Earth. Our emissions are simply too loud, too noisy, and too difficult to remove. Teams of astronomers have devised clever ways to reduce or eliminate interference, featuring arrays scattered around the most desolate deserts in the world, but they have not been able to confirm the detection of a signal.
So those astronomers have turned in desperation to the quietest desert they can think of: the far side of the Moon.
It wasn’t until 1959 that the Soviet Luna 3 probe gave us our first glimpse of the Moon’s far side, and it wasn’t until 2019 that the Chang’e 4 mission made the first soft landing there. Compared to the near side, and especially to low-Earth orbit, the far side sees very little human activity. We’ve had more active missions on the surface of Mars than on the lunar far side.
And that makes the far side of the Moon the ideal location for a dark-age-hunting radio telescope, free from human interference and noise.
Ideas abound to make this a possibility. The first serious attempt was DARE, the Dark Ages Radio Explorer. Rather than attempting the audacious goal of building an actual telescope on the surface, DARE was a NASA-funded concept to develop an observatory (and when it comes to radio astronomy, an “observatory” can be as simple as a single antenna) to orbit the Moon and take data when it’s on the opposite side from the Earth.
For various bureaucratic reasons, NASA didn’t develop the DARE concept further. But creative astronomers have put forward even bolder proposals.
The FarView concept, for example, is a proposed radio telescope array that would dwarf anything on the Earth. It would be sensitive to frequencies between 5 and 40 MHz, allowing it to target the dark ages and the birth of the first stars. The proposed design contains 100,000 individual elements, with each element consisting of a single, simple dipole antenna, dispersed over a staggering 200 square kilometers. It would be infeasible to deliver that many antennae directly to the surface of the Moon. Instead, we’d have to build them in place, mining lunar regolith and turning it into the necessary components.
The design of this array is what’s called an interferometer. Instead of a single big dish, the individual antennae collect data on their own, and all their signals are correlated together later. The effective resolution of an interferometer matches that of a single dish as wide as the largest separation between its elements. The downside of an interferometer is that most of the incoming radiation just hits dirt (or in this case, lunar regolith), so the interferometer has to collect a lot of data to build up a decent signal.
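As a rough illustration of that resolution rule, here is a short sketch under stated assumptions: the ~14 km baseline is simply the side of a square covering FarView’s quoted 200-square-kilometer footprint, not an actual design figure.

```python
import math

# Rough sketch of the interferometer resolution rule: theta ~ lambda / B,
# where B is the longest baseline between elements. The baseline here is
# an assumption: the side of a square with FarView's quoted 200 km^2
# footprint. Real element layouts would differ.

C_M_PER_S = 299_792_458
BASELINE_M = math.sqrt(200e6)    # sqrt(200,000,000 m^2) ~ 14.1 km

for freq_mhz in (5, 40):         # FarView's quoted frequency range
    wavelength_m = C_M_PER_S / (freq_mhz * 1e6)
    theta_arcmin = math.degrees(wavelength_m / BASELINE_M) * 60
    print(f"{freq_mhz:2d} MHz: lambda ~ {wavelength_m:4.1f} m, "
          f"resolution ~ {theta_arcmin:4.1f} arcmin")
```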
Attempts at these kinds of observations on Earth require constant maintenance and data cleaning to remove radio interference, which has essentially sunk every effort to measure the dark ages so far. But a lunar-based interferometer would have all the time it needs, providing a much cleaner, easier-to-analyze stream of data.
If you’re not in the mood for building 100,000 antennae on the Moon’s surface, then another proposal seeks to use the Moon’s natural features—namely, its craters. If you squint hard enough, they kind of look like radio dishes already. The idea behind the project, named the Lunar Crater Radio Telescope, is to find a suitable crater and use it as the support structure for a gigantic, kilometer-wide telescope.
This idea isn’t without precedent. Both the beloved Arecibo and the newcomer FAST observatories used natural depressions in the landscapes of Puerto Rico and China, respectively, to take most of the load off the engineering of their giant dishes. The Lunar Crater Radio Telescope would be larger than both of those combined, and it would be tuned to hunt for dark ages radio signals that we can’t observe using Earth-based observatories because they simply bounce off the Earth’s ionosphere (even before we have to worry about any additional human interference). Essentially, the only way that humanity can access those wavelengths is by going beyond our ionosphere, and the far side of the Moon is the best place to park an observatory.
The engineering
The engineering challenges we need to overcome to achieve these scientific dreams are not small. So far, humanity has soft-landed only a single mission on the far side of the Moon, and both of these proposals require an immense upgrade to our capabilities. That’s exactly why both far-side concepts were funded by NIAC, NASA’s Innovative Advanced Concepts program, which gives grants to researchers who need time to flesh out high-risk, high-reward ideas.
With NIAC funds, the designers of the Lunar Crater Radio Telescope, led by Saptarshi Bandyopadhyay at the Jet Propulsion Laboratory, have already thought through the challenges they will need to overcome to make the mission a success. Their mission leans heavily on another JPL concept, the DuAxel, which consists of a rover that can split into two single-axle rovers connected by a tether.
To build the telescope, several DuAxels are sent to the crater. One rover of each pair “sits” to anchor itself on the crater rim, while the other crawls down the slope. At the center, they are met by a telescope lander that has deployed guide wires and the wire mesh frame of the telescope (again, it helps for assembly purposes that radio dishes are just strings of metal in various arrangements). The anchored rovers on the crater rim then hoist their companions back up, unfolding the mesh and lofting the receiver above the dish.
The FarView observatory is a much more capable instrument—if deployed, it would be the largest radio interferometer ever built—but it’s also much more challenging. Led by Ronald Polidan of Lunar Resources, Inc., it relies on in-situ manufacturing processes. Autonomous vehicles would dig up regolith, process and refine it, and spit out all the components that make an interferometer work: the 100,000 individual antennae, the kilometers of cabling to run among them, the solar arrays to power everything during lunar daylight, and batteries to store energy for round-the-lunar-clock observing.
If that sounds intense, it’s because it is, and it doesn’t stop there. An astronomical telescope is more than a data collection device. It also needs to crunch some numbers and get that precious information back to a human to actually study it. That means that any kind of far-side observing platform, especially one like these proposals that will ingest truly massive amounts of data, would need to make one of two choices.
Choice one is to perform most of the data correlation and processing on the lunar surface, sending back only highly refined products to Earth for further analysis. Achieving that would require landing, installing, and running what is essentially a supercomputer on the Moon, which comes with its own weight, robustness, and power requirements.
The other choice is to keep the installation as lightweight as possible and send the raw data back to Earthbound machines to handle the bulk of the processing and analysis tasks. This kind of data throughput is outright impossible with current technology but could be achieved with experimental laser-based communication strategies.
The future
Astronomical observatories on the far side of the Moon face a bit of a catch-22. To deploy and run a world-class facility, either embedded in a crater or strung out over the landscape, we need some serious lunar manufacturing capabilities. But those same capabilities come with all the annoying radio fuzz that already bedevils Earth-based radio astronomy.
Perhaps the best solution is to open up the Moon to commercial exploitation but maintain the far side as a sort of out-world nature preserve, owned by no company or nation, left to scientists to study and use as a platform for pristine observations of all kinds.
It will take humanity several generations, if not more, to develop the capabilities needed to finally build far-side observatories. But it will be worth it, as those facilities will open up the unseen Universe for our hungry eyes, allowing us to pierce the ancient fog of our Universe’s past, revealing the machinations of hydrogen in the dark ages, the birth of the first stars, and the emergence of the first galaxies. It will be a fountain of cosmological and astrophysical data, the richest possible source of information about the history of the Universe.
From Galileo grinding and polishing his first lenses through the innovations that led to the explosion of digital cameras, astronomy has a storied tradition of turning the technological triumphs needed to achieve its science goals into the foundations of everyday devices that make life on Earth much better. If we’re looking for reasons to industrialize and inhabit the Moon, the noble goal of pursuing a better understanding of the Universe makes for a fine motivation. And we’ll all be better off for it.
More than 45 million people in the US are fans of bowling, with national competitions awarding millions of dollars. Bowlers usually rely on instinct and experience, earned through lots and lots of practice, to boost their strike percentage. A team of physicists has come up with a mathematical model to better predict ball trajectories, outlined in a new paper published in the journal AIP Advances. The resulting equations take into account such factors as the composition and resulting pattern of the oil used on bowling lanes, as well as the inevitable asymmetries of bowling balls and player variability.
The authors already had a strong interest in bowling. Three are regular bowlers and quite skilled at the sport; a fourth, Curtis Hooper of Loughborough University in the UK, is a coach for Team England at the European Youth Championships. Hooper has been studying the physics of bowling for several years, including an analysis of the 2017 Weber Cup, as well as papers devising mathematical models for the application of lane conditioners and oil patterns in bowling.
The calculations involved in such research are very complicated because there are so many variables that can affect a ball’s trajectory after being thrown. Case in point: the thin layer of oil that is applied to bowling lanes, which Hooper found can vary widely in volume and shape among different venues, plus the lack of uniformity in applying the layer, which creates an uneven friction surface.
Per the authors, most research to date has relied on statistically analyzing empirical data, such as a 2018 report by the US Bowling Congress that looked at data generated by 37 bowlers. (Hooper relied on ball-tracking data for his 2017 Weber Cup analysis.) A 2009 analysis showed that the optimal location for the ball to strike the headpin is about 6 centimeters off-center, while the optimal entry angle is about 6 degrees. However, such an approach struggles to account for the inevitable player variability. No bowler hits their target 100 percent of the time; per Hooper et al., even the best professionals, who can come within 0.1 degrees of the optimal launch angle, will see that slight variation translate into a difference of several centimeters down-lane.
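That sensitivity is straightforward trigonometry. Here is a quick sketch assuming a straight-line path over the regulation 60-foot (18.29 m) distance from the foul line to the headpin; real shots hook, so the true positional error is larger.

```python
import math

# Quick check of how a small launch-angle error grows down-lane, assuming
# a straight-line path over the regulation 60 ft (18.29 m) from the foul
# line to the headpin. Hook is ignored, so treat these as rough floors
# on the real positional error.

LANE_LENGTH_M = 18.29

for angle_error_deg in (0.1, 0.5, 1.0):
    offset_cm = math.tan(math.radians(angle_error_deg)) * LANE_LENGTH_M * 100
    print(f"{angle_error_deg:.1f} deg off -> ~{offset_cm:4.1f} cm at the pins")
```

For a 0.1-degree error this works out to roughly 3 centimeters at the pins, consistent with the "several centimeters" the authors describe.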
Regardless of the identity of the satellite, this image is remarkable for several reasons.
First, despite so many satellites flying in space, it’s still rare to see a real picture—not just an artist’s illustration—of what one actually looks like in orbit. For example, SpaceX has released photos of Starlink satellites in launch configuration, where dozens of the spacecraft are stacked together to fit inside the payload compartment of the Falcon 9 rocket. But there are fewer well-resolved views of a satellite in its operational environment, with solar arrays extended like the wings of a bird.
This is changing as commercial companies place more and more imaging satellites in orbit. Several companies provide “non-Earth imaging” services by repurposing Earth observation cameras to view other objects in space. These views can reveal information that can be useful in military or corporate espionage.
Second, the Google Earth capture offers a tangible depiction of a satellite’s speed. An object in low-Earth orbit must travel at more than 17,000 mph (more than 27,000 km per hour) to keep from falling back into the atmosphere.
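That figure falls straight out of circular-orbit mechanics. Here is a minimal sketch; the 500 km altitude is an assumed example, since the pictured satellite’s actual orbit isn’t specified.

```python
import math

# Circular-orbit speed: v = sqrt(GM / r). The 500 km altitude is an
# assumed example; the pictured satellite's actual orbit isn't known here.

GM_EARTH = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH_M = 6_371_000.0      # mean Earth radius, m
ALTITUDE_M = 500_000.0       # assumed low-Earth-orbit altitude, m

v = math.sqrt(GM_EARTH / (R_EARTH_M + ALTITUDE_M))
print(f"~{v:,.0f} m/s = ~{v * 3.6:,.0f} km/h = ~{v / 0.44704:,.0f} mph")
# -> roughly 7,600 m/s, or about 27,400 km/h and 17,000 mph
```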
While the B-2’s motion caused it to appear a little smeared in the Google Earth image a few years ago, the satellite’s velocity created a different artifact. The satellite appears five times in different colors, which tells us something about how the image was made. Airbus’ Pleiades satellites take pictures in multiple spectral bands: blue, green, red, panchromatic, and near-infrared.
At lower left, the black outline of the satellite is the near-infrared capture. Moving up, you can see the satellite in red, blue, and green, followed by the panchromatic, or black-and-white, snapshot with the sharpest resolution. Typically, the Pleiades satellites record these images a split-second apart and combine the colors to generate an accurate representation of what the human eye might see. But this doesn’t work so well for a target moving at nearly 5 miles per second.
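To see why those split-second gaps matter, consider how far a target at that speed travels between band exposures. Both numbers in this sketch are assumptions for illustration; the exact Pleiades inter-band timing isn’t given here.

```python
# Why each spectral band catches the satellite in a different spot: even a
# tiny gap between band exposures translates into a large displacement at
# orbital speeds. The speed and gaps below are assumed illustrative values,
# not published Pleiades timing figures.

RELATIVE_SPEED_M_S = 7_600.0       # ~"nearly 5 miles per second," per the article

for gap_s in (0.02, 0.05, 0.10):   # assumed gaps between band exposures
    print(f"{gap_s:.2f} s between bands -> "
          f"~{RELATIVE_SPEED_M_S * gap_s:,.0f} m of apparent motion")
```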
The Trump administration has been using federal research funding as a cudgel. The government has blocked billions of dollars in research funds and threatened to put a hold on even more in order to compel universities to adopt what it presents as essential reforms. In the case of Columbia University, that includes changes in the leadership of individual academic departments.
On Friday, the government sent a list of demands that it presented as necessary to “maintain Harvard’s financial relationship with the federal government.” On Monday, Harvard responded that accepting these demands would “allow itself to be taken over by the federal government.” The university also changed its home page into an extensive tribute to the research that would be eliminated if the funds were withheld.
Harvard posted the letter it received from federal officials, listing their demands. Some of it is what you’d expect, given the Trump administration’s interests. The admissions and hiring departments would be required to drop all diversity efforts, with data on faculty and students to be handed over to the federal government for auditing. As at other institutions, there are also some demands presented as efforts against antisemitism, such as the defunding of pro-Palestinian groups. More generally, the letter demands that university officials “prevent admitting students hostile to the American values and institutions.”
There are also a bunch of basic culture war items, such as a demand for a mask ban, and a ban on “de-platforming” speakers on campus. In addition, the government wants the university to screen all faculty hires for plagiarism issues, which is what caused Harvard’s former president to resign after she gave testimony to Congress. Any violation of these updated conduct codes by a non-citizen would require an immediate report to the Department of Homeland Security and State Department, presumably so they can prepare to deport them.
Officials blame changing requirements for many of the delays and rising costs. NASA managers dramatically changed their plans for the Gateway program in 2020, when they decided to launch the PPE and HALO on the same rocket, prompting major changes to their designs.
Jared Isaacman, Trump’s nominee for NASA administrator, declined to commit to the Gateway program during a confirmation hearing before the Senate Commerce Committee on April 9. Sen. Ted Cruz (R-Texas), the committee’s chairman, pressed Isaacman on the Lunar Gateway. Cruz is one of the Gateway program’s biggest backers in Congress since it is managed by Johnson Space Center in Texas. If it goes ahead, Gateway would guarantee numerous jobs at NASA’s mission control in Houston throughout its 15-year lifetime.
“That’s an area that if I’m confirmed, I would love to roll up my sleeves and further understand what’s working right?” Isaacman replied to Cruz. “What are the opportunities the Gateway presents to us? And where are some of the challenges, because I think the Gateway is a component of many programs that are over budget and behind schedule.”
The pressure shell for the Habitation and Logistics Outpost (HALO) module arrived in Gilbert, Arizona, last week for internal outfitting. Credit: NASA/Josh Valcarcel
Checking in with Gateway
Nevertheless, the Gateway program achieved a milestone one week before Isaacman’s confirmation hearing. The metallic pressure shell for the HALO module was shipped from its factory in Italy to Arizona. The HALO module is only partially complete, and it lacks life support systems and other hardware it needs to operate in space.
Over the next couple of years, Northrop Grumman will outfit the habitat with those components and connect it with the Power and Propulsion Element under construction at Maxar Technologies in Silicon Valley. This stage of spacecraft assembly, along with prelaunch testing, often uncovers problems that can drive up costs and trigger more delays.
Ars recently spoke with Jon Olansen, a bio-mechanical engineer and veteran space shuttle flight controller who now manages the Gateway program at Johnson Space Center. A transcript of our conversation with Olansen is below. It is lightly edited for clarity and brevity.
Ars: The HALO module has arrived in Arizona from Italy. What’s next?
Olansen: This HALO module went through significant effort from the primary and secondary structure perspective out at Thales Alenia Space in Italy. That was most of their focus in getting the vehicle ready to ship to Arizona. Now that it’s in Arizona, Northrop is setting it up in their facility there in Gilbert to be able to do all of the outfitting of the systems we need to actually execute the missions we want to do, keep the crew safe, and enable the science that we’re looking to do. So, if you consider your standard spacecraft, you’re going to have all of your command-and-control capabilities, your avionics systems, your computers, your network management, all of the things you need to control the vehicle. You’re going to have your power distribution capabilities. HALO attaches to the Power and Propulsion Element, and it provides the primary power distribution capability for the entire station. So that’ll all be part of HALO. You’ll have your standard thermal systems for active cooling. You’ll have the vehicle environmental control systems that will need to be installed, [along with] some of the other crew systems that you can think of, from lighting, restraint, mobility aids, all the different types of crew systems. Then, of course, all of our science aspects. So we have payload lockers, both internally, as well as payload sites external that we’ll have available, so pretty much all the different systems that you would need for a human-rated spacecraft.
Ars: What’s the latest status of the Power and Propulsion Element?
Olansen: PPE is fairly well along in their assembly and integration activities. The central cylinder has been integrated with the propulsion tanks… Their propulsion module is in good shape. They’re working on the avionics shelves associated with that spacecraft. So, with both vehicles, we’re really trying to get the assembly done in the next year or so, so we can get into integrated spacecraft testing at that point in time.
Ars: What’s in the critical path in getting to the launch pad?
Olansen: The assembly and integration activity is really the key for us. It’s to get to the full vehicle level test. All the different activities that we’re working on across the vehicles are making substantive progress. So, it’s a matter of bringing them all in and doing the assembly and integration in the appropriate sequences, so that we get the vehicles put together the way we need them and get to the point where we can actually power up the vehicles and do all the testing we need to do. Obviously, software is a key part of that development activity, once we power on the vehicles, making sure we can do all the control work that we need to do for those vehicles.
[There are] a couple of key pieces I will mention along those lines. On the PPE side, we have the electrical propulsion system. The thrusters associated with that system are being delivered. Those will go through acceptance testing at the Glenn Research Center [in Ohio] and then be integrated on the spacecraft out at Maxar; so that work is ongoing as we speak. Out at ESA, ESA is providing the HALO lunar communication system. That’ll be delivered later this year. That’ll be installed on HALO as part of its integrated test and checkout and then launch on HALO. That provides the full communication capability down to the lunar surface for us, where PPE provides the communication capability back to Earth. So, those are key components that we’re looking to get delivered later this year.
Jon Olansen, manager of NASA’s Gateway program at Johnson Space Center in Houston. Credit: NASA/Andrew Carlsen
Ars: What’s the status of the electric propulsion thrusters for the PPE?
Olansen: The first one has actually been delivered already, so we’ll have the opportunity to go through, like I said, the acceptance testing for those. The other flight units are right on the heels of the first one that was delivered. They’ll make it through their acceptance testing, then get delivered to Maxar, like I said, for integration into PPE. So, that work is already in progress. [The Power and Propulsion Element will have three xenon-fueled 12-kilowatt Hall thrusters produced by Aerojet Rocketdyne, and four smaller 6-kilowatt thrusters.]
Ars: The Government Accountability Office (GAO) outlined concerns last year about keeping the mass of Gateway within the capability of its rocket. Has there been any progress on that issue? Will you need to remove components from the HALO module and launch them on a future mission? Will you narrow your launch windows to only launch on the most fuel-efficient trajectories?
Olansen: We’re working the plan. Now that we’re launching the two vehicles together, we’re working mass management. Mass management is always an issue with spacecraft development, so it’s no different for us. All of the things you described are all knobs that are in the trade space as we proceed, but fundamentally, we’re working to design the optimal spacecraft that we can, first. So, that’s the key part. As we get all the components delivered, we can measure mass across all of those components, understand what our integrated mass looks like, and we have several different options to make sure that we’re able to execute the mission we need to execute. All of those will be balanced over time based on the impacts that are there. There’s not a need for a lot of those decisions to happen today. Those that are needed from a design perspective, we’ve already made. Those that are needed from enabling future decisions, we’ve already made all of those. So, really, what we’re working through is being able to, at the appropriate time, make decisions necessary to fly the vehicle the way we need to, to get out to NRHO [Near Rectilinear Halo Orbit, an elliptical orbit around the Moon], and then be able to execute the Artemis missions in the future.
Ars: The GAO also discussed a problem with Gateway’s controllability with something as massive as Starship docked to it. What’s the latest status of that problem?
Olansen: There are a number of different risks that we work through as a program, as you’d expect. We continue to look at all possibilities and work through them with due diligence. That’s our job, to be able to do that on a daily basis. With the stack controllability [issue], where that came from for GAO, we were early in the assessments of what the potential impacts could be from visiting vehicles, not just any one [vehicle] but any visiting vehicle. We’re a smaller space station than ISS, so making sure we understand the implications of thruster firings as vehicles approach the station, and the implications associated with those, is where that stack controllability conversation came from.
The bus that Maxar typically designs doesn’t generally have to deal with docking. Part of what we’ve been doing is working through ways that we can use the capabilities that are already built into that spacecraft differently to provide us the control authority we need when we have visiting vehicles, as well as working with the visiting vehicles and their design to make sure that they’re minimizing the impact on the station. So, the combination of those two has largely, over the past year since that report came out, improved where we are from a stack controllability perspective. We still have forward work to close out all of the different potential cases that are there. We’ll continue to work through those. That’s standard forward work, but we’ve been able to make some updates, some software updates, some management updates, and logic updates that really allow us to control the stack effectively and have the right amount of control authority for the dockings and undockings that we will need to execute for the missions.
Pitting the Brown Bess against the long rifle, testing the first military submarine, and more.
The colonial victory against the British in the American Revolutionary War was far from a predetermined outcome. In addition to good strategy and the timely appearance of key allies like the French, Continental soldiers relied on several key technological innovations in weaponry. But just how accurate is an 18th-century musket when it comes to hitting a target? Did the rifle really determine the outcome of the war? And just how much damage did cannon inflict? A team of military weapons experts and re-enactors set about testing some of those questions in a new NOVA documentary, Revolutionary War Weapons.
The documentary examines the firing range and accuracy of Brown Bess muskets and long rifles used by both the British and the Continental Army during the Battles of Lexington and Concord; the effectiveness of Native American tomahawks for close combat (no, they were usually not thrown as depicted in so many popular films, but there are modern throwing competitions today); and the effectiveness of cannons against the gabions and other defenses employed to protect the British fortress during the pivotal Siege of Yorktown. There is even a fascinating segment on the first military submarine, dubbed “the Turtle,” created by American inventor David Bushnell.
To capture all the high-speed ballistics action, director Stuart Powell relied upon a range of high-speed cameras called the Phantom Range. “It is like a supercomputer,” Powell told Ars. “It is a camera, but it doesn’t feel like a camera. You need to be really well-coordinated on the day when you’re using it because it bursts for, like, 10 seconds. It doesn’t record constantly because it’s taking so much data. Depending on what the frame rate is, you only get a certain amount of time. So you’re trying to coordinate that with someone trying to fire a 250-year-old piece of technology. If the gun doesn’t go off, if something goes wrong on set, you’ll miss it. Then it takes five minutes to reboot and get ready for the new shot. So a lot of the shoot revolves around the camera; that’s not normally the case.”
Constraints to keep the run time short meant that not every experiment the crew filmed ended up in the final documentary, according to Powell. For instance, there was one experiment in a hypoxia chamber for the segment on the Turtle, meant to see how long a person could function once the sub had descended, limiting the oxygen supply. “We felt there was slightly too much on the Turtle,” said Powell. “It took up a third of the whole film.” Also cut, for similar reasons, were power demonstrations for the musket, using boards instead of ballistic gel. But these cuts were anomalies in the tightly planned shooting schedule; most of the footage found its way onscreen.
The task of setting up all those field experiments fell to experts like military historian and weapons expert Joel Bohy, who is a frequent appraiser for Antiques Roadshow. We caught up with Bohy to learn more.
Redcoat re-enactors play out the Battle of Lexington. GBH/NOVA
Ars Technica: Obviously you can’t work with the original weapons because they’re priceless. How did you go about making replicas as close as possible to the originals?
Joel Bohy: Prior to our live fire studies, I started to collect the best contemporary reproductions of all of the different arms that were used. Over the years, I’ve had these custom-built, and now I have about 14 of them, so that we can cover pretty much every different type of arm used in the Revolution. I have my pick when we want to go out to the range and shoot at ballistics gelatin. We’ve published some great papers. The latest one was in conjunction with a bullet strike study where we went through and used modern forensic techniques to not only locate where each shooter was, what caliber the gun was, using ballistics rods and lasers, but we also had 18th-century house sections built and shot at the sections to replicate that damage. It was a validation study, and those firearms came in very handy.
Ars Technica: What else can we learn from these kinds of experiments?
Joel Bohy: One of the things that’s great about the archeology end of it is when we’re finding fired ammunition. I mostly volunteer with archaeologists on the Revolutionary War. One of my colleagues has worked on the Little Bighorn battlefield doing firing pin impressions, which leave a fingerprint, so he could track troopers and Native Americans across the battlefields. With [the Revolutionary War], it’s harder to do because we’re using smooth-bore guns that don’t necessarily leave a signature. But what they do leave is a caliber, and they also leave a location. We GIS all this stuff and map it, and it’s told us things about the battles that we never knew before. We just did one last August that hasn’t been released yet that changes where people thought a battle took place.
We like to combine that with our live fire studies. So when we [conduct the latter], we take a shot, then we metal detect each shot, bag it, tag it. We record all the data that we see on our musket balls that we fired so that when we’re on an archeology project, we can correlate that with what we see in the ground. We can see if it hits a tree, if it hits rocks, how close was a soldier when they fired—all based upon the deformation of the musket ball.
Ars Technica: What is the experience of shooting a replica of a musket compared to, say, a modern rifle?
Joel Bohy: It’s a lot different. When you’re firing a modern rifle, you pull the trigger and it’s very quick—a matter of milliseconds and the bullet’s downrange. With the musket, it’s similar, but it’s slower, and you can anticipate the shot. By the time the cock goes down, the flint strikes the hammer, it ignites the powder in the pan, which goes through the vent and sets off the charge—there’s a lot more time involved in that. So you can anticipate and flinch. You may not necessarily get the best shot as you would on a more modern rifle. There’s still a lot of kick, and there’s a lot more smoke because of the black powder that’s being used. With modern smokeless powder, you have very little smoke compared to the muskets.
Ars Technica: It’s often said that throughout the history of warfare, whoever has the superior weapons wins. This series presents a more nuanced picture of how such conflicts play out.
John Hargreaves making David Bushnell’s submarine bomb. GBH/NOVA
Joel Bohy: In the Revolutionary War, you have both sides basically using the same type of firearm. Yes, some were using rifles, depending on what region you were from, and units in the British Army used rifles. But for the most part, they’re all using flintlock mechanisms and smoothbore guns. What comes into play in the Revolution is, on the [Continental] side, they don’t have the supply of arms that the British do. There was an embargo in place in 1774 so that no British arms could be shipped into Boston and North America. So you have a lot of innovation with gunsmiths and blacksmiths and clockmakers, who were taking older gun parts, barrels, and locks and building a functional firearm.
You saw a lot of the Americans at the beginning of the war trying to scrape through with these guns made from old parts and cobbled together. They’re functional. We didn’t really have that lock-making and barrel-making industry here. A lot of that stuff we had imported. So even if a gun was being made here, the firing mechanism and the barrels were imported. So we had to come up with another way to do it.
We started to receive a trickle of arms from the French in 1777, and to my mind, that’s what helped change the outcome of the war. Not only did we have French troops arriving, but we also had French cloth, shoes, hats, tin, powder, flints, and a ton of arms being shipped in. The French took all of their old guns from their last model that they had issued to the army, and they basically sold them all to us. So we had this huge influx of French arms that helped resupply us and made the war viable for us.
Close-up of a cannon firing. GBH/NOVA
Ars Technica: There are a lot of popular misconceptions about the history of the American Revolution. What are a couple of things that you wish more Americans understood about that conflict?
Joel Bohy: The onset of the American Revolution, April 1775, when the war began—these weren’t just a bunch of farmers who grabbed their rifle from over the fireplace and went out and beat the British Army. These people had been training and arming themselves for a long time. They had been doing it for generations before in wars with Native forces and the French since the 17th century. So by the time the Revolution broke out, they were as prepared as they could be for it.
“The rifle won the Revolution” is one of the things that I hear. No, it didn’t. Like I said, the French arms coming in helped us win the Revolution. A rifle is a tool, just like a smoothbore musket is. It has its benefits and it has its downfalls. It’s slower to load, you can’t mount a bayonet on it, but it’s more accurate, whereas the musket, you can load and fire faster, and you can mount a bayonet. So the gun that really won the Revolution was the musket, not the rifle.
It’s all well and good to be proud of being an American and our history and everything else, but these people just didn’t jump out of bed and fight. These people were training, they were drilling, they were preparing and arming and supplying not just arms, but food, cloth, tents, things that they would need to continue to have an army once the war broke out. It wasn’t just a big—poof—this happened and we won.
Revolutionary War Weapons is now streaming on YouTube and is also available on PBS.
Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.
“It’s making our work unsafe, and it’s unsanitary for any workplace,” but especially an active laboratory full of fire-reactive chemicals and bacteria, one Montlake researcher said.
Press officers at NOAA, the Commerce Department, and the White House did not respond to requests for comment.
Montlake employees were informed last week that a contract for safety services — which includes the staff who move laboratory waste off-campus to designated disposal sites — would lapse after April 9, leaving just one person responsible for this task. Hazardous waste “pickups from labs may be delayed,” employees were warned in a recent email.
The building maintenance team’s contract expired Wednesday, which decimated the staff that had handled plumbing, HVAC, and the elevators. Other contracts lapsed in late March, leaving the Seattle lab with zero janitorial staff and a skeleton crew of IT specialists.
During a big staff meeting at Montlake on Wednesday, lab leaders said they had no updates on when the contracts might be renewed, one researcher said. They also acknowledged it was unfair that everyone would need to pitch in on janitorial duties on top of their actual jobs.
Nick Tolimieri, a union representative for Montlake employees, said the problem is “all part of the large-scale bullying program” to push out federal workers. It seems like every Friday “we get some kind of message that makes you unable to sleep for the entire weekend,” he said. Now, with these lapsed contracts, it’s getting “more and more petty.”
The problems, large and small, at Montlake provide a case study of the chaos that’s engulfed federal workers across many agencies as the Trump administration has fired staff, dumped contracts, and eliminated long-time operational support. Yesterday, hundreds of NOAA workers who had been fired in February, then briefly reinstated, were fired again.