
Rover finds hints of an ancient Martian carbon cycle

The Curiosity mission started near the bottom of the crater, at the base of a formation called Aeolis Mons, or Mount Sharp, where NASA expected to find the earliest geological samples. The idea then was to climb up Mount Sharp and collect samples from later and later geological periods at increasing elevations, tracing the history of habitability and the great drying up of Mars. On the way, the carbon missed by the satellites was finally found.

An imperfect cycle

Tutolo’s team focused their attention on four sediment samples Curiosity drilled after climbing over a kilometer up Mount Sharp. The samples were examined with the rover’s Chemistry and Mineralogy instrument, which uses X-ray diffraction to determine their composition. It turned out the samples contained roughly 5 to 10 percent siderite. “It was an iron carbonate, directly analogous to a mineral called calcite found in sedimentary rocks like limestone. The difference is it has iron in its cation site rather than calcium,” Tutolo explained. “We expected that because Mars is much richer in iron—that’s why it is the red planet.”

The siderite found in the samples was also pure, which Tutolo thinks indicates it formed through an evaporation process akin to what we see in evaporating lakes on Earth. This, in turn, is the first evidence we’ve found of an ancient Martian carbon cycle. “Now we have evidence that confirms the models,” Tutolo claims. Carbon from the atmosphere was being sequestered in the rocks on Mars just as it is on Earth. The problem was that, unlike on Earth, it couldn’t get back out of those rocks.

“On Earth, whenever oceanic plates get subducted into the mantle, all of the limestone that was formed before gets cooked off, and the carbon dioxide gets back to the atmosphere through volcanoes,” Tutolo explains. Mars, on the other hand, has never had efficient plate tectonics. A large portion of carbon that got trapped in Martian rocks stayed in those rocks forever, thinning out the atmosphere. While it’s likely the red planet had its own carbon cycle, it was an imperfect one that eventually turned it into the lifeless desert it is today.


Women rely partly on smell when choosing friends

For their study, Gaby et al. organized an on-campus “Speed-Friending” event for 40 female volunteers, consisting of four distinct phases. First, participants had their headshots taken. Next, they looked at pictures of all the other women participating and rated friendship potential based solely on visual cues. Then each woman wore a T-shirt for 12 hours as she went about her daily activities; the shirts were then collected and placed in plastic bags. Finally, participants rated the friendship potential of anonymized participants based solely on smelling each T-shirt, followed by a live session during which they interacted with each woman for four minutes and rated their friendship potential. This was followed by a second round of smelling the T-shirts and once again rating friendship potential.

The results: There was a strong, consistent correlation between the in-person evaluations of friendship potential and those based solely on smelling the T-shirts. And the ratings made after live interactions accurately predicted changes in the assessments made in the final round of odor-based testing, suggesting a learned-response element.

“Everybody showed they had a consistent signature of what they liked,” said co-author Vivian Zayas of Cornell University. “And the consistency was not that, in the group, one person smelled really bad and one person smelled really good. No, it was idiosyncratic. I might like person A over B over C based on scent, and this pattern predicts who I end up liking in the chat. People take a lot in when they’re meeting face to face. But scent—which people are registering at some level, though probably not consciously—forecasts whether you end up liking this person.”

The authors acknowledged that their study was limited to college-aged heterosexual women and that there could be differences in how olfactory and other cues function in other groups: older or younger women, non-American women, men, and so forth. “Future studies might consider a wider age range, investigate individuals at different stages of development, focus on how these cues function in male-male platonic interactions, or examine how scent in daily interactions shapes friendship judgments in other cultures,” they wrote.

Scientific Reports, 2025. DOI: 10.1038/s41598-025-94350-1


Climate change will make rice toxic, say researchers

For six years, Ziska and a large team of research colleagues in China and the US grew rice in controlled fields, subjecting it to varying levels of carbon dioxide and temperature. They found that when both increased, in line with projections by climate scientists, the amount of arsenic and inorganic arsenic in rice grains also went up.

Arsenic is found naturally in some foods, including fish and shellfish, and in waters and soils.

Inorganic arsenic is found in industrial materials and gets into water—including water used to submerge rice paddies.

Rice fields are easily overrun by weeds and other plants, but rice has one advantage: It grows well in water. So farmers germinate the seeds and, when the seedlings are ready, plant them in wet soil. They then flood their fields, which suppresses the weeds but allows the rice to flourish. Rice readily absorbs the water and everything in it—including arsenic, whether naturally occurring or not. Most of the world’s rice is grown this way.

The new research demonstrates that climate change will ramp up those levels.

“What happens in rice, because of complex biogeochemical processes in the soil, when temperatures and CO2 go up, inorganic arsenic also does,” Ziska said. “And it’s this inorganic arsenic that poses the greatest health risk.”

Exposure to inorganic arsenic has been linked to cancers of the skin, bladder, and lung, heart disease, and neurological problems in infants. Research has found that in parts of the world with high consumption of rice, inorganic arsenic increases cancer risk.


Looking at the Universe’s dark ages from the far side of the Moon


meet you in the dark side of the moon

Building an observatory on the Moon would be a huge challenge—but it would be worth it.


Credit: Aurich Lawson | Getty Images

There is a signal, born in the earliest days of the cosmos. It’s weak. It’s faint. It can barely register on even the most sensitive of instruments. But it contains a wealth of information about the formation of the first stars, the first galaxies, and the mysteries of the origins of the largest structures in the Universe.

Despite decades of searching for this signal, astronomers have yet to find it. The problem is that our Earth is too noisy, making it nearly impossible to capture this whisper. The solution is to go to the far side of the Moon, using its bulk to shield our sensitive instruments from the cacophony of our planet.

Building telescopes on the far side of the Moon would be the greatest astronomical challenge ever considered by humanity. And it would be worth it.

The science

We have been scanning and mapping the wider cosmos for a century now, ever since Edwin Hubble discovered that the Andromeda “nebula” is actually a galaxy sitting 2.5 million light-years away. Our powerful Earth-based observatories have successfully mapped the detailed locations of millions of galaxies, and upcoming observatories like the Vera C. Rubin Observatory and Nancy Grace Roman Space Telescope will map millions more.

And for all that effort, all that technological might and scientific progress, we have surveyed less than 1 percent of the volume of the observable cosmos.

The vast bulk of the Universe will remain forever unobservable to traditional telescopes. The reason is twofold. First, most galaxies will simply be too dim and too far away. Even the James Webb Space Telescope, which is explicitly designed to observe the first generation of galaxies, has such a limited field of view that it can only capture a handful of targets at a time.

Second, there was a time, within the first few hundred million years after the Big Bang, before stars and galaxies had even formed. Dubbed the “cosmic dark ages,” this time naturally makes for a challenging astronomical target because there weren’t exactly a lot of bright sources to generate light for us to look at.

But there was neutral hydrogen. Hydrogen is the most common element in the cosmos, making up most of the Universe’s ordinary matter. Today, almost all of that hydrogen is ionized, existing in a super-heated plasma state. But before the first stars and galaxies appeared, the cosmic reserves of hydrogen were cool and neutral.

Neutral hydrogen is made of a single proton and a single electron. Each of these particles has a quantum property known as spin (which kind of resembles the familiar, macroscopic property of spin, but it’s not quite the same—though that’s a different article). In its lowest-energy state, the proton and electron will have spins oriented in opposite directions. But sometimes, through pure random quantum chance, the electron will spontaneously flip around. Very quickly, the hydrogen notices and gets the electron to flip back to where it belongs. This process releases a small amount of energy in the form of a photon with a wavelength of 21 centimeters.
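The numbers behind that spin-flip photon are easy to check. Here is a minimal sketch using standard physical constants (the 21.106 cm wavelength is the textbook figure for the hydrogen hyperfine line, not a value from this article):

```python
# Frequency and energy of the 21-cm hyperfine photon, from f = c / lambda
# and E = h * f. Constants are CODATA values.
H_PLANCK = 6.62607015e-34   # Planck constant, J*s
C_LIGHT = 2.99792458e8      # speed of light, m/s
EV_IN_JOULES = 1.602176634e-19
WAVELENGTH_M = 0.21106      # rest wavelength of the hydrogen line

frequency_hz = C_LIGHT / WAVELENGTH_M
energy_ev = H_PLANCK * frequency_hz / EV_IN_JOULES

print(frequency_hz / 1e6)  # ~1420 MHz, the famous "hydrogen line"
print(energy_ev * 1e6)     # a few micro-eV: a truly tiny energy release
```

The micro-electron-volt scale of this transition is why the text calls the signal weak: each individual photon carries almost no energy, and the flip itself is exceedingly rare.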

This quantum transition is exceedingly rare, but with enough neutral hydrogen, you can build a substantial signal. Indeed, observations of 21-cm radiation have been used extensively in astronomy, especially to build maps of cold gas reservoirs within the Milky Way.

So the cosmic dark ages aren’t entirely dark; those clouds of primordial neutral hydrogen are emitting tremendous amounts of 21-cm radiation. But that radiation was emitted in the distant past, well over 13 billion years ago. As it has traveled through the cosmic distances, all those billions of light-years on its way to our eager telescopes, it has experienced the redshift effects of our expanding Universe.

By the time that dark age 21-cm radiation reaches us, it has stretched by a factor of 10, turning the neutral hydrogen signal into radio waves with wavelengths of around 2 meters.
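That factor-of-10 stretch is just the cosmological redshift relation, where wavelengths grow by the expansion factor (1 + z). A quick sketch of the arithmetic the paragraph describes (the choice z = 9 corresponds exactly to a factor-of-10 stretch; the dark ages span a range of redshifts):

```python
# Redshifted wavelength and frequency of the 21-cm line:
# lambda_obs = lambda_rest * (1 + z), f_obs = f_rest / (1 + z).
REST_WAVELENGTH_CM = 21.106
REST_FREQUENCY_MHZ = 1420.406

def observed_wavelength_cm(z):
    """Wavelength after being stretched by the expansion factor (1 + z)."""
    return REST_WAVELENGTH_CM * (1 + z)

def observed_frequency_mhz(z):
    """Frequency is divided by the same expansion factor."""
    return REST_FREQUENCY_MHZ / (1 + z)

print(observed_wavelength_cm(9))  # ~211 cm, i.e. around 2 meters
print(observed_frequency_mhz(9))  # ~142 MHz
```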

The astronomy

Humans have become rather fond of radio transmissions in the past century. Unfortunately, the peak of this primordial signal from the dark ages sits right below the FM dial of your radio, which pretty much makes it impossible to detect from Earth. Our emissions are simply too loud, too noisy, and too difficult to remove. Teams of astronomers have devised clever ways to reduce or eliminate interference, featuring arrays scattered around the most desolate deserts in the world, but they have not been able to confirm the detection of a signal.

So those astronomers have turned in desperation to the quietest desert they can think of: the far side of the Moon.

It wasn’t until 1959 that the Soviet Luna 3 probe gave us our first glimpse of the Moon’s far side, and not until 2019 that the Chang’e 4 mission made the first soft landing there. Compared to the near side, and especially low-Earth orbit, there is very little human activity there. We’ve had more active missions on the surface of Mars than on the lunar far side.

Chang’e-4 landing zone on the far side of the moon. Credit: Xiao Xiao and others (CC BY 4.0)

And that makes the far side of the Moon the ideal location for a dark-age-hunting radio telescope, free from human interference and noise.

Ideas abound to make this a possibility. The first serious attempt was DARE, the Dark Ages Radio Explorer. Rather than attempting the audacious goal of building an actual telescope on the surface, DARE was a NASA-funded concept to develop an observatory (and when it comes to radio astronomy, an “observatory” can be as simple as a single antenna) to orbit the Moon and take data when it’s on the opposite side of the Moon from the Earth.

For various bureaucratic reasons, NASA didn’t develop the DARE concept further. But creative astronomers have put forward even bolder proposals.

The FarView concept, for example, is a proposed radio telescope array that would dwarf anything on the Earth. It would be sensitive to frequency ranges between 5 and 40 MHz, allowing it to target the dark ages and the birth of the first stars. The proposed design contains 100,000 individual elements, with each element consisting of a single, simple dipole antenna, dispersed over a staggering 200 square kilometers. It would be infeasible to deliver that many antennae directly to the surface of the Moon. Instead, we’d have to build them, mining lunar regolith and turning it into the necessary components.

The design of this array is what’s called an interferometer. Instead of a single big dish, the individual antennae collect data on their own and then correlate all their signals together later. The effective resolution of an interferometer is the same as a single dish as big as the widest distance among the elements. The downside of an interferometer is that most of the incoming radiation just hits dirt (or in this case, lunar regolith), so the interferometer has to collect a lot of data to build up a decent signal.
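The resolution rule of thumb the paragraph invokes can be sketched as follows. The 14 km baseline is an assumption inferred from the quoted 200 square kilometers (roughly the side of a square of that area), not a published FarView figure:

```python
import math

# Diffraction-limited angular resolution: theta ~ wavelength / baseline.
# For an interferometer, the "baseline" is the widest spacing between
# elements, which plays the role of a single dish's diameter.
def angular_resolution_arcmin(wavelength_m, baseline_m):
    """Approximate resolution in arcminutes for a given baseline."""
    return math.degrees(wavelength_m / baseline_m) * 60

# Assumed, FarView-like numbers: a ~2 m observing wavelength and a
# baseline of order 14 km (sqrt of 200 km^2).
print(angular_resolution_arcmin(2.0, 14_000))  # about half an arcminute
```

The same formula shows why long wavelengths are so punishing: at 2 meters, matching the resolution of an optical telescope would take baselines millions of times longer than an optical mirror's diameter.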

Attempts at these kinds of observations on Earth require constant maintenance and cleaning to remove radio interference, and that interference has essentially sunk all efforts to measure the dark ages. But a lunar-based interferometer will have all the time in the world it needs, providing a much cleaner and easier-to-analyze stream of data.

If you’re not in the mood for building 100,000 antennae on the Moon’s surface, then another proposal seeks to use the Moon’s natural features—namely, its craters. If you squint hard enough, they kind of look like radio dishes already. The idea behind the project, named the Lunar Crater Radio Telescope, is to find a suitable crater and use it as the support structure for a gigantic, kilometer-wide telescope.

This idea isn’t without precedent. Both the beloved Arecibo and the newcomer FAST observatories used depressions in the natural landscape of Puerto Rico and China, respectively, to take most of the structural load off the engineering of their giant dishes. The Lunar Crater Radio Telescope would be larger than both of those combined, and it would be tuned to hunt for dark ages radio signals that we can’t observe using Earth-based observatories because they simply bounce off the Earth’s ionosphere (even before we have to worry about any additional human interference). Essentially, the only way that humanity can access those wavelengths is by going beyond our ionosphere, and the far side of the Moon is the best place to park an observatory.

The engineering

The engineering challenges we need to overcome to achieve these scientific dreams are not small. So far, humanity has soft-landed only a single mission on the far side of the Moon, and both of these proposals require an immense upgrade to our capabilities. That’s exactly why both far-side concepts were funded by NIAC, NASA’s Innovative Advanced Concepts program, which gives grants to researchers who need time to flesh out high-risk, high-reward ideas.

With NIAC funds, the designers of the Lunar Crater Radio Telescope, led by Saptarshi Bandyopadhyay at the Jet Propulsion Laboratory, have already thought of the challenges they will need to overcome to make the mission a success. Their mission leans heavily on another JPL concept, the DuAxel, which consists of a rover that can split into two single-axel rovers connected by a tether.

To build the telescope, several DuAxels are sent to the crater. One of each pair “sits” to anchor itself on the crater wall, while the other crawls down the slope. At the center, they meet a telescope lander that has deployed guide wires and the wire-mesh frame of the telescope (again, it helps for assembly purposes that radio dishes are just strings of metal in various arrangements). The pairs on the crater rim then hoist their companions back up, unfolding the mesh and lofting the receiver above the dish.

The FarView observatory is a much more capable instrument—if deployed, it would be the largest radio interferometer ever built—but it’s also much more challenging. Led by Ronald Polidan of Lunar Resources, Inc., it relies on in-situ manufacturing processes. Autonomous vehicles would dig up regolith, process and refine it, and spit out all the components that make an interferometer work: the 100,000 individual antennae, the kilometers of cabling to run among them, the solar arrays to power everything during lunar daylight, and batteries to store energy for round-the-lunar-clock observing.

If that sounds intense, it’s because it is, and it doesn’t stop there. An astronomical telescope is more than a data collection device. It also needs to crunch some numbers and get that precious information back to a human to actually study it. That means that any kind of far side observing platform, especially the kinds that will ingest truly massive amounts of data such as these proposals, would need to make one of two choices.

Choice one is to perform most of the data correlation and processing on the lunar surface, sending back only highly refined products to Earth for further analysis. Achieving that would require landing, installing, and running what is essentially a supercomputer on the Moon, which comes with its own weight, robustness, and power requirements.

The other choice is to keep the installation as lightweight as possible and send the raw data back to Earthbound machines to handle the bulk of the processing and analysis tasks. This kind of data throughput is outright impossible with current technology but could be achieved with experimental laser-based communication strategies.

The future

Astronomical observatories on the far side of the Moon face a bit of a catch-22. To deploy and run a world-class facility, either embedded in a crater or strung out over the landscape, we need some serious lunar manufacturing capabilities. But those same capabilities come with all the annoying radio fuzz that already bedevils Earth-based radio astronomy.

Perhaps the best solution is to open up the Moon to commercial exploitation but maintain the far side as a sort of out-world nature preserve, owned by no company or nation, left to scientists to study and use as a platform for pristine observations of all kinds.

It will take humanity several generations, if not more, to develop the capabilities needed to finally build far-side observatories. But it will be worth it, as those facilities will open up the unseen Universe for our hungry eyes, allowing us to pierce the ancient fog of our Universe’s past, revealing the machinations of hydrogen in the dark ages, the birth of the first stars, and the emergence of the first galaxies. It will be a fountain of cosmological and astrophysical data, the richest possible source of information about the history of the Universe.

Ever since Galileo ground and polished his first lenses, and through the innovations that led to the explosion of digital cameras, astronomy has had a storied tradition of turning the technological triumphs needed to achieve science goals into the foundations of everyday devices that make life on Earth much better. If we’re looking for reasons to industrialize and inhabit the Moon, the noble goal of pursuing a better understanding of the Universe makes for a fine motivation. And we’ll all be better off for it.

Photo of Paul Sutter


The physics of bowling strike after strike

More than 45 million people in the US are fans of bowling, with national competitions awarding millions of dollars. Bowlers usually rely on instinct and experience, earned through lots and lots of practice, to boost their strike percentage. A team of physicists has come up with a mathematical model to better predict ball trajectories, outlined in a new paper published in the journal AIP Advances. The resulting equations take into account such factors as the composition and resulting pattern of the oil used on bowling lanes, as well as the inevitable asymmetries of bowling balls and player variability.

The authors already had a strong interest in bowling. Three are regular bowlers and quite skilled at the sport; a fourth, Curtis Hooper of Loughborough University in the UK, is a coach for Team England at the European Youth Championships. Hooper has been studying the physics of bowling for several years, including an analysis of the 2017 Weber Cup, as well as papers devising mathematical models for the application of lane conditioners and oil patterns in bowling.

The calculations involved in such research are very complicated because there are so many variables that can affect a ball’s trajectory after being thrown. Case in point: the thin layer of oil that is applied to bowling lanes, which Hooper found can vary widely in volume and shape among different venues, plus the lack of uniformity in applying the layer, which creates an uneven friction surface.

Per the authors, most research to date has relied on statistically analyzing empirical data, such as a 2018 report by the US Bowling Congress that looked at data generated by 37 bowlers. (Hooper relied on ball-tracking data for his 2017 Weber Cup analysis.) A 2009 analysis showed that the optimal location for the ball to strike the headpin is about 6 centimeters off-center, while the optimal entry angle is about 6 degrees. However, such an approach struggles to account for the inevitable player variability. No bowler hits their target 100 percent of the time, and per Hooper et al., while the best professionals can come within 0.1 degrees of the optimal launch angle, this slight variation can nonetheless result in a difference of several centimeters down-lane.
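The down-lane effect of that 0.1-degree error is simple trigonometry: a small angular error grows linearly with distance. A sketch, assuming a regulation 60-foot (about 18.29 m) lane from foul line to headpin:

```python
import math

# Lateral offset at the pins from a launch-angle error:
# offset ~ distance * tan(error).
LANE_LENGTH_M = 18.29  # regulation 60-foot lane

def lateral_offset_cm(angle_error_deg, distance_m=LANE_LENGTH_M):
    """Sideways miss, in cm, produced by an angular error over the lane."""
    return distance_m * math.tan(math.radians(angle_error_deg)) * 100

# Even the 0.1-degree accuracy of top professionals works out to
# roughly 3 cm at the pins, half the quoted 6 cm optimal offset.
print(lateral_offset_cm(0.1))
```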


Here’s how a satellite ended up as a ghostly apparition on Google Earth

Regardless of the identity of the satellite, this image is remarkable for several reasons.

First, despite so many satellites flying in space, it’s still rare to see a real picture—not just an artist’s illustration—of what one actually looks like in orbit. For example, SpaceX has released photos of Starlink satellites in launch configuration, where dozens of the spacecraft are stacked together to fit inside the payload compartment of the Falcon 9 rocket. But there are fewer well-resolved views of a satellite in its operational environment, with solar arrays extended like the wings of a bird.

This is changing as commercial companies place more and more imaging satellites in orbit. Several companies provide “non-Earth imaging” services by repurposing Earth observation cameras to view other objects in space. These views can reveal information that can be useful in military or corporate espionage.

Second, the Google Earth capture offers a tangible depiction of a satellite’s speed. An object in low-Earth orbit must travel at more than 17,000 mph (more than 27,000 km per hour) to keep from falling back into the atmosphere.
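Those speed figures follow from the circular-orbit relation v = sqrt(GM/r). A sketch; the 700 km altitude is an assumed, Pleiades-like value for illustration, not a figure stated here:

```python
import math

# Circular orbital speed v = sqrt(GM / r) at a given altitude.
GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH_M = 6.371e6        # mean Earth radius, m

def orbital_speed_kmh(altitude_km):
    """Speed of a circular orbit at the given altitude, in km/h."""
    r = R_EARTH_M + altitude_km * 1e3
    return math.sqrt(GM_EARTH / r) * 3.6  # m/s -> km/h

# An assumed ~700 km sun-synchronous orbit: roughly 27,000 km/h.
print(orbital_speed_kmh(700))
```

Note that lower orbits are faster: the same formula gives a slightly higher speed at 400 km than at 700 km, which is why objects that dip into the atmosphere cannot simply slow down and stay up.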

While the B-2’s motion caused it to appear a little smeared in the Google Earth image a few years ago, the satellite’s velocity created a different artifact. The satellite appears five times in different colors, which tells us something about how the image was made. Airbus’ Pleiades satellites take pictures in multiple spectral bands: blue, green, red, panchromatic, and near-infrared.

At lower left, the black outline of the satellite is the near-infrared capture. Moving up, you can see the satellite in red, blue, and green, followed by the panchromatic, or black-and-white, snapshot with the sharpest resolution. Typically, the Pleiades satellites record these images a split-second apart and combine the colors to generate an accurate representation of what the human eye might see. But this doesn’t work so well for a target moving at nearly 5 miles per second.
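The ghosting follows directly from that speed: each band is exposed a split-second apart, and the satellite covers a lot of ground in between. A sketch of the arithmetic; the 0.15-second inter-band delay is purely an assumed value for illustration, not a published Pleiades specification:

```python
# Apparent displacement of a fast-moving target between two spectral-band
# exposures is just speed * delay.
SPEED_M_PER_S = 7_600  # ~ "nearly 5 miles per second," as quoted above

def band_displacement_m(delay_s):
    """Distance the target moves between two band captures, in meters."""
    return SPEED_M_PER_S * delay_s

# With an assumed 0.15 s between bands, the satellite moves over a
# kilometer -- far more than its own size, so the five band images
# cannot be stacked into one clean color picture.
print(band_displacement_m(0.15))
```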


After Harvard says no to feds, $2.2 billion of research funding put on hold

The Trump administration has been using federal research funding as a cudgel. The government has blocked billions of dollars in research funds and threatened to put a hold on even more in order to compel universities to adopt what it presents as essential reforms. In the case of Columbia University, that includes changes in the leadership of individual academic departments.

On Friday, the government sent a list of demands that it presented as necessary to “maintain Harvard’s financial relationship with the federal government.” On Monday, Harvard responded that accepting these demands would “allow itself to be taken over by the federal government.” The university also changed its home page into an extensive tribute to the research that would be eliminated if the funds were withheld.

In response, the Trump administration later put $2.2 billion of Harvard’s research funding on hold.

Diversity, but only the right kind

Harvard posted the letter it received from federal officials, listing their demands. Some of it is what you expect, given the Trump administration’s interests. The admissions and hiring departments would be required to drop all diversity efforts, with data on faculty and students to be handed over to the federal government for auditing. As at other institutions, there are also some demands presented as efforts against antisemitism, such as the defunding of pro-Palestinian groups. More generally, it demands that university officials “prevent admitting students hostile to the American values and institutions.”

There are also a bunch of basic culture war items, such as a demand for a mask ban, and a ban on “de-platforming” speakers on campus. In addition, the government wants the university to screen all faculty hires for plagiarism issues, which is what caused Harvard’s former president to resign after she gave testimony to Congress. Any violation of these updated conduct codes by a non-citizen would require an immediate report to the Department of Homeland Security and State Department, presumably so they can prepare to deport them.


Lunar Gateway’s skeleton is complete—its next stop may be Trump’s chopping block

Officials blame changing requirements for much of the delays and rising costs. NASA managers dramatically changed their plans for the Gateway program in 2020, when they decided to launch the PPE and HALO on the same rocket, prompting major changes to their designs.

Jared Isaacman, Trump’s nominee for NASA administrator, declined to commit to the Gateway program during a confirmation hearing before the Senate Commerce Committee on April 9. Sen. Ted Cruz (R-Texas), the committee’s chairman, pressed Isaacman on the Lunar Gateway. Cruz is one of the Gateway program’s biggest backers in Congress since it is managed by Johnson Space Center in Texas. If it goes ahead, Gateway would guarantee numerous jobs at NASA’s mission control in Houston throughout its 15-year lifetime.

“That’s an area that, if I’m confirmed, I would love to roll up my sleeves and further understand what’s working, right?” Isaacman replied to Cruz. “What are the opportunities the Gateway presents to us? And where are some of the challenges, because I think the Gateway is a component of many programs that are over budget and behind schedule.”

The pressure shell for the Habitation and Logistics Outpost (HALO) module arrived in Gilbert, Arizona, last week for internal outfitting. Credit: NASA/Josh Valcarcel

Checking in with Gateway

Nevertheless, the Gateway program achieved a milestone one week before Isaacman’s confirmation hearing. The metallic pressure shell for the HALO module was shipped from its factory in Italy to Arizona. The HALO module is only partially complete, and it lacks life support systems and other hardware it needs to operate in space.

Over the next couple of years, Northrop Grumman will outfit the habitat with those components and connect it with the Power and Propulsion Element under construction at Maxar Technologies in Silicon Valley. This stage of spacecraft assembly, along with prelaunch testing, often uncovers problems that can drive up costs and trigger more delays.

Ars recently spoke with Jon Olansen, a bio-mechanical engineer and veteran space shuttle flight controller who now manages the Gateway program at Johnson Space Center. A transcript of our conversation with Olansen is below. It is lightly edited for clarity and brevity.

Ars: The HALO module has arrived in Arizona from Italy. What’s next?

Olansen: This HALO module went through significant effort from the primary and secondary structure perspective out at Thales Alenia Space in Italy. That was most of their focus in getting the vehicle ready to ship to Arizona. Now that it’s in Arizona, Northrop is setting it up in their facility there in Gilbert to be able to do all of the outfitting of the systems we need to actually execute the missions we want to do, keep the crew safe, and enable the science that we’re looking to do. So, if you consider your standard spacecraft, you’re going to have all of your command-and-control capabilities, your avionics systems, your computers, your network management, all of the things you need to control the vehicle. You’re going to have your power distribution capabilities. HALO attaches to the Power and Propulsion Element, and it provides the primary power distribution capability for the entire station. So that’ll all be part of HALO. You’ll have your standard thermal systems for active cooling. You’ll have the vehicle environmental control systems that will need to be installed, [along with] some of the other crew systems that you can think of, from lighting, restraint, mobility aids, all the different types of crew systems. Then, of course, all of our science aspects. So we have payload lockers, both internally, as well as payload sites external that we’ll have available, so pretty much all the different systems that you would need for a human-rated spacecraft.

Ars: What’s the latest status of the Power and Propulsion Element?

Olansen: PPE is fairly well along in their assembly and integration activities. The central cylinder has been integrated with the propulsion tanks… Their propulsion module is in good shape. They’re working on the avionics shelves associated with that spacecraft. So, with both vehicles, we’re really trying to get the assembly done in the next year or so, so we can get into integrated spacecraft testing at that point in time.

Ars: What’s in the critical path in getting to the launch pad?

Olansen: The assembly and integration activity is really the key for us. It’s to get to the full vehicle level test. All the different activities that we’re working on across the vehicles are making substantive progress. So, it’s a matter of bringing them all in and doing the assembly and integration in the appropriate sequences, so that we get the vehicles put together the way we need them and get to the point where we can actually power up the vehicles and do all the testing we need to do. Obviously, software is a key part of that development activity, once we power on the vehicles, making sure we can do all the control work that we need to do for those vehicles.

[There are] a couple of key pieces I will mention along those lines. On the PPE side, we have the electrical propulsion system. The thrusters associated with that system are being delivered. Those will go through acceptance testing at the Glenn Research Center [in Ohio] and then be integrated on the spacecraft out at Maxar; so that work is ongoing as we speak. ESA is providing the HALO lunar communication system. That’ll be delivered later this year. That’ll be installed on HALO as part of its integrated test and checkout and then launch on HALO. That provides the full communication capability down to the lunar surface for us, where PPE provides the communication capability back to Earth. So, those are key components that we’re looking to get delivered later this year.

Jon Olansen, manager of NASA’s Gateway program at Johnson Space Center in Houston. Credit: NASA/Andrew Carlsen

Ars: What’s the status of the electric propulsion thrusters for the PPE?

Olansen: The first one has actually been delivered already, so we’ll have the opportunity to go through, like I said, the acceptance testing for those. The other flight units are right on the heels of the first one that was delivered. They’ll make it through their acceptance testing, then get delivered to Maxar, like I said, for integration into PPE. So, that work is already in progress. [The Power and Propulsion Element will have three xenon-fueled 12-kilowatt Hall thrusters produced by Aerojet Rocketdyne, and four smaller 6-kilowatt thrusters.]

Ars: The Government Accountability Office (GAO) outlined concerns last year about keeping the mass of Gateway within the capability of its rocket. Has there been any progress on that issue? Will you need to remove components from the HALO module and launch them on a future mission? Will you narrow your launch windows to only launch on the most fuel-efficient trajectories?

Olansen: We’re working the plan. Now that we’re launching the two vehicles together, we’re working mass management. Mass management is always an issue with spacecraft development, so it’s no different for us. All of the things you described are all knobs that are in the trade space as we proceed, but fundamentally, we’re working to design the optimal spacecraft that we can, first. So, that’s the key part. As we get all the components delivered, we can measure mass across all of those components, understand what our integrated mass looks like, and we have several different options to make sure that we’re able to execute the mission we need to execute. All of those will be balanced over time based on the impacts that are there. There’s not a need for a lot of those decisions to happen today. Those that are needed from a design perspective, we’ve already made. Those that are needed from enabling future decisions, we’ve already made all of those. So, really, what we’re working through is being able to, at the appropriate time, make decisions necessary to fly the vehicle the way we need to, to get out to NRHO [Near Rectilinear Halo Orbit, an elliptical orbit around the Moon], and then be able to execute the Artemis missions in the future.

Ars: The GAO also discussed a problem with Gateway’s controllability with something as massive as Starship docked to it. What’s the latest status of that problem?

Olansen: There are a number of different risks that we work through as a program, as you’d expect. We continue to look at all possibilities and work through them with due diligence. That’s our job, to be able to do that on a daily basis. With the stack controllability [issue], where that came from for GAO, we were early in the assessments of what the potential impacts could be from visiting vehicles, not just any one [vehicle] but any visiting vehicle. We’re a smaller space station than ISS, so making sure we understand the implications of thruster firings as vehicles approach the station, and the implications associated with those, is where that stack controllability conversation came from.

The bus that Maxar typically designs generally doesn’t have to deal with docking. Part of what we’ve been doing is working through ways that we can use the capabilities that are already built into that spacecraft differently to provide us the control authority we need when we have visiting vehicles, as well as working with the visiting vehicles and their design to make sure that they’re minimizing the impact on the station. So, the combination of those two has largely, over the past year since that report came out, improved where we are from a stack controllability perspective. We still have forward work to close out all of the different potential cases that are there. We’ll continue to work through those. That’s standard forward work, but we’ve been able to make some updates, some software updates, some management updates, and logic updates that really allow us to control the stack effectively and have the right amount of control authority for the dockings and undockings that we will need to execute for the missions.



Live demos test effectiveness of Revolutionary War weapons


Not just men with muskets

Pitting the Brown Bess against the long rifle, testing the first military submarine, and more.

The colonial victory against the British in the American Revolutionary War was far from a predetermined outcome. In addition to good strategy and the timely appearance of key allies like the French, Continental soldiers relied on several key technological innovations in weaponry. But just how accurate is an 18th-century musket when it comes to hitting a target? Did the rifle really determine the outcome of the war? And just how much damage did cannon inflict? A team of military weapons experts and re-enactors set about testing some of those questions in a new NOVA documentary, Revolutionary War Weapons.

The documentary examines the firing range and accuracy of Brown Bess muskets and long rifles used by both the British and the Continental Army during the Battles of Lexington and Concord; the effectiveness of Native American tomahawks for close combat (no, they were usually not thrown as depicted in so many popular films, but there are modern throwing competitions today); and the effectiveness of cannons against the gabions and other defenses employed to protect the British fortress during the pivotal Siege of Yorktown. There is even a fascinating segment on the first military submarine, dubbed “the Turtle,” created by American inventor David Bushnell.

To capture all the high-speed ballistics action, director Stuart Powell relied upon a range of high-speed cameras called the Phantom Range. “It is like a supercomputer,” Powell told Ars. “It is a camera, but it doesn’t feel like a camera. You need to be really well-coordinated on the day when you’re using it because it bursts for, like, 10 seconds. It doesn’t record constantly because it’s taking so much data. Depending on what the frame rate is, you only get a certain amount of time. So you’re trying to coordinate that with someone trying to fire a 250-year-old piece of technology. If the gun doesn’t go off, if something goes wrong on set, you’ll miss it. Then it takes five minutes to reboot and get ready for the new shot. So a lot of the shoot revolves around the camera; that’s not normally the case.”

Constraints to keep the run time short meant that not every experiment the crew filmed ended up in the final documentary, according to Powell. For instance, there was one experiment in a hypoxia chamber for the segment on the Turtle, meant to see how long a person could function once the sub had descended, limiting the oxygen supply. “We felt there was slightly too much on the Turtle,” said Powell. “It took up a third of the whole film.” Also cut, for similar reasons, were power demonstrations for the musket, using boards instead of ballistic gel. But these cuts were anomalies in the tightly planned shooting schedule; most of the footage found its way onscreen.

The task of setting up all those field experiments fell to experts like military historian and weapons expert Joel Bohy, who is a frequent appraiser for Antiques Roadshow. We caught up with Bohy to learn more.

Redcoat re-enactors play out the Battle of Lexington. GBH/NOVA

Ars Technica: Obviously you can’t work with the original weapons because they’re priceless. How did you go about making replicas as close as possible to the originals?

Joel Bohy: Prior to our live fire studies, I started to collect the best contemporary reproductions of all of the different arms that were used. Over the years, I’ve had these custom-built, and now I have about 14 of them, so that we can cover pretty much every different type of arm used in the Revolution. I have my pick when we want to go out to the range and shoot at ballistics gelatin. We’ve published some great papers. The latest one was in conjunction with a bullet strike study, where we used modern forensic techniques, ballistics rods and lasers, to locate where each shooter was and what caliber the gun was. We also had 18th-century house sections built and shot at the sections to replicate that damage. It was a validation study, and those firearms came in very handy.

Ars Technica: What else can we learn from these kinds of experiments?

Joel Bohy: One of the things that’s great about the archeology end of it is when we’re finding fired ammunition. I mostly volunteer with archaeologists on the Revolutionary War. One of my colleagues has worked on the Little Bighorn battlefield doing firing pin impressions, which leave a fingerprint, so he could track troopers and Native Americans across the battlefields. With [the Revolutionary War], it’s harder to do because we’re using smooth-bore guns that don’t necessarily leave a signature. But what they do leave is a caliber, and they also leave a location. We GIS all this stuff and map it, and it’s told us things about the battles that we never knew before. We just did one last August that hasn’t been released yet that changes where people thought a battle took place.

We like to combine that with our live fire studies. So when we [conduct the latter], we take a shot, then we metal detect each shot, bag it, tag it. We record all the data that we see on our musket balls that we fired so that when we’re on an archeology project, we can correlate that with what we see in the ground. We can see if it hits a tree, if it hits rocks, how close was a soldier when they fired—all based upon the deformation of the musket ball.

Ars Technica: What is the experience of shooting a replica of a musket compared to, say, a modern rifle?

Joel Bohy: It’s a lot different. When you’re firing a modern rifle, you pull the trigger and it’s very quick—a matter of milliseconds and the bullet’s downrange. With the musket, it’s similar, but it’s slower, and you can anticipate the shot. By the time the cock goes down, the flint strikes the hammer, it ignites the powder in the pan, which goes through the vent and sets off the charge—there’s a lot more time involved in that. So you can anticipate and flinch. You may not necessarily get the best shot as you would on a more modern rifle. There’s still a lot of kick, and there’s a lot more smoke because of the black powder that’s being used. With modern smokeless powder, you have very little smoke compared to the muskets.

Ars Technica: It’s often said that throughout the history of warfare, whoever has the superior weapons wins. This series presents a more nuanced picture of how such conflicts play out.

John Hargreaves making David Bushnell’s submarine bomb. GBH/Nova

Joel Bohy: In the Revolutionary War, you have both sides basically using the same type of firearm. Yes, some were using rifles, depending on what region you were from, and units in the British Army used rifles. But for the most part, they’re all using flintlock mechanisms and smoothbore guns. What comes into play in the Revolution is, on the [Continental] side, they don’t have the supply of arms that the British do. There was an embargo in place in 1774 so that no British arms could be shipped into Boston and North America. So you have a lot of innovation with gunsmiths and blacksmiths and clockmakers, who were taking older gun parts, barrels, and locks and building a functional firearm.

You saw a lot of the Americans at the beginning of the war trying to scrape through with these guns made from old parts and cobbled together. They’re functional. We didn’t really have that lock-making and barrel-making industry here. A lot of that stuff we had imported. So even if a gun was being made here, the firing mechanism and the barrels were imported. So we had to come up with another way to do it.

We started to receive a trickle of arms from the French in 1777, and to my mind, that’s what helped change the outcome of the war. Not only did we have French troops arriving, but we also had French cloth, shoes, hats, tin, powder, flints, and a ton of arms being shipped in. The French took all of their old guns from their last model that they had issued to the army, and they basically sold them all to us. So we had this huge influx of French arms that helped resupply us and made the war viable for us.

Close-up of a cannon firing. GBH/NOVA

Ars Technica: There are a lot of popular misconceptions about the history of the American Revolution. What are a couple of things that you wish more Americans understood about that conflict?

Joel Bohy: The onset of the American Revolution, April 1775, when the war began—these weren’t just a bunch of farmers who grabbed their rifle from over the fireplace and went out and beat the British Army. These people had been training and arming themselves for a long time. They had been doing it for generations before in wars with Native forces and the French since the 17th century. So by the time the Revolution broke out, they were as prepared as they could be for it.

“The rifle won the Revolution” is one of the things that I hear. No, it didn’t. Like I said, the French arms coming in helped us win the Revolution. A rifle is a tool, just like a smoothbore musket is. It has its benefits and it has its downfalls. It’s slower to load, you can’t mount a bayonet on it, but it’s more accurate, whereas the musket, you can load and fire faster, and you can mount a bayonet. So the gun that really won the Revolution was the musket, not the rifle.

It’s all well and good to be proud of being an American and our history and everything else, but these people just didn’t jump out of bed and fight. These people were training, they were drilling, they were preparing and arming and supplying not just arms, but food, cloth, tents, things that they would need to continue to have an army once the war broke out. It wasn’t just a big—poof—this happened and we won.

Revolutionary War Weapons is now streaming on YouTube and is also available on PBS.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



NOAA scientists scrub toilets, rethink experiments after service contracts end

“It’s making our work unsafe, and it’s unsanitary for any workplace,” but especially an active laboratory full of fire-reactive chemicals and bacteria, one Montlake researcher said.

Press officers at NOAA, the Commerce Department, and the White House did not respond to requests for comment.

Montlake employees were informed last week that a contract for safety services — which includes the staff who move laboratory waste off-campus to designated disposal sites — would lapse after April 9, leaving just one person responsible for this task. Hazardous waste “pickups from labs may be delayed,” employees were warned in a recent email.

The building maintenance team’s contract expired Wednesday, which decimated the staff that had handled plumbing, HVAC, and the elevators. Other contracts lapsed in late March, leaving the Seattle lab with zero janitorial staff and a skeleton crew of IT specialists.

During a big staff meeting at Montlake on Wednesday, lab leaders said they had no updates on when the contracts might be renewed, one researcher said. They also acknowledged it was unfair that everyone would need to pitch in on janitorial duties on top of their actual jobs.

Nick Tolimieri, a union representative for Montlake employees, said the problem is “all part of the large-scale bullying program” to push out federal workers. It seems like every Friday “we get some kind of message that makes you unable to sleep for the entire weekend,” he said. Now, with these lapsed contracts, it’s getting “more and more petty.”

The problems, large and small, at Montlake provide a case study of the chaos that’s engulfed federal workers across many agencies as the Trump administration has fired staff, dumped contracts, and eliminated long-time operational support. Yesterday, hundreds of NOAA workers who had been fired in February, then briefly reinstated, were fired again.



Google created a new AI model for talking to dolphins

Dolphins are generally regarded as some of the smartest creatures on the planet. Research has shown they can cooperate, teach each other new skills, and even recognize themselves in a mirror. For decades, scientists have attempted to make sense of the complex collection of whistles and clicks dolphins use to communicate. Researchers might make a little headway on that front soon with the help of Google’s open AI model and some Pixel phones.

Google has been finding ways to work generative AI into everything else it does, so why not its collaboration with the Wild Dolphin Project (WDP)? This group has been studying dolphins since 1985 using a non-invasive approach to track a specific community of Atlantic spotted dolphins. The WDP creates video and audio recordings of dolphins, along with correlating notes on their behaviors.

One of the WDP’s main goals is to analyze the way dolphins vocalize and how that can affect their social interactions. With decades of underwater recordings, researchers have managed to connect some basic activities to specific sounds. For example, Atlantic spotted dolphins have signature whistles that appear to be used like names, allowing two specific individuals to find each other. They also consistently produce “squawk” sound patterns during fights.

WDP researchers believe that understanding the structure and patterns of dolphin vocalizations is necessary to determine if their communication rises to the level of a language. “We do not know if animals have words,” says WDP’s Denise Herzing.

An overview of DolphinGemma

The ultimate goal is to speak dolphin, if indeed there is such a language. The pursuit of this goal has led WDP to create a massive, meticulously labeled data set, which Google says is perfect for analysis with generative AI.

Meet DolphinGemma

The large language models (LLMs) that have become unavoidable in consumer tech are essentially predicting patterns. You provide them with an input, and the models predict the next token over and over until they have an output. When a model has been trained effectively, that output can sound like it was created by a person. Google and WDP hope it’s possible to do something similar with DolphinGemma for marine mammals.
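The next-token loop described above can be sketched with a toy model. This is not DolphinGemma or any Google code; it is a minimal illustrative bigram predictor in Python, using made-up token names for dolphin sounds, that shows the basic idea of repeatedly predicting the most likely next token until an output sequence is built.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Count how often each token follows each other token
    follows = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, n_steps):
    # Repeatedly predict the most frequent next token, as an LLM
    # repeatedly predicts the next token from its learned patterns
    out = [start]
    for _ in range(n_steps):
        nxt_counts = follows.get(out[-1])
        if not nxt_counts:
            break
        out.append(nxt_counts.most_common(1)[0][0])
    return out

# Illustrative "vocalization tokens"; real systems tokenize raw audio
seq = ["click", "whistle", "click", "squawk", "click", "whistle"]
model = train_bigram(seq)
print(generate(model, "click", 4))  # ['click', 'whistle', 'click', 'whistle', 'click']
```

A real model replaces the frequency table with a neural network and the hand-picked tokens with learned audio tokens, but the generation loop has the same shape.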

DolphinGemma is based on Google’s Gemma open AI models, which are themselves built on the same foundation as the company’s commercial Gemini models. The dolphin communication model uses a Google-developed audio technology called SoundStream to tokenize dolphin vocalizations, allowing the sounds to be fed into the model as they’re recorded.



A guide to the “platonic ideal” of a Negroni and other handy tips


Perfumer by day, mixologist by night, Kevin Peterson specializes in crafting scent-paired cocktails.

Kevin Peterson is a “nose” for his own perfume company, Sfumato Fragrances, by day. By night, Sfumato’s retail store in Detroit transforms into Peterson’s craft cocktail bar, Castalia, where he is chief mixologist and designs drinks that pair with carefully selected aromas. He’s also the author of Cocktail Theory: A Sensory Approach to Transcendent Drinks, which grew out of his many (many!) mixology experiments and popular YouTube series, Objective Proof: The Science of Cocktails.

It’s fair to say that Peterson has had an unusual career trajectory. He worked as a line cook and an auto mechanic and did a stint on the production line of a butter factory, among other gigs, before attending culinary school in hopes of becoming a chef. However, he soon realized it wasn’t really what he wanted out of life and went to college, earning an undergraduate degree in physics from Carleton College and a PhD in mechanical engineering from the University of Michigan.

After 10 years as an engineer, he switched focus again and became more serious about his side hobby, perfumery. “Not being in kitchens anymore, I thought—this is a way to keep that little flavor part of my brain engaged,” Peterson told Ars. “I was doing problem sets all day. It was my escape to the sensory realm. ‘OK, my brain is melting—I need a completely different thing to do. Let me go smell smells, escape to my little scent desk.'” He and his wife, Jane Larson, founded Sfumato, which led to opening Castalia, and Peterson finally found his true calling.

Peterson spent years conducting mixology experiments to gather empirical data about the interplay between scent and flavor, correct ratios of ingredients, temperature, and dilution for all the classic cocktails—seeking a “Platonic ideal” for each, if you will. He supplemented this with customer feedback data from the drinks served at Castalia. All that culminated in Cocktail Theory, which delves into the chemistry of scent and taste, introducing readers to flavor profiles, textures, visual presentation, and other factors that contribute to one’s enjoyment (or lack thereof) of a cocktail. And yes, there are practical tips for building your own home bar, as well as recipes for many of Castalia’s signature drinks.

In essence, Peterson’s work adds scientific rigor to what is frequently called the “Mr. Potato Head” theory of cocktails, a phrase coined by the folks at Death & Company, who operate several craft cocktail bars in key cities. “Let’s say you’ve got some classic cocktail, a daiquiri, that has this many parts of rum, this many parts of lime, this many parts of sugar,” said Peterson, who admits to having a Mr. Potato Head doll sitting on Castalia’s back bar in honor of the sobriquet. “You can think about each ingredient in a more general way: instead of rum, this is the spirit; instead of lime, this is the citrus; sugars are sweetener. Now you can start to replace those things with other things in the same categories.”
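The "Mr. Potato Head" substitution idea can be expressed as a template of roles with swappable ingredients. The sketch below is purely illustrative; the ratios and ingredient names are common daiquiri conventions, not Peterson's or Death & Company's actual specs.

```python
# A classic recipe as a template of roles (parts by volume).
# Ratios here are a common daiquiri convention, used for illustration.
DAIQUIRI_TEMPLATE = {"spirit": 2.0, "citrus": 1.0, "sweetener": 0.75}

def build_cocktail(template, choices):
    """Fill each abstract role in the template with a concrete ingredient."""
    return {choices[role]: parts for role, parts in template.items()}

# The classic fill-in...
classic = build_cocktail(
    DAIQUIRI_TEMPLATE,
    {"spirit": "white rum", "citrus": "lime", "sweetener": "simple syrup"},
)

# ...and a variation made by swapping within the same categories.
variation = build_cocktail(
    DAIQUIRI_TEMPLATE,
    {"spirit": "mezcal", "citrus": "grapefruit", "sweetener": "honey syrup"},
)
```

The point of the abstraction is that the structure (the ratios and roles) stays fixed while the concrete ingredients vary, which is exactly the substitution game Peterson describes.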

We caught up with Peterson to learn more.

Ars Technica: How did you start thinking about the interplay between perfumery and cocktail design and the role that aroma plays in each?

Kevin Peterson: The first step was from food over to perfumery, where I think about building a flavor for a soup, for a sauce, for a curry, in a certain way. “Oh, there’s a gap here that needs to be filled in by some herbs, some spice.” It’s almost an intuitive kind of thing. When I was making scents, I had those same ideas: “OK, the shape of this isn’t quite right. I need this to roughen it up or to smooth out this edge.”

Then I did the same thing for cocktails and realized that those two worlds didn’t really talk to each other. You’ve got two groups of people that study all the sensory elements and how to create the most intriguing sensory impression, but they use different language; they use different toolkits. They’re going for almost the same thing, but there was very little overlap between the two. So I made that my niche: What can perfumery teach bartenders? What can the cocktail world teach perfumery?

Ars Technica: In perfumery you talk about a top, a middle, and a base note. There must be an equivalent in cocktail theory?

Kevin Peterson: In perfumery, that is mostly talking about the time element: top notes perceived first, then middle notes, then base notes as you wear it over the course of a few hours. In the cocktail realm, there is that time element as well. You get some impression when you bring the glass to your nose, something when you sip, something in the aftertaste. But there can also be a spatial element. Some things you feel right at the tip of your tongue, some things you feel in different parts of your face and head, whether that’s a literal impression or you just kind of feel it somewhere where there’s not a literal nerve ending. It’s about filling up that space, or not filling it up, depending on what impression you’re going for—building out the full sensory space.

Ars Technica: You also talk about motifs and supportive effects or ornamental flourishes: themes that you can build on in cocktails.

Kevin Peterson: Something I see in the cocktail world occasionally is that people just put a bunch of ingredients together and figure, “This tastes fine.” But what were you going for here? There are 17 things in here, and it just kind of tastes like you were finger painting: “Hey, I made brown.” Brown is nice. But the motifs that I think about—maybe there’s just one particular element that I want to highlight. Say I’ve got this really great jasmine essence. Everything else in the blend is just there to highlight the jasmine.

If you’re dealing with a really nice mezcal or bourbon or some unique herb or spice, that’s going to be the centerpiece. You’re not trying to get overpowered by some smoky scotch, by some other more intense ingredient. The motif could just be a harmonious combination of elements. I think the perfect old-fashioned is where everything is present and nothing’s dominating. It’s not like the bitters or the whiskey totally took over. There’s the bitters, there’s a little bit of sugar, there’s the spirit. Everything’s playing nicely.

Another motif, I call it a jazz note. A Sazerac is almost the same as an old-fashioned, but it’s got a little bit of absinthe in it. You get all the harmony of the old-fashioned, but then you’re like, “Wait, what’s this weird thing pulling me off to the side? Oh, this absinthe note is kind of separate from everything else that’s going on in the drink.” It’s almost like that tension in a musical composition: “Well, these notes sound nice, but then there’s one that’s just weird.” But that’s what makes it interesting, that weird note. For me, formalizing some of those motifs helps me make it clearer. Even if I don’t tell that to the guest during the composition stage, I know this is the effect I’m going for. It helps me build more intentionally when I’ve got a motif in mind.

Ars Technica: I tend to think about cocktails more in terms of chemistry, but there are many elements to taste and perception and flavor. You talk about ingredient matching, molecular matching, and impression matching, i.e., how certain elements will overlap in the brain. What role do each of those play?

Kevin Peterson: A lot of those ideas relate to how we pair scents with cocktails. At my perfume company, we make eight fragrances as our main line. Each scent then gets a paired drink on the cocktail menu. For example, this scent has coriander, cardamom, and nutmeg. What does it mean that the drink is paired with that? Does it need to literally have coriander, cardamom, and nutmeg in it? Does it need to have every ingredient? If the scent has 15 things, do I need to hit every note?


Peterson made over 100 daiquiris to find the “Platonic ideal” of the classic cocktail. Credit: Kevin Peterson

The literal matching is the most obvious. “This has cardamom, that has cardamom.” I can see how that pairs. The molecular matching is essentially just one more step removed: Rosemary has alpha-pinene in it, and juniper berries have alpha-pinene in them. So if the scent has rosemary and the cocktail has gin, they’re both sharing that same molecule, so it’s still exciting that same scent receptor. What I’m thinking about is kind of resonant effects. You’re approaching the same receptor or the same neural structure in two different ways, and you’re creating a bigger peak with that.
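Molecular matching as Peterson describes it amounts to checking which aroma molecules two ingredients share. A minimal sketch, assuming a hand-built lookup table: the alpha-pinene link between rosemary and juniper comes from the interview, but the other molecule entries are illustrative examples, not a real flavor-chemistry database.

```python
# Illustrative aroma-molecule table; only the rosemary/juniper
# alpha-pinene overlap is taken from the interview itself.
AROMA = {
    "rosemary": {"alpha-pinene", "camphor", "1,8-cineole"},
    "juniper":  {"alpha-pinene", "myrcene", "limonene"},
    "cardamom": {"1,8-cineole", "terpinyl acetate"},
}

def shared_molecules(a, b):
    """Molecules two ingredients have in common (set intersection)."""
    return AROMA[a] & AROMA[b]

print(shared_molecules("rosemary", "juniper"))  # {'alpha-pinene'}
```

A nonempty intersection is the "resonant effect" Peterson mentions: both ingredients excite the same scent receptor by two different routes.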

The most hand-wavy one to me is the impression matching. Rosemary smells cold, and Fernet-Branca tastes cold even when it’s room temperature. If the scent has rosemary, is Fernet now a good match for that? Some of the neuroscience stuff that I’ve read has indicated that these more abstract ideas are represented by the same sort of neural-firing patterns. Initially, I was hesitant; cold and cold, it doesn’t feel as fulfilling to me. But then I did some more reading and realized there’s some science behind it and have been more intrigued by that lately.

Ars Technica: You do come up with some surprising flavor combinations, like a drink that combined blueberry and horseradish, which frankly sounds horrifying. 

Kevin Peterson: It was a hit on the menu. I would often give people a little taste of the blueberry and then a little taste of the horseradish tincture, and they’d say, “Yeah, I don’t like this.” And then I’d serve them the cocktail, and they’d be like, “Oh my gosh, it actually worked. I can’t believe it.”  Part of the beauty is you take a bunch of things that are at least not good and maybe downright terrible on their own, and then you stir them all together and somehow it’s lovely. That’s basically alchemy right there.

Ars Technica: Harmony between scent and the cocktail is one thing, but you also talk about constructive interference to get a surprising, unexpected, and yet still pleasurable result.

Kevin Peterson: The opposite is destructive interference, where there’s just too much going on. When I’m coming up with a drink, sometimes that’ll happen, where I’m adding more, but the flavor impression is going down. It’s sort of a weird non-linearity of flavor, where sometimes two plus two equals four, sometimes it equals three, sometimes it equals 17. I now have intuition about that, having been in this world for a lot of years, but I still get surprised sometimes when I put a couple things together.

Often with my end-of-the-shift drink, I’ll think, “Oh, we got this new bottle in. I’m going to try that in a Negroni variation.” Then I lose track and finish mopping, and then I sip, and I’m like, “What? Oh my gosh, I did not see this coming at all.” That little spark, or whatever combo creates that, will then often be the first step on some new cocktail development journey.


Pairing scents with cocktails involves experimenting with many different ingredients Credit: EE Berger

Ars Technica: Smoked cocktails are a huge trend right now. What’s the best way to get a consistently good smoky element?

Kevin Peterson: Smoke is tricky to make repeatable. How many parts per million of smoke are you getting in the cocktail? You could standardize the amount of time that it’s in the box [filled with smoke]. Or you could always burn, say, exactly three grams of hickory or whatever. One thing that I found, because I was writing the book while still running the bar: People have a lot of expectations around how the drink is going to be served. Big ice cubes are not ideal for serving drinks, but people want a big ice cube in their old-fashioned. So we’re still using big ice cubes. There might be a Platonic ideal in terms of temperature, dilution, etc., but maybe it’s not the ideal in terms of visuals or tactile feel, and that is a part of the experience.

With the smoker, you open the doors, smoke billows out, your drink emerges from the smoke, and people say, “Wow, this is great.” So whether you get 100 PPM one time and 220 PPM the next, maybe that gets outweighed by the awesomeness of the presentation. If I’m trying to be very dialed in about it, I’ll either use a commercial smoky spirit—Laphroaig scotch, a smoky mezcal—where I decide that a quarter ounce is the amount of smokiness that I want in the drink. I can just pour the smoke instead of having to burn and time it.

Or I might even make my own smoke: light something on fire and then hold it under a bottle, tip it back up, put some vodka or something in there, shake it up. Now I’ve got smoke particles in my vodka. Maybe I can say, “OK, it’s always going to be one milliliter,” but then you miss out on the presentation—the showmanship, the human interaction, the garnish. I rarely garnish my own drinks, but I rarely send a drink out to a guest ungarnished, even if it’s just a simple orange peel.

Ars Technica: There’s always going to be an element of subjectivity, particularly when it comes to our sensory perceptions. Sometimes you run into a person who just can’t appreciate a certain note.

Kevin Peterson: That was something I grappled with. On the one hand, we’re all kind of living in our own flavor world. Some people are more sensitive to bitter. Different scent receptors are present in different people. It’s tempting to just say, “Well, everything’s so unique. Maybe we just can’t say anything about it at all.” But that’s not helpful either. Somehow, we keep having delicious food and drink and scents that come our way.

A sample page from Cocktail Theory discussing temperature and dilution. Credit: EE Berger

I’ve been taking a lot of survey data in my bar more recently, and the individuality of preference has definitely shown through in the surveys. But another thing that has shown through is that there are some universal trends. There are certain categories: the spirit-forward, bittersweet drinkers; the bubbly citrus folks; and the texture folks who like vodka soda. In terms of taste and aroma, it’s very minimal, but it’s a very intense texture. Having some awareness of that is critical when you’re making drinks.

One of the things I was going for in my book was to find, for example, the platonically ideal gin and tonic. What are the ratios? What is the temperature? How much dilution to how much spirit is the perfect amount? But if you don’t like gin and tonics, it doesn’t matter if it’s a platonically ideal gin and tonic. So that’s my next project. It’s not just getting the drink right. How do you match that to the right person? What questions do I have to ask you, or do I have to give you taste tests? How do I draw that information out of the customer to determine the perfect drink for them?

We offer a tasting menu, so our full menu is eight drinks, and you get a mini version of each drink. I started giving people surveys when they would do the tasting menu, asking, “Which drink do you think you’ll like the most? Which drink do you think you’ll like the least?” Then I would have them rate each drink. Less than half of people correctly predicted their most liked and least liked, meaning if you were just going to order one drink off the menu, your odds of picking the right drink are worse than a coin flip.

Ars Technica: How does all this tie into your “cocktails as storytelling” philosophy?

Kevin Peterson: So much of flavor impression is non-verbal. Scent is very hard to describe. You can maybe describe taste, but we only have five-ish words, things like bitter, sour, salty, sweet. There’s not a whole lot to say about that: “Oh, it was perfectly balanced.” So at my bar, when we design menus, we’ll put the drinks together, but then we’ll always give the menu a theme. The last menu that we did was the scientist menu, where every drink was made in honor of some scientist who didn’t get the credit they were due in the time they were alive.

Having that narrative element, I think, helps people remember the drink better. It helps them in the moment to latch onto something that they can more firmly think about. There’s a conceptual element. If I’m just doing chores around the house, I drink a beer, it doesn’t need to have a conceptual element. If I’m going out and spending money and it’s my night and I want this to be a more elevated experience, having that conceptual tie-in is an important part of that.

My personal favorite drink, Corpse Reviver No. 2, has just a hint of absinthe. Credit: Sean Carroll

Ars Technica: Do you have any simple tips for people who are interested in taking their cocktail game to the next level?

Kevin Peterson: Old-fashioneds are the most fragile cocktail. You have to get all the ratios exactly right. Everything has to be perfect for an old-fashioned to work. Anecdotally, I’ve gotten a lot of old-fashioneds that were terrible out on the town. In contrast, the Negroni is the most robust drink. You can miss the ratios. It’s got a very wide temperature and dilution window where it’s still totally fine. I kind of thought of them in the same way prior to doing the test. Then I found that this band of acceptability is much bigger for the Negroni. So now I think of old-fashioneds as something I make myself, or order only when I trust the bartender or I’m testing someone who wants to come work for me.

My other general piece of advice: It can be a very daunting world to try to get into. You may say, “Oh, there’s all these classics that I’m going to have to memorize, and I’ve got to buy all these weird bottles.” My advice is to pick a drink you like and take baby steps away from that drink. Say you like Negronis. That’s three bottles: vermouth, Campari, and gin. Start with that. When you finish that bottle of gin, buy a different type of gin. When you finish the Campari, try a different bittersweet liqueur. See if that’s going to work. You don’t have to drop hundreds of dollars, thousands of dollars, to build out a back bar. You can do it with baby steps.

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.
