astronomy

Simulations find ghostly whirls of dark matter trailing galaxy arms

“Basically what you do is you set up a bunch of particles that represent things like stars, gas, and dark matter, and you let them evolve for millions of years,” Bernet says. “Human lives are much too short to witness this happening in real time. We need simulations to help us see more than the present, which is like a single snapshot of the Universe.”

Several other groups already had galaxy simulations they were using for other science, so the team asked one of them to share its data. When they found the dark matter imprint they were looking for, they checked for it in another group’s simulation. They found it again, and then in a third simulation as well.

The dark matter spirals are much less pronounced than their stellar counterparts, but the team noted a distinct imprint on the motions of dark matter particles in the simulations. The dark spiral arms lag behind the stellar arms, forming a sort of unseen shadow.

These findings add a new layer of complexity to our understanding of how galaxies evolve, suggesting that dark matter is more than a passive, invisible scaffolding holding galaxies together. Instead, it appears to react to the gravity from stars in galaxies’ spiral arms in a way that may even influence star formation or galactic rotation over cosmic timescales. It could also explain the recently discovered excess of mass along a nearby spiral arm in the Milky Way.

The fact that they saw the same effect in differently structured simulations suggests that these dark matter spirals may be common in galaxies like the Milky Way. But tracking them down in the real Universe may be tricky.

Bernet says scientists could measure dark matter in the Milky Way’s disk. “We can currently measure the density of dark matter close to us with a huge precision,” he says. “If we can extend these measurements to the entire disk with enough precision, spiral patterns should emerge if they exist.”

“I think these results are very important because it changes our expectations for where to search for dark matter signals in galaxies,” Brooks says. “I could imagine that this result might influence our expectation for how dense dark matter is near the solar neighborhood and could influence expectations for lab experiments that are trying to directly detect dark matter.” That’s a goal scientists have been chasing for nearly 100 years.

Ashley writes about space for a contractor for NASA’s Goddard Space Flight Center by day and freelances in her free time. She holds master’s degrees in space studies from the University of North Dakota and science writing from Johns Hopkins University. She writes most of her articles with a baby on her lap.

Milky Way galaxy might not collide with Andromeda after all

100,000 computer simulations reveal Milky Way’s fate—and it might not be what we thought.

It’s been textbook knowledge for over a century that our Milky Way galaxy is doomed to collide with another large spiral galaxy, Andromeda, in the next 5 billion years and merge into one even bigger galaxy. But a fresh analysis published in the journal Nature Astronomy is casting that longstanding narrative in a more uncertain light. The authors conclude that the likelihood of this collision and merger is closer to the odds of a coin flip, with a roughly 50 percent probability that the two galaxies will avoid such an event during the next 10 billion years.

Both the Milky Way and the Andromeda galaxies (M31) are part of what’s known as the Local Group (LG), which also hosts other smaller galaxies (some not yet discovered) as well as dark matter (per the prevailing standard cosmological model). Both already have remnants of past mergers and interactions with other galaxies, according to the authors.

“Predicting future mergers requires knowledge about the present coordinates, velocities, and masses of the systems partaking in the interaction,” the authors wrote. That involves not just the gravitational force between them but also dynamical friction. It’s the latter that dominates when galaxies are headed toward a merger, since it causes galactic orbits to decay.

This latest analysis is the result of combining data from the Hubble Space Telescope and the European Space Agency’s (ESA) Gaia space telescope to perform 100,000 Monte Carlo computer simulations, taking into account not just the Milky Way and Andromeda but the full LG system. Those simulations yielded a very different prediction: There is approximately a 50/50 chance of the galaxies colliding within the next 10 billion years. There is still a 2 percent chance that they will collide in the next 4 to 5 billion years. “Based on the best available data, the fate of our galaxy is still completely open,” the authors concluded.
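
The Monte Carlo logic itself is straightforward to sketch: draw the uncertain inputs from their measurement distributions many times, run the model for each draw, and count outcomes. The Python below is a toy illustration only; the distributions, numbers, and merger criterion are invented placeholders, while the real analysis integrates Local Group orbits with dynamical friction:

```python
# Toy Monte Carlo over uncertain inputs (all values are illustrative).
import random

N_SAMPLES = 100_000
mergers = 0
for _ in range(N_SAMPLES):
    # Hypothetical Gaussian uncertainties on the key measured quantities.
    m31_mass = random.gauss(1.5e12, 0.3e12)    # solar masses
    radial_v = random.gauss(-110.0, 5.0)       # km/s (negative: approaching)
    tangential_v = random.gauss(50.0, 30.0)    # km/s (poorly constrained)
    # Stand-in for the real orbit integration: a low transverse velocity
    # relative to the infall speed (scaled by mass) favors a merger.
    if abs(tangential_v) < 0.5 * abs(radial_v) * (m31_mass / 1.5e12):
        mergers += 1

print(f"Merger fraction: {mergers / N_SAMPLES:.1%}")
```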

Testing a robot that could drill into Europa and Enceladus


We don’t currently have a mission to put it on, but NASA is making sure it’s ready.

Geysers on Saturn’s moon Enceladus. Credit: NASA

Europa and Enceladus are two ocean moons that scientists have concluded harbor liquid water oceans underneath their outer icy shells. The Europa Clipper mission should reach Europa around April of 2030. If it collects data hinting at the moon’s potential habitability, robotic lander missions could be the only way to confirm whether there’s really life there.

To make these lander missions happen, NASA’s Jet Propulsion Laboratory team has been working on a robot that could handle the search for life and already tested it on the Matanuska Glacier in Alaska. “At this point this is a pretty mature concept,” says Kevin Hand, a planetary scientist at JPL who led this effort.

Into the unknown

There are only a few things we know for sure about conditions on the surface of Europa, and nearly all of them bode ill for lander missions. First, Europa is exposed to very harsh radiation, which is a problem for electronics. Second, because of the Europa-Jupiter orbit, the window of visibility—when a potential robotic lander could contact Earth—lasts less than half of the 85 hours it takes for the moon to complete its day-night cycle. So, for more than half the mission, the robot would need to fend for itself, with no human ground teams to get it out of trouble. The lander would also need to run on non-rechargeable batteries, because the vast distance to the Sun would make solar panels prohibitively massive.

And that’s just the beginning. Unlike on Mars, we don’t have any permanent orbiters around Europa that could provide a communication infrastructure, and we don’t have high-resolution imagery of the surface, which would make the landing particularly tricky. “We don’t know what Europa’s surface looks like at the centimeter to meter scale. Even with the Europa Clipper imagery, the highest resolution will be about half a meter per pixel across a few select regions,” Hand explains.

Because Europa has an extremely thin atmosphere that doesn’t provide any insulation, temperatures atop its ice shell are estimated to vary between minus 160° Celsius at the daytime maximum and minus 220° Celsius at night, which means the ice the lander would be there to sample is most likely as hard as concrete. In building their robot, Hand’s team had to figure out a design that could deal with all of these issues.

The work on the robotic system for the Europa lander mission began more than 10 years ago. Back then, the 2013–2022 decadal strategy for planetary science cited the Europa Clipper as the second-highest priority large-scale planetary mission, so a lander seemed like a natural follow-up.

Autonomy and ice drilling

The robot developed by Hand’s team has legs that enable it to stabilize itself on various types of surfaces, from rock-hard ice to loose, soft snow. To orient itself in the environment, it uses a stereoscopic camera with an LED light source for illumination hooked to computer-vision algorithms—a system similar to the one currently used by the Perseverance rover on Mars. “Stereoscopic cameras can triangulate points in an image and build a digital surface topography model,” explains Joseph Bowkett, a JPL researcher and engineer who worked on the robot’s design.

The team built an entirely new robotic arm with seven degrees of freedom. Force torque sensors installed in most of its joints act a bit like a nervous system, informing the robot when key components sustain excessive loads to prevent it from damaging the arm or the drill. “As we press down on the surface [and] conduct drilling and sampling, we can measure the forces and react accordingly,” Bowkett says. The finishing touch was the ICEPICK, a drilling and sampling tool the robot uses to excavate samples from the ice up to 20 centimeters deep.

Because of long periods the lander would need operate without any human supervision, the team also gave it a wide range of autonomous systems, which operate at two different levels. High-level autonomy is responsible for scheduling and prioritizing tasks within a limited energy budget. The robot can drill into a sampling site, analyze samples with onboard instruments, and decide whether it makes sense to keep drilling at the same spot or choose a different sampling site. The high-level system is also tasked with choosing the most important results for downlink back to Earth.

Low-level autonomy breaks all these high-level tasks down into step-by-step decisions on how to operate the drill and how to move the arm in the safest and most energy-efficient way.
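
As a concrete illustration of the low-level layer, here is a hypothetical force-guarded drilling step in Python. The names, thresholds, and interface are invented for illustration (JPL’s actual flight software is not public), but it captures the idea described above: force-torque readings decide whether to keep going, stop, or back off.

```python
# Hypothetical sketch of one force-guarded drilling decision.
from dataclasses import dataclass

@dataclass
class DrillStatus:
    depth_cm: float        # current bit depth
    axial_force_n: float   # reading from a joint force-torque sensor

def drill_step(status: DrillStatus, target_cm: float,
               max_force_n: float = 400.0) -> str:
    """Choose the next low-level action from a single sensor reading."""
    if status.axial_force_n > max_force_n:
        return "retract"   # protect the arm and drill from overload
    if status.depth_cm >= target_cm:
        return "stop"      # target sample depth reached (up to ~20 cm)
    return "advance"       # conditions nominal; keep drilling

# Example: concrete-hard ice pushes back above the safety threshold.
print(drill_step(DrillStatus(depth_cm=8.0, axial_force_n=450.0), target_cm=20.0))
```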

The robot was tested in simulation software first, then indoors at JPL’s facilities, and finally at the Matanuska Glacier in Alaska, where it was lowered from a helicopter that acted as a proxy for a landing vehicle. It was tested at three different sites, ranked from easiest to most challenging, and it completed all the baseline activities as well as all of the extras. The latter included drilling 27 centimeters deep into ice at the most difficult site, where the robot was awkwardly positioned on an 8- to 12-degree slope. It passed every test with flying colors.

And then it got shelved.

Switching ocean worlds

Hand’s team put their Europa landing robot through the Alaskan field test campaign between July and August 2022. But when the new decadal strategy for planetary science came out in 2023, it turned out that the Europa lander was not among the missions selected. The National Academies committee responsible for formulating these decadal strategies did not recommend giving it a go, mainly because they believed harsh radiation in the Jovian system would make detecting biosignatures “challenging” for a lander.

An Enceladus lander, on the other hand, remained firmly on the table. “I was also on the team developing EELS, a robot intended for a potential Enceladus mission, so thankfully I can speak about both. The radiation challenges are indeed far greater for Europa,” Bowkett says.

Another argument for changing our go-to ocean world is that water plumes containing salts along with carbon- and nitrogen-bearing molecules have already been observed on Enceladus, which means there is a slight chance biosignatures could be detected by a flyby mission. The surface of Enceladus, according to the decadal strategy document, should be capable of preserving biogenic evidence for a long time and seems more conducive to a lander mission. “Luckily, many of the lessons on how to conduct autonomous sampling on Europa, we believe, will transfer to Enceladus, with the benefit of a less damaging radiation environment,” Bowkett told Ars.

The dream of a Europa landing is not completely dead, though. “I would love to get into the Europa’s ocean with a submersible and further down to the seafloor. I would love for that to happen,” Hand says. “But technologically it’s quite a big leap, and you always have to balance your dream missions with the number of technological miracles that need to be solved to make these missions possible.”

Science Robotics, 2025. DOI: 10.1126/scirobotics.adi5582

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

A star has been destroyed by a wandering supermassive black hole

But note the phrasing there: “in most cases” and “eventually.” Even in the cases where a merger takes place, the process is slow, potentially taking millions or even billions of years. As a result, a large galaxy might have as many as 100 extremely large black holes wandering about, with about 10 of them having masses of over 10⁶ times that of the Sun. And the galaxy that AT2024tvd resides in is very large.

One consequence of all these black holes wandering about is that not all of them will end up merging. If two of them approach the central black hole at the same time, then it’s possible for gravitational interactions to eject the smallest of them at nearly the velocity needed to escape the galaxy entirely. As a result, for millions of years afterwards, these supermassive black holes may be found at quite a distance from the galaxy’s core.

At the moment, it’s not possible to tell which of these explanations accounts for AT2024tvd’s location. The galaxy it’s in doesn’t seem to have undergone a recent merger, but this black hole could be a straggler from a far earlier one.

It’s notable that all of the galaxies where we’ve seen an off-center tidal disruption event are very large. The paper that describes AT2024tvd suggests this is no accident: larger galaxies mean more mergers in the past, and thus more supermassive black holes floating around the interior. They also suggest that off-center events will be the only ones we see in large galaxies. That’s because larger galaxies will have larger supermassive black holes at their center. And, once a supermassive black hole gets big enough, its event horizon is so far out that stars can pass through it before they get disrupted, and all the energetic release would take place inside the black hole.

Presumably, if you were close enough to see this happen, the star would just fade out of existence.
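
The scaling behind that claim can be sketched with textbook formulas (order-of-magnitude estimates for a Sun-like star, not numbers from the paper). A star of mass M⋆ and radius R⋆ is shredded near the tidal radius, while the event horizon sits at the Schwarzschild radius; because the horizon grows linearly with black hole mass and the tidal radius only as its cube root, the two cross at roughly 100 million solar masses:

```latex
r_{t} \approx R_{\star}\left(\frac{M_{\mathrm{BH}}}{M_{\star}}\right)^{1/3},
\qquad
r_{s} = \frac{2\,G\,M_{\mathrm{BH}}}{c^{2}},
\qquad
r_{t} \lesssim r_{s}\ \ \text{once}\ \ M_{\mathrm{BH}} \gtrsim 10^{8}\,M_{\odot}
```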

arXiv: 2502.17661. To be published in The Astrophysical Journal Letters.

Looking at the Universe’s dark ages from the far side of the Moon


meet you on the dark side of the moon

Building an observatory on the Moon would be a huge challenge—but it would be worth it.

A composition of the moon with the cosmos radiating behind it

Credit: Aurich Lawson | Getty Images


There is a signal, born in the earliest days of the cosmos. It’s weak. It’s faint. It can barely register on even the most sensitive of instruments. But it contains a wealth of information about the formation of the first stars, the first galaxies, and the mysteries of the origins of the largest structures in the Universe.

Despite decades of searching for this signal, astronomers have yet to find it. The problem is that our Earth is too noisy, making it nearly impossible to capture this whisper. The solution is to go to the far side of the Moon, using its bulk to shield our sensitive instruments from the cacophony of our planet.

Building telescopes on the far side of the Moon would be the greatest astronomical challenge ever considered by humanity. And it would be worth it.

The science

We have been scanning and mapping the wider cosmos for a century now, ever since Edwin Hubble discovered that the Andromeda “nebula” is actually a galaxy sitting 2.5 million light-years away. Our powerful Earth-based observatories have successfully mapped the detailed locations of millions of galaxies, and upcoming observatories like the Vera C. Rubin Observatory and Nancy Grace Roman Space Telescope will map millions more.

And for all that effort, all that technological might and scientific progress, we have surveyed less than 1 percent of the volume of the observable cosmos.

The vast bulk of the Universe will remain forever unobservable to traditional telescopes. The reason is twofold. First, most galaxies will simply be too dim and too far away. Even the James Webb Space Telescope, which is explicitly designed to observe the first generation of galaxies, has such a limited field of view that it can only capture a handful of targets at a time.

Second, there was a time, within the first few hundred million years after the Big Bang, before stars and galaxies had even formed. Dubbed the “cosmic dark ages,” this time naturally makes for a challenging astronomical target because there weren’t exactly a lot of bright sources to generate light for us to look at.

But there was neutral hydrogen. Most of the Universe is made of hydrogen, making it the most common element in the cosmos. Today, almost all of that hydrogen is ionized, existing in a super-heated plasma state. But before the first stars and galaxies appeared, the cosmic reserves of hydrogen were cool and neutral.

Neutral hydrogen is made of a single proton and a single electron. Each of these particles has a quantum property known as spin (which kind of resembles the familiar, macroscopic property of spin, but it’s not quite the same—though that’s a different article). In its lowest-energy state, the proton and electron will have spins oriented in opposite directions. But sometimes, through pure random quantum chance, the electron will spontaneously flip around. Very quickly, the hydrogen notices and gets the electron to flip back to where it belongs. This process releases a small amount of energy in the form of a photon with a wavelength of 21 centimeters.
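
As a quick back-of-the-envelope conversion, that wavelength corresponds to a frequency of about 1.4 GHz:

```latex
\nu = \frac{c}{\lambda} = \frac{3.00\times10^{8}\ \mathrm{m\,s^{-1}}}{0.21\ \mathrm{m}} \approx 1.42\ \mathrm{GHz}
```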

This quantum transition is exceedingly rare, but with enough neutral hydrogen, you can build a substantial signal. Indeed, observations of 21-cm radiation have been used extensively in astronomy, especially to build maps of cold gas reservoirs within the Milky Way.

So the cosmic dark ages aren’t entirely dark; those clouds of primordial neutral hydrogen are emitting tremendous amounts of 21-cm radiation. But that radiation was emitted in the distant past, well over 13 billion years ago. As it has traveled through the cosmic distances, all those billions of light-years on its way to our eager telescopes, it has experienced the redshift effects of our expanding Universe.

By the time that dark age 21-cm radiation reaches us, it has been stretched by a factor of 10, turning the neutral hydrogen signal into radio waves with wavelengths of around 2 meters.
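
The arithmetic behind that stretch is the standard redshift relation; a factor-of-10 stretch corresponds to a redshift of roughly z ≈ 9:

```latex
\lambda_{\mathrm{obs}} = (1+z)\,\lambda_{\mathrm{rest}} \approx 10 \times 21\ \mathrm{cm} \approx 2.1\ \mathrm{m}
```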

The astronomy

Humans have become rather fond of radio transmissions in the past century. Unfortunately, the peak of this primordial signal from the dark ages sits right below the FM dial of your radio, which pretty much makes it impossible to detect from Earth. Our emissions are simply too loud, too noisy, and too difficult to remove. Teams of astronomers have devised clever ways to reduce or eliminate interference, featuring arrays scattered around the most desolate deserts in the world, but they have not been able to confirm the detection of a signal.

So those astronomers have turned in desperation to the quietest desert they can think of: the far side of the Moon.

It wasn’t until 1959 that the Soviet Luna 3 probe gave us our first glimpse of the Moon’s far side, and it wasn’t until 2019 that the Chang’e 4 mission made the first soft landing there. Compared to the near side, and especially to low-Earth orbit, there is very little human activity on the far side. We’ve had more active missions on the surface of Mars than on the lunar far side.

Chang’e-4 landing zone on the far side of the moon. Credit: Xiao Xiao and others (CC BY 4.0)

And that makes the far side of the Moon the ideal location for a dark-age-hunting radio telescope, free from human interference and noise.

Ideas abound to make this a possibility. The first serious attempt was DARE, the Dark Ages Radio Explorer. Rather than attempting the audacious goal of building an actual telescope on the surface, DARE was a NASA-funded concept to develop an observatory (and when it comes to radio astronomy, an “observatory” can be as simple as a single antenna) to orbit the Moon and take data when it’s on the opposite side from the Earth.

For various bureaucratic reasons, NASA didn’t develop the DARE concept further. But creative astronomers have put forward even bolder proposals.

The FarView concept, for example, is a proposed radio telescope array that would dwarf anything on Earth. It would be sensitive to frequencies between 5 and 40 MHz, allowing it to target the dark ages and the birth of the first stars. The proposed design contains 100,000 individual elements, with each element consisting of a single, simple dipole antenna, dispersed over a staggering 200 square kilometers. It would be infeasible to deliver that many antennae directly to the surface of the Moon. Instead, we’d have to build them on-site, mining lunar regolith and turning it into the necessary components.

The design of this array is what’s called an interferometer. Instead of a single big dish, the individual antennae collect data on their own and then correlate all their signals together later. The effective resolution of an interferometer is the same as a single dish as big as the widest distance among the elements. The downside of an interferometer is that most of the incoming radiation just hits dirt (or in this case, lunar regolith), so the interferometer has to collect a lot of data to build up a decent signal.
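
For a rough sense of scale (illustrative numbers, not FarView’s official specifications): an interferometer’s angular resolution is about the observing wavelength divided by its longest baseline. At 20 MHz the wavelength is 15 meters, and an array spread over 200 square kilometers spans on the order of 14 kilometers, giving:

```latex
\theta \approx \frac{\lambda}{B} \approx \frac{15\ \mathrm{m}}{1.4\times10^{4}\ \mathrm{m}} \approx 1.1\times10^{-3}\ \mathrm{rad} \approx 3.7\ \mathrm{arcmin}
```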

On Earth, these kinds of observations require constant maintenance and data cleaning to remove radio interference, which has essentially sunk all attempts to measure the dark ages. But a lunar-based interferometer will have all the time in the world it needs, providing a much cleaner and easier-to-analyze stream of data.

If you’re not in the mood for building 100,000 antennae on the Moon’s surface, then another proposal seeks to use the Moon’s natural features—namely, its craters. If you squint hard enough, they kind of look like radio dishes already. The idea behind the project, named the Lunar Crater Radio Telescope, is to find a suitable crater and use it as the support structure for a gigantic, kilometer-wide telescope.

This idea isn’t without precedent. Both the beloved Arecibo and the newcomer FAST observatories used depressions in the natural landscape of Puerto Rico and China, respectively, to take most of the load off the engineering of their giant dishes. The Lunar Crater Radio Telescope would be larger than both of those combined, and it would be tuned to hunt for dark ages radio signals that we can’t observe using Earth-based observatories because they simply bounce off the Earth’s ionosphere (even before we have to worry about any additional human interference). Essentially, the only way for humanity to access those wavelengths is by going beyond our ionosphere, and the far side of the Moon is the best place to park an observatory.

The engineering

The engineering challenges we need to overcome to achieve these scientific dreams are not small. So far, humanity has placed only a single soft-landed mission on the far side of the Moon, and both of these proposals require an immense upgrade to our capabilities. That’s exactly why both far-side concepts were funded by NIAC, NASA’s Innovative Advanced Concepts program, which gives grants to researchers who need time to flesh out high-risk, high-reward ideas.

With NIAC funds, the designers of the Lunar Crater Radio Telescope, led by Saptarshi Bandyopadhyay at the Jet Propulsion Laboratory, have already thought through the challenges they will need to overcome to make the mission a success. Their mission leans heavily on another JPL concept, the DuAxel, which consists of a rover that can split into two single-axle rovers connected by a tether.

To build the telescope, several DuAxels are sent to the crater. One of each pair “sits” to anchor itself on the crater wall while the other crawls down the slope. At the center, the crawlers are met by a telescope lander that has deployed guide wires and the wire mesh frame of the telescope (again, it helps for assembly purposes that radio dishes are just strings of metal in various arrangements). The pairs on the crater rim then hoist their companions back up, unfolding the mesh and lofting the receiver above the dish.

The FarView observatory is a much more capable instrument—if deployed, it would be the largest radio interferometer ever built—but it’s also much more challenging. Led by Ronald Polidan of Lunar Resources, Inc., it relies on in-situ manufacturing processes. Autonomous vehicles would dig up regolith, process and refine it, and spit out all the components that make an interferometer work: the 100,000 individual antennae, the kilometers of cabling to run among them, the solar arrays to power everything during lunar daylight, and batteries to store energy for round-the-lunar-clock observing.

If that sounds intense, it’s because it is, and it doesn’t stop there. An astronomical telescope is more than a data collection device. It also needs to crunch some numbers and get that precious information back to a human to actually study it. That means that any kind of far side observing platform, especially the kinds that will ingest truly massive amounts of data such as these proposals, would need to make one of two choices.

Choice one is to perform most of the data correlation and processing on the lunar surface, sending back only highly refined products to Earth for further analysis. Achieving that would require landing, installing, and running what is essentially a supercomputer on the Moon, which comes with its own weight, robustness, and power requirements.

The other choice is to keep the installation as lightweight as possible and send the raw data back to Earthbound machines to handle the bulk of the processing and analysis tasks. This kind of data throughput is outright impossible with current technology but could be achieved with experimental laser-based communication strategies.

The future

Astronomical observatories on the far side of the Moon face a bit of a catch-22. To deploy and run a world-class facility, either embedded in a crater or strung out over the landscape, we need some serious lunar manufacturing capabilities. But those same capabilities come with all the annoying radio fuzz that already bedevils Earth-based radio astronomy.

Perhaps the best solution is to open up the Moon to commercial exploitation but maintain the far side as a sort of out-world nature preserve, owned by no company or nation, left to scientists to study and use as a platform for pristine observations of all kinds.

It will take humanity several generations, if not more, to develop the capabilities needed to finally build far-side observatories. But it will be worth it, as those facilities will open up the unseen Universe for our hungry eyes, allowing us to pierce the ancient fog of our Universe’s past, revealing the machinations of hydrogen in the dark ages, the birth of the first stars, and the emergence of the first galaxies. It will be a fountain of cosmological and astrophysical data, the richest possible source of information about the history of the Universe.

Ever since Galileo ground and polished his first lenses, and on through the innovations that led to the explosion of digital cameras, astronomy has had a storied tradition of turning the technological triumphs needed to achieve its science goals into the foundations of everyday devices that make life on Earth much better. If we’re looking for reasons to industrialize and inhabit the Moon, the noble goal of pursuing a better understanding of the Universe makes for a fine motivation. And we’ll all be better off for it.

3D map of exoplanet atmosphere shows wacky climate

Last year, astronomers discovered an unusual Earth-size exoplanet they believe is tidally locked, with one hemisphere of molten lava and the other in perpetual darkness. And at about the same time, a different group discovered a rare small, cold exoplanet with a massive outer companion 100 times the mass of Jupiter.

Meet Tylos

The different layers of the atmosphere on WASP-121b.

This latest research relied on observational data collected by the European Southern Observatory’s (ESO) Very Large Telescope, specifically, a spectroscopic instrument called ESPRESSO that can combine light collected from the four largest VLT telescope units into one signal. The target exoplanet, WASP-121b—aka Tylos—is located in the Puppis constellation about 900 light-years from Earth. One year on Tylos is equivalent to just 30 hours on Earth, thanks to the exoplanet’s close proximity to its host star. Since one side is always facing the star, it is always scorching, while the exoplanet’s other side is significantly colder.

Those extreme temperature contrasts make it challenging to figure out how energy is distributed in the atmospheric system, and mapping out the 3D structure can help, particularly with determining the vertical circulation patterns that are not easily replicated in our current crop of global circulation models, per the authors. For their analysis, they combined archival ESPRESSO data collected on November 30, 2018, with new data collected on September 23, 2023. They focused on three distinct chemical signatures to probe the deep atmosphere (iron), mid-atmosphere (sodium), and shallow atmosphere (hydrogen).
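
Those wind measurements ultimately come down to Doppler shifts: each tracer’s spectral line is displaced from its rest wavelength by the line-of-sight velocity of the gas. A minimal illustration of the conversion in Python (the line and shift below are invented placeholders, not values from the study):

```python
# Radial velocity from the Doppler shift of a spectral line:
# v = c * (lambda_observed - lambda_rest) / lambda_rest
C_KM_S = 299_792.458  # speed of light, km/s

def radial_velocity(lambda_obs_nm: float, lambda_rest_nm: float) -> float:
    """Line-of-sight velocity in km/s; negative means the gas moves toward us."""
    return C_KM_S * (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

# Hypothetical example: a sodium line at a rest wavelength of 589.00 nm,
# observed blueshifted by 0.01 nm by winds flowing toward the observer.
print(f"{radial_velocity(588.99, 589.00):+.1f} km/s")  # ≈ -5.1 km/s
```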

“What we found was surprising: A jet stream rotates material around the planet’s equator, while a separate flow at lower levels of the atmosphere moves gas from the hot side to the cooler side. This kind of climate has never been seen before on any planet,” said Julia Victoria Seidel of the European Southern Observatory (ESO) in Chile, as well as the Observatoire de la Côte d’Azur in France. “This planet’s atmosphere behaves in ways that challenge our understanding of how weather works—not just on Earth, but on all planets. It feels like something out of science fiction.”

Nature, 2025. DOI: 10.1038/s41586-025-08664-1

Astronomy and Astrophysics, 2025. DOI: 10.1051/0004-6361/202452405.

Fast radio burst in long-dead galaxy puzzles astronomers

A surprising source

FRBs are of particular interest because they can be used as probes to study the large-scale structure of the universe. That’s why Calvin Leung, a postdoc at the University of California, Berkeley, was so excited to crunch data from Canada’s CHIME instrument (Canadian Hydrogen Intensity Mapping Experiment). CHIME was built for other observations but is sensitive to many of the wavelengths that make up an FRB. Unlike most radio telescopes, which focus on small points in the sky, CHIME scans a huge area, allowing it to pick out FRBs even though they almost never happen in the same place twice.

Leung was able to combine data from several different telescopes to narrow down the likely position of a repeating FRB, first detected in February 2024, located in the constellation Ursa Minor. When he and his CHIME collaborators further refined the accuracy of the location by averaging many bursts from the FRB, they discovered that this FRB originated on the outskirts of a long-dead distant galaxy. That throws a wrench into the magnetar hypothesis because why would a dead galaxy in which no new stars are forming host a magnetar?

It’s the first time an FRB has been found in such a location, and it’s also the furthest away from its galaxy. CHIME currently has two online outrigger radio arrays in place—companion telescopes to the original CHIME radio array in British Columbia. A third array comes online this week in Northern California, and according to Leung, it should enable astronomers to pinpoint FRB sources much more accurately—including this one. Data has already been incorporated from an outrigger in West Virginia, confirming the published position with a 20-times improvement in precision.

“This result challenges existing theories that tie FRB origins to phenomena in star-forming galaxies,” said co-author Vishwangi Shah, a graduate student at McGill University. “The source could be in a globular cluster, a dense region of old, dead stars outside the galaxy. If confirmed, it would make FRB 20240209A only the second FRB linked to a globular cluster.”

V. Shah et al., Astrophysical Journal Letters, 2025. DOI: 10.3847/2041-8213/ad9ddc.

T. Eftekhari et al., Astrophysical Journal Letters, 2025. DOI: 10.3847/2041-8213/ad9de2.

Fast radio bursts originate near the surface of stars

One of the two papers published on Wednesday looks at the polarization of the photons in the burst itself, finding that the angle of polarization changes rapidly over the 2.5 milliseconds that FRB 20221022A lasted. The 130-degree rotation follows an S-shaped pattern that has been seen in about half of the pulsars we’ve observed—neutron stars that rotate rapidly and sweep a bright jet across the line of sight with Earth, typically multiple times each second.
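
In pulsar studies, that S-shaped swing is conventionally fit with the rotating vector model, in which the polarization position angle ψ tracks the projected direction of the magnetic field as the beam sweeps past. One common form of the model is sketched below (α is the tilt of the magnetic axis, ζ the viewing angle, φ the rotation phase); applying the same geometry to an FRB is the paper’s inference, and no fitted values are reproduced here:

```latex
\tan\left(\psi-\psi_{0}\right) = \frac{\sin\alpha\,\sin\left(\phi-\phi_{0}\right)}{\sin\zeta\,\cos\alpha \,-\, \cos\zeta\,\sin\alpha\,\cos\left(\phi-\phi_{0}\right)}
```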

The implication of this finding is that the source of the FRB (or at least of this FRB) is likely to also be on a compact, rapidly rotating object. As of right now, this is the only FRB that we know displays this sort of behavior. While not all pulsars show this pattern of rotation, half of them do, and we’ve certainly observed enough FRBs that we should have picked up others like this if they occurred at an appreciable rate.

Scattered

The second paper performs a far more complicated analysis, searching for indications of interactions between the FRB and the interstellar medium that exists within galaxies. These interactions have two effects. One, caused by scattering off interstellar material, spreads the burst out over time in a frequency-dependent manner. Scattering can also cause random brightening and dimming of different areas of the spectrum, called scintillation, an effect somewhat analogous to the twinkling of stars caused by our atmosphere.
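
Both effects are strongly frequency-dependent. For the thin-screen scattering usually assumed in this kind of analysis, the temporal smearing of the burst grows steeply toward lower frequencies, roughly as:

```latex
\tau_{\mathrm{scatt}} \propto \nu^{-4}
```

(a Kolmogorov spectrum of turbulence steepens the exponent to about -4.4).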

In this case, the photons of the FRB have had three encounters with matter that can induce these effects: the sparse interstellar material of the source galaxy, the equally sparse interstellar material in our own Milky Way, and the even more sparse intergalactic material in between the two. Since the source galaxy for FRB 20221022A is relatively close to our own, the intergalactic medium can be ignored, leaving the detection with two major sources of scattering.

Supermassive black hole binary emits unexpected flares

“In addition to stars, gas clouds can also be disrupted by SMBHs and their binaries,” they said in the same study. “The key difference is that the clouds can be comparable to or even larger than the binary separation, unlike stars, which are always much smaller.”

The results of a previous study that numerically modeled this type of situation also pointed to a gas cloud. Just like the hypothetical supermassive black hole binary in the model, AT 2021hdr would accrete large amounts of material every time the black holes were halfway through orbiting each other and had to cross the cloud to complete the orbit—their gravity tears away some of the cloud, which ends up in their accretion disks, every time they cross it. They are now thought to take in anywhere between 3 and 30 percent of the cloud every few cycles. From a cloud so huge, that’s a lot of gas.

The supermassive black holes in AT 2021hdr are predicted to crash into each other and merge in another 70,000 years. They are also part of a larger merger: their host galaxy is gradually merging with a nearby galaxy, an interaction first discovered by the same team (this has no effect on the binary’s tidal disruption of the gas cloud).

How the behavior of AT 2021hdr develops could tell us more about its nature and uphold or disprove the idea that it is eating away at a gaseous cloud instead of a star or something else. For now, it seems these black holes don’t just get gas from what they eat—they eat the gas itself.

Astronomy & Astrophysics, 2024. DOI: 10.1051/0004-6361/202451305

Our Universe is not fine-tuned for life, but it’s still kind of OK


Inspired by the Drake equation, researchers optimize a model universe for life.

Physicists including Robert H. Dicke and Fred Hoyle have argued that we are living in a universe that is perfectly fine-tuned for life. Following the anthropic principle, they claimed that the only reason fundamental physical constants have the values we measure is because we wouldn’t exist if those values were any different. There would simply have been no one to measure them.

But now a team of British and Swiss astrophysicists has put that idea to the test. “The short answer is no, we are not in the most likely of the universes,” said Daniele Sorini, an astrophysicist at Durham University. “And we are not in the most life-friendly universe, either.” Sorini led a study aimed at establishing how different amounts of the dark energy present in a universe would affect its ability to produce stars. Stars, he assumed, are a necessary condition for intelligent life to appear.

But worry not. While our Universe may not be the best for life, the team says it’s still pretty OK-ish.

Expanding the Drake equation

Back in the 1960s, Frank Drake, an American astrophysicist and astrobiologist, proposed an equation aimed at estimating the number of intelligent civilizations in our Universe. The equation started with stars as a precondition for life and worked its way down in scale from there. How many new stars appear in the Universe per year? How many of those stars are orbited by planets? How many of those planets are habitable? How many of those habitable planets can develop life? Eventually, you’re left with an estimate of the number of planets hosting intelligent civilizations.
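
For reference, Drake’s equation multiplies exactly those factors together. In its standard form:

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```

Here R* is the rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per planetary system, f_l, f_i, and f_c the fractions of those that go on to develop life, intelligence, and detectable communication, and L the average lifetime of a communicating civilization.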

The problem with the Drake equation was that it wasn’t really supposed to yield a definite number. We couldn’t—and still can’t—know the values for most of its variables, like the fraction of the planets that developed life. So far, we know of only one such planet, and you can’t infer any statistical probabilities when you only have one sample. The equation was meant more as a guide for future researchers, giving them ideas of what to look for in their search for extraterrestrial life.

But even without knowing the actual values of all those variables present in the Drake equation, one thing was certain: The more stars you had at the beginning, the better the odds for life were. So Sorini’s team focused on stars.

“Our work is connected to the Drake equation in that it relies on the same logic,” Sorini said. “The difference is we are not adding to the life side of the equation. We’re adding to the stars’ side of the equation.” His team attempted to identify the basic constituents of a universe that’s good at producing stars.

“By ‘constituents,’ I mean ordinary matter, the stuff we are made of—the dark matter, which is a weirder, invisible type of matter, and the dark energy, which is what is making the expansion of a universe proceed faster and faster,” Sorini explained. Of all those constituents, his team found that dark energy has a key influence on the star formation rate.

Into the multiverse

Dark energy accelerates the expansion of the Universe, counteracting gravity and pushing matter further apart. If there’s enough dark energy, it would be difficult to form the dark matter web that structures galaxies. “The idea is ‘more dark energy, fewer galaxies—so fewer stars,’” Sorini said.

The effect of dark energy in a universe can be modeled by a number called the cosmological constant. “You could reinterpret it as a form of energy that can make your universe expand faster,” Sorini said.

(The cosmological constant was originally a number Albert Einstein came up with to fix the fact that his theory of general relativity caused the expansion of what was thought to be a static universe. Einstein later learned that the Universe actually was expanding and declared the cosmological constant his greatest blunder. But the idea eventually managed to make a comeback after it was discovered that the Universe’s expansion is accelerating.)
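
Formally, the constant Λ enters Einstein’s field equations as an extra term alongside the curvature; with a positive value, it acts as a large-scale repulsion that accelerates the expansion:

```latex
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

where G_μν encodes the curvature of spacetime and T_μν its matter and energy content.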

The cosmological constant was one of the variables Sorini’s team manipulated to determine if we are living in a universe that is maximally efficient at producing stars. Sorini based this work on an idea put forward by Steven Weinberg, a Nobel Prize-winning physicist, back in 1989. “Weinberg proposed that there could be a multiverse of all possible universes, each with a different value of dark energy,” Sorini explained. His team modeled that multiverse, composed of thousands upon thousands of possible universes, each complete with a past and future.

Cosmological fluke

To simulate the history of all those universes, Sorini used a slightly modified version of a star formation model he developed back in 2021 with John A. Peacock, a British astronomer at the University of Edinburgh, Scotland, and co-author of the study. It wasn’t the most precise model, but the approximations it suggested produced a universe that was reasonably close to our own. The team validated the results by predicting the stellar mass fraction in the total mass of the Milky Way Galaxy, which we know stands somewhere between 2.2 and 6.6 percent. The model came up with 6.7 percent, which was deemed good enough for the job.

In the next step, Sorini and his colleagues defined a large set of possible universes in which the value of the cosmological constant ranged from a tiny fraction of the one we observe in our Universe all the way up to a value 100,000 times higher than our own.

It turned out our Universe was not the best at producing stars. But it was decent.

“The value of the cosmological constant in the most life-friendly universe would be measured at roughly one-tenth of the value we observe in our own,” Sorini said.

In a universe like that, the fraction of the matter that gets turned into stars would stand at 27 percent. “But we don’t seem to be that far from the optimal value. In our Universe, stars are formed with around 23 percent of the matter,” Sorini said.

The last question the team addressed was how lucky we are to even be here. According to Sorini’s calculations, if all universes in the multiverse are equally likely, the chances of having a cosmological constant at or below the value present in our Universe are just 0.5 percent. In other words, we rolled the dice and got a pretty good score, although it could have been a bit better. The odds of getting a cosmological constant at one-tenth of our own or lower were just 0.2 percent.

Things also could have been much worse. The flip side of these odds is that the number of possible universes that are worse than our own vastly exceeds the number of universes that are better.

“That is of course all subject to the assumptions of our model, and the only assumption about life we made was that more stars lead to higher chances for life to appear,” Sorini said. In the future, his team plans to go beyond that idea and make the model more sophisticated by considering more parameters. “For example, we could ask ourselves what the chances are of producing carbons in order to have life as we know it or something like that,” Sorini said.

Monthly Notices of the Royal Astronomical Society, 2024. DOI: 10.1093/mnras/stae2236

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

Nearly three years since launch, Webb is a hit among astronomers

From its halo-like orbit nearly a million miles from Earth, the James Webb Space Telescope is seeing farther than human eyes have ever seen.

In May, astronomers announced that Webb detected the most distant galaxy found so far, a fuzzy blob of red light that we see as it existed just 290 million years after the Big Bang. Light from this galaxy, whose mass is several hundred million times that of the Sun, traveled more than 13 billion years until the photons fell onto Webb’s gold-coated mirror.

A few months later, in July, scientists released an image Webb captured of a planet circling a star slightly cooler than the Sun nearly 12 light-years from Earth. The alien world is several times the mass of Jupiter and the closest exoplanet to ever be directly imaged. One of Webb’s science instruments has a coronagraph to blot out bright starlight, allowing the telescope to resolve the faint signature of a nearby planet and use spectroscopy to measure its chemical composition.

These are just a taste of the discoveries made by the $10 billion Webb telescope since it began science observations in 2022. Judging by astronomers’ interest in using Webb, there are many more to come.

Breaking records

The Space Telescope Science Institute, which operates Webb on behalf of NASA and its international partners, said last week that it received 2,377 unique proposals from science teams seeking observing time on the observatory. The institute released a call for proposals earlier this year for the so-called “Cycle 4” series of observations with Webb.

This volume of proposals represents around 78,000 hours of observing time with Webb, nine times more than the telescope’s available capacity for scientific observations in this cycle. The previous observing cycle had a similar “oversubscription rate” but had less overall observing time available to the science community.

Researchers spot black hole feeding at 40x its theoretical limit


Similar feeding events could explain the rapid growth of supermassive black holes.

How did supermassive black holes end up at the center of every galaxy? A while back, it wasn’t that hard to explain: That’s where the highest concentration of matter is, and the black holes had billions of years to feed on it. But as we’ve looked ever deeper into the Universe’s history, we keep finding supermassive black holes, which shortens the timeline for their formation. Rather than making a leisurely meal of nearby matter, these black holes have gorged themselves in a feeding frenzy.

With the advent of the Webb Space Telescope, the problem has pushed up against theoretical limits. The matter falling into a black hole generates radiation, with faster feeding meaning more radiation. And that radiation can drive off nearby matter, choking off the black hole’s food supply. That sets a limit on how fast black holes can grow unless matter is somehow fed directly into them. The Webb was used to identify early supermassive black holes that needed to have been pushing against the limit for their entire existence.

But the Webb may have just identified a solution to the dilemma as well. It has spotted a black hole that appears to have been feeding at 40 times the theoretical limit for millions of years, allowing growth at a pace sufficient to build a supermassive black hole.

Setting limits

Matter falling into a black hole generally gathers into what’s called an accretion disk, orbiting the body and heating up due to collisions with the rest of the disk, all while losing energy in the form of radiation. Eventually, if enough energy is lost, the material falls into the black hole. The more matter there is, the brighter the accretion disk gets, and the more matter that gets driven off before it can fall in. The point where the radiation pressure drives away as much matter as the black hole pulls in is called the Eddington Limit. The bigger the black hole, the higher this limit.
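
For a black hole accreting ionized hydrogen, that balance point has a standard textbook form, growing linearly with the black hole’s mass M:

```latex
L_{\mathrm{Edd}} = \frac{4\pi\, G\, M\, m_{p}\, c}{\sigma_{T}} \approx 1.26\times10^{38}\left(\frac{M}{M_{\odot}}\right)\ \mathrm{erg\,s^{-1}}
```

Here m_p is the proton mass and σ_T the Thomson scattering cross-section for electrons, the main source of radiation pressure in an ionized gas.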

It is possible to exceed the Eddington Limit if matter falls directly into the black hole without spending time in the accretion disk, but it requires a fairly distinct configuration of nearby clouds of gas, something that’s unlikely to persist for more than a few million years.

That creates a problem for supermassive black holes. The only way we know to form a black hole—the death of a massive star in a supernova—tends to produce them with only a few times the mass of the Sun. Even assuming unusually massive stars in the early Universe, along with a few black hole mergers, it’s expected that most of the potential seeds of a supermassive black hole are in the area of 100 times the Sun’s mass. There are theoretical ideas about the direct collapse of gas clouds that avoid the intervening star formation and immediately form a black hole with 10,000 times the mass of the Sun or more, but they remain entirely hypothetical.

In either case, black holes would need to suck down a lot of matter before reaching supermassive proportions. But most of the early supermassive black holes spotted using the Webb are feeding at roughly 20 percent of the Eddington limit, based on their lack of X-ray emissions. This either means that they fed at well beyond the Eddington Limit earlier in their history or that they started their existences as very heavy black holes.

The object that’s the focus of this new report, LID-568, was first spotted using the Chandra X-ray Telescope (an observatory that was recently threatened with shutdown). LID-568 is luminous at X-ray wavelengths, which is why Chandra could spot it, and that luminosity suggests it may be feeding at an extremely high rate. Imaging in the infrared shows that it appears to be a point source, so the research team concluded that most of the light we’re seeing comes directly from the accretion disk rather than from the stars in the galaxy it occupies.

But that made it difficult to determine any details about the black hole’s environment or to figure out how old it was relative to the Big Bang at the time we’re viewing it. So, the researchers pointed the Webb at it to capture details that other observatories couldn’t image.

A fast eater

Use of spectroscopy revealed that we were viewing LID-568 as it existed about 1.5 billion years after the Big Bang. The emissions from gas and dust in the area were low, which suggests that the black hole resides in a dwarf galaxy. Based on the emission of hydrogen, the researchers estimate that the black hole is roughly a million times the mass of the Sun—nothing you’d want to get close to, but small compared to many supermassive black holes.

It’s actually similar in mass to a number of black holes the Webb was used to identify in galaxies that are considerably older. But it’s much, much brighter (as bright as something 10 times heavier) and includes the X-ray emissions that those lack. In fact, it’s so bright compared to its mass that the researchers estimate that it could only produce that much radiation if it were feeding at well above the Eddington Limit. Ultimately, they estimate that it’s exceeding the Eddington Limit by a factor of over 40.

Critically, the Webb was able to identify two lobes of material that were moving toward us at high velocities, based on the blue shifting of hydrogen emissions lines. These suggest that the material is moving at over 500 kilometers a second and stretched for tens of thousands of light years away from the black hole. (Presumably, these obscured similar blobs of material moving away from us.) Given their length and apparent velocity, and assuming they represent gas driven off by the black hole, the researchers estimated how long it was emitting this intense radiation.

Working back from there, they estimate the black hole’s original mass was about 100 times that of the Sun. “This lifetime suggests that a substantial fraction of the mass growth of LID-568 may have occurred in a single, super-Eddington accretion episode,” they conclude. For that to work, the black hole had to have ended up in a giant molecular cloud and stayed there feeding for over 10 million years.

The researchers suspect that this intense activity interfered with star formation in the galaxy, which is one of the reasons that it is relatively star-poor. That may explain why we see some very massive black holes at the center of relatively small galaxies in the present Universe.

So what does this mean?

In some ways, this is potentially good news for cosmologists. Forming supermassive black holes as quickly as the sizes and ages of those observed by Webb imply would seemingly require them to have fed at or slightly above the Eddington Limit for most of their history, which was easy to view as unlikely. If the Eddington Limit can be exceeded by a factor of 40 for over 10 million years, however, this seems to be less of an issue.

But, at the same time, the graph showing mass versus luminosity of supermassive black holes the research team generated shows that LID-568 is in a class by itself. If there were a lot of black holes feeding at these rates, it should be easy to identify more. And it’s a safe bet that these researchers are checking other X-ray sources to see if there are additional examples.

Nature Astronomy, 2024. DOI: 10.1038/s41550-024-02402-9.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
