

Dogs came in a wide range of sizes and shapes long before modern breeds

“The concept of ‘breed’ is very recent and does not apply to the archaeological record,” Evin said. People have, of course, been breeding dogs for particular traits for as long as we’ve had dogs, and tiny lap dogs existed even in ancient Rome. However, it’s unlikely that a Neolithic herder would have described his dog as being a distinct “breed” from his neighbor’s hunting partner, even if they looked quite different. Which, apparently, they did.


Dogs had about half of their modern diversity (at least in skull shapes and sizes) by the Neolithic. Credit: Kiona Smith

Bones only tell part of the story

“We know from genetic models that domestication should have started during the late Pleistocene,” Evin told Ars. A 2021 study suggested that domestic dogs have been a separate species from wolves for more than 23,000 years. But it took a while for differences to build up.

Evin and her colleagues had access to 17 canine skulls that ranged from 12,700 to 50,000 years old—prior to the end of the ice age—and they all looked enough like modern wolves that, as Evin put it, “for now, we have no evidence to suggest that any of the wolf-like skulls did not belong to wolves or looked different from them.” In other words, if you’re just looking at the skull, it’s hard to tell the earliest dogs from wild wolves.

We have no way to know, of course, what the living dog might have looked like. It’s worth mentioning that Evin and her colleagues found a modern Saint Bernard’s skull that, according to their statistical analysis, looked more wolf-like than dog-like. But even if it’s not offering you a brandy keg, there’s no mistaking a live Saint Bernard, with its droopy jowls and floppy ears, for a wolf.

“Skull shape tells us a lot about function and evolutionary history, but it represents only one aspect of the animal’s appearance. This means that two dogs with very similar skulls could have looked quite different in life,” Evin told Ars. “It’s an important reminder that the archaeological record captures just part of the biological and cultural story.”

And with only bones—and sparse ones, at that—to go on, we may be missing some of the early chapters of dogs’ biological and cultural story. Domestication tends to select the friendliest animals to produce the next generation, and apparently that comes with a particular set of evolutionary side effects, whether you’re studying wolves, foxes, cattle, or pigs. Spots, floppy ears, and curved tails all seem to be part of the genetic package that comes with inter-species friendliness. But none of those traits is visible in the skull.



Scientist pleaded guilty to smuggling Fusarium graminearum into US. But what is it?

Even with Fusarium graminearum, which has appeared on every continent except Antarctica, smuggling carries the potential to introduce new genetic material into the environment, material that may exist in other countries but not in the US and could have harmful consequences for crops.

How do you manage Fusarium graminearum infections?

Fusarium graminearum infections generally occur during the plant’s flowering stage or when there is more frequent rainfall and periods of high humidity during early stages of grain production.

How Fusarium graminearum risk progressed in 2025. Yellow is low risk, orange is medium risk, and red is high risk. Credit: Fusarium Risk Tool/Penn State

Wheat in the southern US is vulnerable to infection during the spring. As the season advances, the risk from scab progresses north through the US and into Canada as the grain crops mature across the region, with continued periods of conducive weather throughout the summer.

Between seasons, Fusarium graminearum survives on barley, wheat, and corn plant residues that remain in the field after harvest. It reproduces by producing microscopic spores that can then travel long distances on wind currents, spreading the fungus across large geographic areas each season.

In wheat and barley, farmers can suppress the damage by spraying a fungicide onto developing wheat heads when they’re most susceptible to infection. Applying fungicide can reduce the incidence and severity of scab, improve grain weight, and reduce mycotoxin contamination.

However, integrated approaches to managing plant diseases are generally ideal: planting barley or wheat varieties that are resistant to scab, carefully timing fungicide applications, rotating crops, and tilling the soil after harvest to reduce the residue where Fusarium graminearum can survive the winter.

Even though fungicide applications may be beneficial, fungicides offer only some protection and can’t cure scab. If the environmental conditions are extremely conducive for scab, with ample moisture and humidity during flowering, the disease will still occur, albeit at reduced levels.

Fusarium Head Blight with NDSU’s Andrew Friskop.

Plant pathologists are making progress on early warning systems for farmers. A team from Kansas State University, Ohio State University, and Pennsylvania State University has been developing a computer model to predict the risk of scab. Their wheat disease predictive model uses historic environmental data from weather stations throughout the US, along with current conditions, to develop a forecast.
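As a rough illustration of how a weather-driven disease forecast works, here is a toy sketch. The function name, inputs, and coefficients are entirely hypothetical; the actual Fusarium Risk Tool is fit to decades of weather-station data and is far more sophisticated.

```python
import math

def scab_risk(mean_rh, mean_temp_c):
    """Toy logistic risk score for Fusarium head blight (scab).

    mean_rh: average relative humidity (%) during flowering
    mean_temp_c: average temperature (Celsius) during flowering
    Coefficients are made up for illustration only.
    """
    # Warm, humid conditions during flowering favor infection.
    x = 0.15 * (mean_rh - 70) + 0.2 * (mean_temp_c - 15)
    return 1 / (1 + math.exp(-x))

print(f"{scab_risk(85, 22):.2f}")  # humid, warm flowering week -> high risk
print(f"{scab_risk(55, 12):.2f}")  # dry, cool flowering week -> low risk
```

The real model would combine many more variables (rainfall frequency, crop residue, wheat variety resistance) and validate against observed outbreaks, but the basic idea of mapping recent weather to a risk score is the same.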

In areas that are most at risk, plant pathologists and commodity specialists encourage wheat growers to apply a fungicide during periods when the fungus is likely to grow to reduce the chances of damage to crops and the spread of mycotoxin.

Tom W. Allen, associate research professor of Plant Pathology, Mississippi State University. This article is republished from The Conversation under a Creative Commons license. Read the original article.



Blue Origin’s New Glenn rocket came back home after taking aim at Mars


“Never before in history has a booster this large nailed the landing on the second try.”

Blue Origin’s 320-foot-tall (98-meter) New Glenn rocket lifts off from Cape Canaveral Space Force Station, Florida. Credit: Blue Origin

The rocket company founded a quarter-century ago by billionaire Jeff Bezos made history Thursday with the pinpoint landing of an 18-story-tall rocket on a floating platform in the Atlantic Ocean.

The on-target touchdown came nine minutes after the New Glenn rocket, built and operated by Bezos’ company Blue Origin, lifted off from Cape Canaveral Space Force Station, Florida, at 3:55 pm EST (20:55 UTC). The launch was delayed from Sunday, first due to poor weather at the launch site in Florida, then by a solar storm that sent hazardous radiation toward Earth earlier this week.

“We achieved full mission success today, and I am so proud of the team,” said Dave Limp, CEO of Blue Origin. “It turns out Never Tell Me The Odds (Blue Origin’s nickname for the first stage) had perfect odds—never before in history has a booster this large nailed the landing on the second try. This is just the beginning as we rapidly scale our flight cadence and continue delivering for our customers.”

The two-stage launcher set off for space carrying two NASA science probes on a two-year journey to Mars, marking the first time any operational satellites flew on Blue Origin’s new rocket, named for the late NASA astronaut John Glenn. The New Glenn hit its marks on the climb into space, firing seven BE-4 main engines for nearly three minutes on a smooth ascent through blue skies over Florida’s Space Coast.

Seven BE-4 engines power New Glenn downrange from Florida’s Space Coast. Credit: Blue Origin

The engines consumed super-cold liquified natural gas and liquid oxygen, producing more than 3.8 million pounds of thrust at full power. The BE-4s shut down, and the first stage booster released the rocket’s second stage, with dual hydrogen-fueled BE-3U engines, to continue the mission into orbit.

The booster soared to an altitude of 79 miles (127 kilometers), then began a controlled plunge back into the atmosphere, targeting a landing on Blue Origin’s offshore recovery vessel named Jacklyn. Moments later, three of the booster’s engines reignited to slow its descent in the upper atmosphere. Then, moments before reaching the Atlantic, the rocket again lit three engines and extended its landing gear, sinking through low-level clouds before settling onto the football field-size deck of Blue Origin’s recovery platform 375 miles (600 kilometers) east of Cape Canaveral.

A pivotal moment

The moment of touchdown appeared electric at several Blue Origin facilities around the country, which had live views of cheering employees piped into the company’s webcast of the flight. This was the first time any company besides SpaceX had propulsively landed an orbital-class rocket booster, coming nearly 10 years after SpaceX recovered its first Falcon 9 booster intact in December 2015.

Blue Origin’s New Glenn landing also came almost exactly a decade after the company landed its smaller suborbital New Shepard rocket for the first time in West Texas. Just like Thursday’s New Glenn landing, Blue Origin successfully recovered the New Shepard on its second-ever attempt.

Blue Origin’s heavy-lifter launched successfully for the first time in January. But technical problems prevented the booster from restarting its engines on descent, and the first stage crashed at sea. Engineers made “propellant management and engine bleed control improvements” to resolve the problems, and the fixes appeared to work Thursday.

The rocket recovery is a remarkable achievement for Blue Origin, which has long lagged dominant SpaceX in the commercial launch business. SpaceX has now logged 532 landings with its Falcon booster fleet. Now, with just a single recovery in the books, Blue Origin sits second in the rankings for propulsive landings of orbital-class boosters. Bezos’ company has amassed 34 landings of the suborbital New Shepard model, which lacks the size and doesn’t reach the altitude and speed of the New Glenn booster.

Blue Origin landed a New Shepard returning from space for the first time in November 2015, a few weeks before SpaceX first recovered a Falcon 9 booster. Bezos threw shade on SpaceX with a post on Twitter, now called X, after the first Falcon 9 landing: “Welcome to the club!”

Jeff Bezos, Blue Origin’s founder and owner, wrote this message on Twitter following SpaceX’s first Falcon 9 landing on December 21, 2015. Credit: X/Jeff Bezos

Finally, after Thursday, Blue Origin officials can say they are part of the same reusable rocket club as SpaceX. Within a few days, Blue Origin’s recovery vessel is expected to return to Port Canaveral, Florida, where ground crews will offload the New Glenn booster and move it to a hangar for inspections and refurbishment.

“Today was a tremendous achievement for the New Glenn team, opening a new era for Blue Origin and the industry as we look to launch, land, repeat, again and again,” said Jordan Charles, the company’s vice president for the New Glenn program, in a statement. “We’ve made significant progress on manufacturing at rate and building ahead of need. Our primary focus remains focused on increasing our cadence and working through our manifest.”

Blue Origin plans to reuse the same booster next year for the first launch of the company’s Blue Moon Mark 1 lunar cargo lander. This mission is currently penciled in to be next on Blue Origin’s New Glenn launch schedule. Eventually, the company plans to have a fleet of reusable boosters, like SpaceX has with the Falcon 9, that can each be flown up to 25 times.

New Glenn is a core element in Blue Origin’s architecture for NASA’s Artemis lunar program. The rocket will eventually launch human-rated lunar landers to the Moon to provide astronauts with rides to and from the surface of the Moon.

The US Space Force will also examine the results of Thursday’s launch to assess New Glenn’s readiness to begin launching military satellites. The military selected Blue Origin last year to join SpaceX and United Launch Alliance as a third launch provider for the Defense Department.

Blue Origin’s New Glenn booster, 23 feet (7 meters) in diameter, on the deck of the company’s landing platform in the Atlantic Ocean.

Slow train to Mars

The mission wasn’t over with the buoyant landing in the Atlantic. New Glenn’s second stage fired its engines twice to propel itself on a course toward deep space, setting up for deployment of NASA’s two ESCAPADE satellites a little more than a half-hour after liftoff.

The identical satellites were released from their mounts on top of the rocket to begin their nearly two-year journey to Mars, where they will enter orbit to survey how the solar wind interacts with the rarefied uppermost layers of the red planet’s atmosphere. Scientists believe radiation from the Sun gradually stripped away Mars’ atmosphere, driving runaway climate change that transitioned the planet from a warm, habitable world to the global inhospitable desert seen today.

“I’m both elated and relieved to see NASA’s ESCAPADE spacecraft healthy post-launch and looking forward to the next chapter of their journey to help us understand Mars’ dynamic space weather environment,” said Rob Lillis, the mission’s principal investigator from the University of California, Berkeley.

Scientists want to understand the environment at the top of the Martian atmosphere to learn more about what drove this change. With two instrumented spacecraft, ESCAPADE will gather data from different locations around Mars, providing a series of multipoint snapshots of solar wind and atmospheric conditions. Another NASA spacecraft, named MAVEN, has collected similar data since arriving in orbit around Mars in 2014, but it is only a single observation post.

ESCAPADE, short for Escape and Plasma Acceleration and Dynamics Explorers, was developed and launched on a budget of about $80 million, a bargain compared to all of NASA’s recent Mars missions. The spacecraft were built by Rocket Lab, and the project is managed on behalf of NASA by the University of California, Berkeley.

The two spacecraft for NASA’s ESCAPADE mission at Rocket Lab’s factory in Long Beach, California. Credit: Rocket Lab

NASA paid Blue Origin about $20 million for the launch of ESCAPADE, significantly less than it would have cost to launch it on any other dedicated rocket. The space agency accepted the risk of launching on the relatively unproven New Glenn rocket, which hasn’t yet been certified by NASA or the Space Force for the government’s marquee space missions.

The mission was supposed to launch last year, when Earth and Mars were in the right positions to enable a direct trip between the planets. But Blue Origin delayed the launch, forcing a yearlong wait until the company’s second New Glenn was ready to fly. Now, the ESCAPADE satellites, each about a half-ton in mass fully fueled, will loiter in a unique orbit more than a million miles from Earth until next November, when they will set off for the red planet. ESCAPADE will arrive at Mars in September 2027 and begin its science mission in 2028.

Rocket Lab ground controllers established communication with the ESCAPADE satellites late Thursday night.

“The ESCAPADE mission is part of our strategy to understand Mars’ past and present so we can send the first astronauts there safely,” said Nicky Fox, associate administrator of NASA’s Science Mission Directorate. “Understanding Martian space weather is a top priority for future missions because it helps us protect systems, robots, and most importantly, humans, in extreme environments.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Tracking the winds that have turned Mars into a planet of dust

Where does all this dust come from? It’s thought to be the result of erosion caused by the winds. Because the Martian atmosphere is so thin, dust particles can be difficult to move, but larger particles can become more easily airborne if winds are turbulent enough, later taking smaller dust motes with them. Perseverance and previous Mars rovers have mostly witnessed wind vortices that were associated with either dust devils or convection, during which warm air rises.

CaSSIS and HRSC data showed that most dust devils occur in the northern hemisphere of Mars, mainly in the Amazonis and Elysium Planitiae, with Amazonis Planitia being a hotspot. They can be kicked up by winds on both rough and smooth terrain, but they tend to spread farther in the southern hemisphere, with some traveling across nearly that entire half of the planet. Seasonal occurrence of dust devils is highest during the southern summer, while they are almost nonexistent during the late northern fall.

Martian dust devils tend to peak between mid-morning and midafternoon, though they can occur from early morning through late afternoon. They also migrate toward the Martian north pole in the northern summer and toward the south pole during the southern summer. Southern dust devils tend to move faster than those in the northern hemisphere. Movement determined by winds can be as fast as 44 meters per second (about 98 mph), which is much faster than dust devils move on Earth.
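A quick unit check on the wind speed reported above; the only numbers here are the article’s 44 m/s figure and standard conversion constants.

```python
# Convert the reported wind-driven dust devil speed from m/s to mph
speed_ms = 44.0                          # meters per second (from the article)
meters_per_mile = 1609.344               # standard conversion factor
speed_mph = speed_ms * 3600 / meters_per_mile  # m/s -> m/h -> mph
print(round(speed_mph))  # 98, matching the "about 98 mph" figure
```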

Weathering the storm

Dust devils have also been found to accelerate extremely rapidly on the red planet. These fierce storms are associated with winds that travel along with them but do not form a vortex, known as nonvortical winds. It only takes a few seconds for these winds to accelerate to velocities high enough that they’re able to lift dust particles from the ground and transfer them to the atmosphere. It is not only dust devils that do this—the team found that even nonvortical winds lift large amounts of dust particles on their own, more than was previously thought, and create a dusty haze in the atmosphere.



With another record broken, the world’s busiest spaceport keeps getting busier


It’s not just the number of rocket launches, but how much stuff they’re carrying into orbit.

With 29 Starlink satellites onboard, a Falcon 9 rocket streaks through the night sky over Cape Canaveral Space Force Station, Florida, on Monday night. Credit: Stephen Clark/Ars Technica

CAPE CANAVERAL, Florida—Another Falcon 9 rocket fired off its launch pad here on Monday night, taking with it another 29 Starlink Internet satellites to orbit.

This was the 94th orbital launch from Florida’s Space Coast so far in 2025, breaking the previous record for the most orbital launches in a calendar year from the world’s busiest spaceport. Monday night’s launch came two days after a Chinese Long March 11 rocket lifted off from an oceangoing platform on the opposite side of the world, marking humanity’s 255th mission to reach orbit this year, a new annual record for global launch activity.

As of Wednesday, a handful of additional missions have pushed the global figure this year to 259, putting the world on pace for around 300 orbital launches by the end of 2025. That would be more than double the global tally of 135 orbital launches in 2021.
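The “around 300” projection is straight extrapolation; a minimal sketch, where the day-of-year value is an assumption based on the article’s mid-November timeframe:

```python
# Extrapolate the annual launch total from the year-to-date count
launches_so_far = 259
day_of_year = 316  # assumption: "as of Wednesday" in mid-November 2025

projected = round(launches_so_far / day_of_year * 365)
print(projected)            # 299, i.e. "around 300"
print(projected > 2 * 135)  # True: more than double 2021's global tally
```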

Routine vs. complacency

Waiting in the darkness a few miles away from the launch pad, I glanced around at my surroundings before watching SpaceX’s Falcon 9 thunder into the sky. There were no throngs of space enthusiasts anxiously waiting for the rocket to light up the night. No line of photographers snapping photos. Just this reporter and two chipper retirees enjoying what a decade ago would have attracted far more attention.

Go to your local airport and you’ll probably find more people posted up at a plane-spotting park at the end of the runway. Still, a rocket launch is something special. On the same night that I watched the 94th launch of the year depart from Cape Canaveral, Orlando International Airport saw the same number of airplane departures in just three hours.

The crowds still turn out for more meaningful launches, such as a test flight of SpaceX’s Starship megarocket in Texas or Blue Origin’s attempt to launch its second New Glenn heavy-lifter here Sunday. But those are not the norm. Generations of aerospace engineers were taught that spaceflight is not routine for fear of falling into complacency, leading to failure, and in some cases, death.

Compared to air travel, the mantra remains valid. Rockets are unforgiving, with engines operating under extreme pressures, at high thrust, and unable to suck in oxygen from the atmosphere as a reactant for combustion. There are fewer redundancies in a rocket than in an airplane.

The Falcon 9’s established failure rate is less than 1 percent, well short of any safety standard for commercial air travel but good enough to make it the most successful orbital-class rocket in history. Given the Falcon 9’s track record, SpaceX seems to have found a way to overcome the temptation for complacency.

A Chinese Long March 11 rocket carrying three Shiyan 32 test satellites lifts off from waters off the coast of Haiyang in eastern China’s Shandong province on Saturday. Credit: Guo Jinqi/Xinhua via Getty Images

Following the trend

The upward trend in rocket launches hasn’t always been the case. Launch numbers were steady for most of the 2010s, following a downward trend in the 2000s, with as few as 52 orbital launches in 2005, the lowest number since the nascent era of spaceflight in 1961. There were just seven launches from here in Florida that year.

The numbers have picked up dramatically in the last five years as SpaceX has mastered reusable rocketry.

It’s important to look at not just the number of launches but also how much stuff rockets are actually putting into orbit. More than half of this year’s launches were performed using SpaceX’s Falcon 9 rocket, and the majority of those deployed Starlink satellites for SpaceX’s global Internet network. Each spacecraft is relatively small in size and weight, but SpaceX stacks up to 29 of them on a single Falcon 9 to max out the rocket’s carrying capacity.

All this mass adds up to make SpaceX’s dominance of the launch industry appear even more absolute. According to analyses by BryceTech, an engineering and space industry consulting firm, SpaceX has launched 86 percent of all the world’s payload mass over the 18 months from the beginning of 2024 through June 30 of this year.

That’s roughly 2.98 million kilograms of the approximately 3.46 million kilograms (3,281 of 3,819 tons) of satellite hardware and cargo that all the world’s rockets placed into orbit during that timeframe.
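The 86 percent share follows directly from those mass figures; a quick check using only the numbers quoted above:

```python
# Reproduce the BryceTech-derived payload mass share from the article
spacex_kg = 2.98e6  # SpaceX payload mass, Jan 2024 through June 2025
total_kg = 3.46e6   # all launch providers combined, same period

print(f"{spacex_kg / total_kg:.0%}")  # 86%
```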

The charts below were created by Ars Technica using publicly available launch numbers and payload mass estimates from BryceTech. The first illustrates the rising launch cadence at Cape Canaveral Space Force Station and NASA’s Kennedy Space Center, located next to one another in Florida. Launches from other US-licensed spaceports, primarily Vandenberg Space Force Base, California, and Rocket Lab’s base at Māhia Peninsula in New Zealand, are also on the rise.

These numbers represent rockets that reached low-Earth orbit. We didn’t include test flights of SpaceX’s Starship rocket in the chart because all of its launches have intentionally flown on suborbital trajectories.

In the second chart, we break down the payload upmass to orbit from SpaceX, other US companies, China, Russia, and other international launch providers.

Launch rates are on a clear upward trend, while SpaceX has launched 86 percent of the world’s total payload mass to orbit since the beginning of 2024. Credit: Stephen Clark/Ars Technica/BryceTech

Will it continue?

It’s a good bet that payload upmass will continue to rise in the coming years, with heavy cargo heading to orbit to further expand SpaceX’s Starlink communications network and build out new megaconstellations from Amazon, China, and others. The US military’s Golden Dome missile defense shield will also have a ravenous appetite for rockets to get it into space.

SpaceX’s Starship megarocket could begin flying to low-Earth orbit next year, and if it does, SpaceX’s preeminence in delivering mass to orbit will remain assured. Starship’s first real payloads will likely be SpaceX’s next-generation Starlink satellites. These larger, heavier, more capable spacecraft will launch 60 at a time on Starship, further stretching SpaceX’s lead in the upmass war.

But Starship’s arrival will come at the expense of the workhorse Falcon 9, which lacks the capacity to haul the next-gen Starlinks to orbit. “This year and next year I anticipate will be the highest Falcon launch rates that we will see,” said Stephanie Bednarek, SpaceX’s vice president of commercial sales, at an industry conference in July.

SpaceX is on pace for between 165 and 170 Falcon 9 launches this year, with 144 flights already in the books for 2025. Last year’s total for Falcon 9 and Falcon Heavy was 134 missions. SpaceX has not announced how many Falcon 9 and Falcon Heavy launches it plans for next year.

Starship is designed to be fully and rapidly reusable, eventually enabling multiple flights per day. But that’s still a long way off, and it’s unknown how many years it might take for Starship to surpass the Falcon 9’s proven launch tempo.

A Starship rocket and Super Heavy booster lift off from Starbase, Texas. Credit: SpaceX

In any case, with Starship’s heavy-lifting capacity and upgraded next-gen satellites, SpaceX could match an entire year’s worth of new Starlink capacity with just two fully loaded Starship flights. Starship will be able to deliver 60 times more Starlink capacity to orbit than a cluster of satellites riding on a Falcon 9.

There’s no reason to believe SpaceX will be satisfied with simply keeping pace with today’s Starlink growth rate. There are emerging market opportunities in connecting satellites with smartphones, space-based computer processing and data storage, and military applications.

Other companies have medium-to-heavy rockets that are either new to the market or soon to debut. These include Blue Origin’s New Glenn, now set to make its second test flight in the coming days, with a reusable booster designed to facilitate a rapid-fire launch cadence.

Despite all of the newcomers, most satellite operators see a shortage of launch capacity on the commercial market. “The industry is likely to remain supply-constrained through the balance of the decade,” wrote Caleb Henry, director of research at the industry analysis firm Quilty Space. “That could pose a problem for some of the many large constellations on the horizon.”

United Launch Alliance’s Vulcan rocket, Rocket Lab’s Neutron, Stoke Space’s Nova, Relativity Space’s Terran R, and Firefly Aerospace and Northrop Grumman’s Eclipse are among the other rockets vying for a bite at the launch apple.

“Whether or not the market can support six medium to heavy lift launch providers from the US alone (plus Starship) is an open question, but for the remainder of the decade launch demand is likely to remain high, presenting an opportunity for one or more new players to establish themselves in the pecking order,” Henry wrote in a post on Quilty’s website.

China’s space program will need more rockets, too. That nation’s two megaconstellations, known as Guowang and Qianfan, will have thousands of satellites, requiring a significant uptick in Chinese launches.

Taking all of this into account, the demand curve for access to space is sure to continue its upward trajectory. How companies meet this demand, and with how many discrete departures from Earth, isn’t quite as clear.




Quantum roundup: Lots of companies announcing new tech


More superposition, less supposition

IBM follows through on its June promises, plus more trapped ion news.

IBM has moved to large-scale manufacturing of its Quantum Loon chips. Credit: IBM

The end of the year is usually a busy time in the quantum computing arena, as companies often try to announce that they’ve reached major milestones before the year wraps up. This year has been no exception. And while not all of these announcements involve interesting new architectures like the one we looked at recently, they’re a good way to mark progress in the field, and they often involve the sort of smaller, incremental steps needed to push the field forward.

What follows is a quick look at a handful of announcements from the past few weeks that struck us as potentially interesting.

IBM follows through

IBM is one of the companies announcing a brand-new architecture this year. That’s not a surprise, given that the company promised to do so back in June; this week, it confirmed that it has built the two processors it described then. These include one called Loon, which is focused on the architecture that IBM will use to host error-corrected logical qubits. Loon represents two major changes for the company: a shift to nearest-neighbor connections and the addition of long-distance connections.

IBM had previously used what it termed the “heavy hex” architecture, in which alternating qubits were connected to either two or three of their neighbors, forming a set of overlapping hexagonal structures. In Loon, the company is using a square grid, with each qubit having connections to its four closest neighbors. This higher density of connections can enable more efficient use of the qubits during computations. But qubits in Loon have additional long-distance connections to other parts of the chip, which will be needed for the specific type of error correction that IBM has committed to. It’s there to allow users to test out a critical future feature.
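The difference in connectivity is easy to see by counting couplers. Below is a minimal sketch of a square-grid, nearest-neighbor coupling map of the kind described above; the indexing scheme is illustrative rather than IBM’s own, and Loon’s extra long-range couplers are omitted.

```python
def square_grid_coupling(rows, cols):
    """Edges of a rows x cols lattice where each qubit couples to its
    nearest neighbors (up/down/left/right). Qubits are numbered
    row-major, so qubit (r, c) has index r * cols + c."""
    idx = lambda r, c: r * cols + c
    edges = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:  # coupler to right-hand neighbor
                edges.append((idx(r, c), idx(r, c + 1)))
            if r + 1 < rows:  # coupler to neighbor below
                edges.append((idx(r, c), idx(r + 1, c)))
    return edges

print(len(square_grid_coupling(3, 3)))  # 12 couplers for a 3x3 patch
```

In this layout every interior qubit has degree four, versus the two or three neighbors per qubit of the older heavy-hex arrangement, which is what enables the denser operations the article mentions.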

The second processor, Nighthawk, is focused on the now. It also has the nearest-neighbor connections and a square grid structure, but it lacks the long-distance connections. Instead, the focus with Nighthawk is to get error rates down so that researchers can start testing algorithms for quantum advantage—computations where quantum computers have a clear edge over classical algorithms.

In addition, the company is launching a GitHub repository that will allow the community to deposit code and performance data for both classical and quantum algorithms, enabling rigorous evaluations of relative performance. Right now, those are broken down into three categories of algorithms that IBM expects are most likely to demonstrate a verifiable quantum advantage.

This isn’t the only follow-up to IBM’s June announcement, which also saw the company describe the algorithm it would use to identify errors in its logical qubits and the corrections needed to fix them. In late October, the company said it had confirmed that the algorithm could work in real time when run on an FPGA made in collaboration with AMD.

Record lows

A few years back, we reported on a company called Oxford Ionics, which had just announced that it achieved a record low error rate in some qubit operations using trapped ions. Most trapped-ion quantum computers move qubits by manipulating electromagnetic fields, but they perform computational operations using lasers. Oxford Ionics figured out how to perform operations using electromagnetic fields as well, meaning more of its processing benefited from our ability to precisely manufacture circuitry (lasers were still needed for tasks like producing a readout of the qubits). And as we noted, it could perform these computational operations with very low error rates.

But Oxford Ionics never made a major announcement that would give us a good excuse to describe its technology in more detail. The company was ultimately acquired by IonQ, a competitor in the trapped-ion space.

Now, IonQ is building on what it gained from Oxford Ionics, announcing a record-low error rate for two-qubit gates: a fidelity greater than 99.99 percent, or fewer than one error per 10,000 operations. That could be critical for the company, as a low error rate for hardware qubits means fewer of them are needed to get good performance from error-corrected qubits.
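
For a sense of why that matters, here is a rough, illustrative sketch (not IonQ's or any vendor's actual math). It uses the textbook relation error rate ≈ 1 minus fidelity, plus a common rule-of-thumb for surface-code error correction in which the logical error rate falls as (p/p_th)^((d+1)/2) with code distance d; the 1 percent threshold and the prefactor are assumed round numbers:

```python
def distance_needed(p_phys, p_target, p_thresh=1e-2, prefactor=0.1):
    """Smallest (odd) surface-code distance d for which the rule-of-thumb
    logical error rate, prefactor * (p_phys / p_thresh) ** ((d + 1) / 2),
    drops below p_target. All constants here are illustrative guesses."""
    if p_phys >= p_thresh:
        raise ValueError("physical error rate must be below threshold")
    ratio = p_phys / p_thresh
    d = 3
    while prefactor * ratio ** ((d + 1) / 2) > p_target:
        d += 2
    return d

# Compare a 99.9%-fidelity gate (error 1e-3) with a 99.99% gate (error 1e-4):
for p_phys in (1e-3, 1e-4):
    d = distance_needed(p_phys, p_target=1e-12)
    print(f"error {p_phys:.0e}: distance {d}, "
          f"~{2 * d * d - 1} physical qubits per logical qubit")
```

Under these toy numbers, a tenfold drop in the physical error rate roughly halves the code distance needed, and the physical-qubit overhead shrinks severalfold, which is why record-low gate errors matter so much for scaling.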

But the details of the two-qubit gates are perhaps more interesting than the error rate. Two-qubit gates require bringing the two qubits into close proximity, which often means moving them. That motion pumps a bit of energy into the system, raising the ions' temperature and leaving them slightly more prone to errors. As a result, any movement of the ions is generally followed by cooling, in which lasers are used to bleed energy back out of the qubits.

This process, which involves two distinct cooling steps, is slow. So slow that as much as two-thirds of the time spent in operations involves the hardware waiting around while recently moved ions are cooled back down. The new IonQ announcement includes a description of a method for performing two-qubit gates that doesn't require the ions to be fully cooled, allowing one of the two cooling steps to be skipped entirely. In fact, coupled with earlier work involving one-qubit gates, it raises the possibility that the entire machine could operate with its ions at a still very cold but slightly elevated temperature, with that cooling step eliminated across the board.

That would shorten operation times and let researchers do more before the limit of a quantum system’s coherence is reached.

State of the art?

The last announcement comes from another trapped-ion company, Quantum Art. A couple of weeks back, it announced a collaboration with Nvidia that resulted in a more efficient compiler for operations on its hardware. On its own, this isn’t especially interesting. But it’s emblematic of a trend that’s worth noting, and it gives us an excuse to look at Quantum Art’s technology, which takes a distinct approach to boosting the efficiency of trapped-ion computation.

First, the trend: Nvidia's interest in quantum computing. The company isn't interested in the quantum aspects (at least not publicly); instead, it sees an opportunity to get further entrenched in high-performance computing. There are three areas where the computational capacity of GPUs can play a role here. One is small-scale modeling of quantum processors so that users can perform initial testing of algorithms without committing to paying for access to the real thing. Another is what Quantum Art is announcing: using GPUs as part of a compiler chain to do all the computations needed to find more efficient ways of executing an algorithm on specific quantum hardware.
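
To illustrate the first of those roles: a processor with only a few qubits can be modeled exactly on classical hardware by multiplying a state vector by gate matrices, which is the kind of linear algebra GPUs excel at. A minimal NumPy sketch (not tied to any Nvidia product) that prepares a two-qubit entangled state:

```python
import numpy as np

# Gate matrices: Hadamard (H) and controlled-NOT (CNOT)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                         # start in |00>
state = np.kron(H, np.eye(2)) @ state  # Hadamard on the first qubit
state = CNOT @ state                   # entangle the pair
print(state)                           # equal amplitudes on |00> and |11>
```

Each added qubit doubles the length of the state vector, which is why exact simulation tops out at a few dozen qubits even on large GPU clusters, and why real hardware remains the goal.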

Finally, there’s a potential role in error correction. Error correction involves some indirect measurements of a handful of hardware qubits to determine the most likely state that a larger collection (called a logical qubit) is in. This requires modeling a quantum system in real time, which is quite difficult—hence the computational demands that Nvidia hopes to meet. Regardless of the precise role, there has been a steady flow of announcements much like Quantum Art’s: a partnership with Nvidia that will keep the company’s hardware involved if the quantum technology takes off.
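
The inference step can be illustrated with a toy example. Real machines use far larger codes and, as noted above, specialized hardware to keep up, but a three-bit repetition code shows the basic move: measure parities between neighboring bits (the indirect measurements) and use that pattern to work out which bit flipped, without ever reading the encoded value directly. This is a classical sketch, not any company's actual decoder:

```python
def syndromes(bits):
    """Parity checks between neighboring bits: the indirect measurements
    that flag an error without revealing the encoded logical value."""
    return tuple(bits[i] ^ bits[i + 1] for i in range(len(bits) - 1))

def correct(bits):
    """Decode a 3-bit repetition code: the syndrome pattern pinpoints
    which single bit (if any) flipped, so we can flip it back."""
    flipped = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndromes(bits))
    out = list(bits)
    if flipped is not None:
        out[flipped] ^= 1
    return out

print(correct([0, 1, 0]))  # middle bit flipped; decoder restores [0, 0, 0]
```

The hard part in practice is doing the analogous inference for thousands of qubits within a qubit's coherence time, which is where GPU- and FPGA-class hardware comes in.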

In Quantum Art’s case, that technology is a bit unusual. The trapped-ion companies we’ve covered so far are all taking different routes to the same place: moving one or two ions into a location where operations can be performed and then executing one- or two-qubit gates. Quantum Art’s approach is to perform gates with much larger collections of ions. At the compiler level, it would be akin to figuring out which qubits need a specific operation performed, clustering them together, and doing it all at once. Obviously, there are potential efficiency gains here.

The challenge would normally be moving so many qubits around to create these clusters. But Quantum Art uses lasers to “pin” ions at chosen points along a row, isolating the ions on one side of each pin from those on the other. Each cluster can then be operated on separately. In between operations, the pins can be moved to new locations, creating different clusters for the next set of operations. (Quantum Art calls each cluster of ions a “core” and presents this as multicore quantum computing.)

At the moment, Quantum Art is behind some of its competitors in terms of qubit count and performing interesting demonstrations, and it's not pledging to scale quite as fast. But the company's founders are convinced that the complexity of performing so many individual operations and moving so many ions around will eventually catch up with those competitors, while the added efficiency of multi-qubit gates will allow Quantum Art to scale better.

This is just a small sampling of all the announcements from this fall, but it should give you a sense of how rapidly the field is progressing—from technology demonstrations to identifying cases where quantum hardware has a real edge and exploring ways to sustain progress beyond those first successes.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Quantum roundup: Lots of companies announcing new tech


Corals survived past climate changes by retreating to the deeps


A recent die-off in Florida puts the spotlight on corals’ survival strategies.

Scientists have found that the 2023 marine heat wave caused the “functional extinction” of two Acropora reef-building coral species living in the Florida Reef, which stretches from the Dry Tortugas National Park to Miami.

“At this point, we do not think there’s much of a chance for natural recovery—their numbers are so low that successful reproduction is incredibly unlikely,” said Ross Cunning, a coral biologist at the John G. Shedd Aquarium.

This isn't the first time corals have faced the brink of extinction over the last 460 million years, and they have always managed to bounce back and recolonize habitats lost to severe climate change. The problem is that we won't live long enough to see them do it again.

Killer heat waves

Marine heat waves kill corals by messing with the photosynthetic machinery of symbiotic microalgae that live in the corals’ tissues. When the temperature of water goes up too much, the microalgae start producing reactive oxygen species instead of nutritious sugars. The reactive oxygen is toxic to corals, which respond by expelling the microalgae. This solves the toxicity problem, but it also starves the corals and causes them to bleach (the algae are the source of their yellowish color).

The 2023 marine heat wave was not the first to hit the Florida Reef—it was the ninth on record. “Those eight previous heat waves also had major negative effects on coral reefs, causing widespread mortality,” Cunning told Ars. “But the 2023 heat wave blew all other heat waves out of the water. It was 2.2 to 4 times greater in magnitude than anything that came before it.”

Cunning's team monitored two Acropora coral species: the staghorn and elkhorn. “They are both branching corals,” Cunning explained. “The staghorn has pointy branches that form dense thickets, whereas elkhorn produces arm-like branches that reach up and grow toward the surface, producing highly complex three-dimensionality, like a canopy in the forest.”

He and his colleagues chose those two species because they essentially built the Florida Reef. They also grow the fastest among all Florida Reef corals, which means they are essential for its ability to recover from damage. “Acropora corals were the primary reef builders for the last ten thousand years,” Cunning said. Unfortunately, they also showed the highest levels of mortality due to heat waves.

Coral apocalypse

Cunning's team found that the mortality rate among Acropora corals reached 100 percent in the Dry Tortugas National Park, at the southernmost end of the Florida Reef. Moving north through the Lower Keys, Middle Keys, and most of the Upper Keys, mortality stayed between 98 and 100 percent.

“Once you start moving a little bit further north, there’s the Biscayne National Park, where mortality rates were at 90 percent,” Cunning said. “It wasn’t until the furthest northern extent of the reef in Miami and Broward counties where mortality dropped to just 38 percent thanks to cooler temperatures that occurred there.”

Still, the mortality rate was exceptionally high across most Acropora colonies in the Florida Reef. “What we're facing is a functional extinction,” Cunning said.

But corals have been around for about 460 million years, and they have survived multiple mass extinction events, including the one that wiped out the dinosaurs. As vulnerable as they appear, corals seemingly have some get-out-of-death card they always pull when things turn really bad for them. This card, most likely, is buried deep in their genome.

Ancestral strength

“There have been studies looking into the evolutionary history of corals, but the difference between those and our work lies in technology,” said Claudia Francesca Vaga, a marine biologist at the Smithsonian Institution.

Her team looked at ultraconserved elements, stretches of DNA that are nearly identical across even distantly related species. These elements were used to build the most extensive phylogenetic tree of corals to date. Based on the genomic data and fossil evidence, Vaga's team analyzed how 274 stony coral species are related to one another to retrace their common ancestor and reconstruct how they evolved from it.

“We managed to confirm that the first common ancestor of stony corals was most likely solitary—it didn’t live in colonies, and it didn’t have symbionts,” Vaga said.

The very first coral most likely did not rely on algae to produce its nutrients, which means it was immune to bleaching. It was also not attached to a substrate, so it could move from one habitat to another. The first corals had another advantage: they were not particularly picky. They could live just as well in shallow waters as in the deep sea, since they didn't depend on photosynthetic symbionts for their nutrients.

Descending from these incredibly resilient ancestors, corals started to specialize. “We learned that symbiosis and coloniality can be acquired independently by stony coral lineages and that it happened multiple times,” Vaga said.

Based on her team’s research, past mass extinction events usually wiped out 90 percent of the species living in shallow waters—the ones that were colonial and reliant on symbionts. “But each such extinction triggered a process of retaking the shallows by the more resilient deep-sea corals, which in time evolved symbiosis and coloniality again,” Vaga said.

Thanks to corals’ deep-sea cousins, even the most extreme environmental changes—global warming or sudden, severe variations in the oceans’ acidity or oxygen levels—could not kill them for good. Each mass extinction event they’ve been through just reverted them to factory settings and made them start over from scratch.

The only catch here is time. “We’re talking about four to five million years before coral populations recover,” Vaga said.

Long way back

According to Cunning, the consequence of Acropora corals’ extinction in the Florida Reef is a lower overall reef-building rate, which will lead to reduced biodiversity in the reef’s ecosystem. “There are going to be cascading effects, and humans will be impacted as well. Reefs protect our coastlines by buffering over 90 percent of wave energy,” Cunning said.

In Florida, where coastlines are heavily urbanized, this may translate into hundreds of millions of dollars per year in damages.

But Cunning said we still have means at our disposal to save Acropora corals. “We’re not going to give up on them,” he said.

One option for improving the resilience of corals could be to crossbreed them with species from outside of the Florida Reef, ideally ones that live in warmer places and are better adapted to heat. “The first tests of this approach are underway right now in Florida; elkhorn corals were crossbred between Florida parents and Honduran parents,” Cunning said. He hopes this will help produce a new generation of corals that has a better shot at surviving the next heat wave.

Other interventions include manipulating corals’ algal symbionts. “There are many different species of algae with different levels of heat tolerance,” Cunning said. To him, a possible way forward would be to pair the Acropora corals with more heat-tolerant symbionts. “This should alter the bleaching threshold in these corals,” he explained.

Still, even interventions like these will take a very long time to make a difference. “But if four or five million years is the benchmark to beat, then yeah, it’s hopefully going to happen faster than that,” Cunning said.

The upside is that corals will likely pull off their de-extinction trick once again, even if we do absolutely nothing to help them. “In a few million years, they will redevelop coloniality, redevelop symbiosis, and rebuild something similar to the coral reefs we have today,” Vaga said. “This is good news for them. Not necessarily for us.”

Science, 2025.  DOI: 10.1126/science.adx7825

Nature, 2025.  DOI: 10.1038/s41586-025-09615-6


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


Runaway black hole mergers may have built supermassive black holes

The researchers used cosmological simulations to recreate the first 700 million years of cosmic history, focusing on the formation of a single dwarf galaxy. In their virtual galaxy, waves of stars were born in short, explosive bursts as cold gas clouds collapsed inside a dark matter halo. Instead of a single starburst episode followed by a steady drizzle of star formation as Garcia expected, there were two major rounds of stellar birth. Whole swarms of stars flared to life like Christmas tree lights.

“The early Universe was an incredibly crowded place,” Garcia said. “Gas clouds were denser, stars formed faster, and in those environments, it’s natural for gravity to gather stars into these tightly bound systems.”

Those clusters started out scattered around the galaxy but fell in toward the center like water swirling down a drain. Once there, they merged to create one megacluster, called a nuclear star cluster (so named because it lies at the nucleus of the galaxy). The young galactic heart shone with the light of a million suns and may have set the stage for a supermassive black hole to form.

A simulation of the formation of the super-dense star clusters.

A seemingly simple tweak was needed to make the simulation more precise than previous ones. “Most simulations simplify things to make calculations more practical, but then you sacrifice realism,” Garcia said. “We used an improved model that allowed star formation to vary depending on local conditions rather than just go at a constant rate like with previous models.”

Using the University of Maryland’s supercomputing facility Zaratan, Garcia accomplished in six months what would have taken 12 years on a MacBook.

Some clouds converted as much as 80 percent of their gas into stars—a ferocious rate compared to the 2 percent typically seen in nearby galaxies today. The clouds sparkled to life, becoming clusters of newborn stars held together by their mutual gravity and lighting a new pathway for supermassive black holes to form extremely early in the Universe.

Chicken or egg?

Most galaxies, including our own, are anchored by a nuclear star cluster nestled around a supermassive black hole. But the connection between the two has been a bit murky—did the monster black hole form and then draw stars close, or did the cluster itself give rise to the black hole?


Here’s how orbital dynamics wizardry helped save NASA’s next Mars mission


Blue Origin is counting down to the launch of its second New Glenn rocket on Sunday.

The New Glenn rocket rolls to Launch Complex-36 in preparation for liftoff this weekend. Credit: Blue Origin

CAPE CANAVERAL, Florida – The field of astrodynamics isn't a magical discipline, but sometimes it seems trajectory analysts can pull a solution out of a hat.

That’s what it took to save NASA’s ESCAPADE mission from a lengthy delay, and possible cancellation, after its rocket wasn’t ready to send it toward Mars during its appointed launch window last year. ESCAPADE, short for Escape and Plasma Acceleration and Dynamics Explorers, consists of two identical spacecraft setting off for the red planet as soon as Sunday with a launch aboard Blue Origin’s massive New Glenn rocket.

“ESCAPADE is pursuing a very unusual trajectory in getting to Mars,” said Rob Lillis, the mission’s principal investigator from the University of California, Berkeley. “We’re launching outside the typical Hohmann transfer windows, which occur every 25 or 26 months. We are using a very flexible mission design approach where we go into a loiter orbit around Earth in order to sort of wait until Earth and Mars are lined up correctly in November of next year to go to Mars.”

This wasn’t the original plan. When it was first designed, ESCAPADE was supposed to take a direct course from Earth to Mars, a transit that typically takes six to nine months. But ESCAPADE will now depart the Earth when Mars is more than 220 million miles away, on the opposite side of the Solar System.

The payload fairing of Blue Origin’s New Glenn rocket, containing NASA’s two Mars-bound science probes. Credit: Blue Origin

The most recent Mars launch window was last year, and the next one doesn’t come until the end of 2026. The planets are not currently in alignment, and the proverbial stars didn’t align to get the ESCAPADE satellites and their New Glenn rocket to the launch pad until this weekend.

This is fine

But there are several reasons this is perfectly OK to NASA. The New Glenn rocket is overkill for this mission. The two-stage launcher could send many tons of cargo to Mars, but NASA is only asking it to dispatch about a ton of payload, comprising a pair of identical science probes designed to study how the planet’s upper atmosphere interacts with the solar wind.

But NASA got a good deal from Blue Origin. The space agency is paying Jeff Bezos' space company about $20 million for the launch, less than it would pay for a dedicated launch on any other rocket capable of sending the ESCAPADE mission to Mars. In exchange, NASA is accepting a greater-than-usual chance of a launch failure. This is, after all, just the second flight of the 321-foot-tall (98-meter) New Glenn rocket, which hasn't yet been certified by NASA or the US Space Force.

The ESCAPADE mission itself was developed with a modest budget, at least by the standards of interplanetary exploration. The mission's total cost amounts to less than $80 million, an order of magnitude lower than all of NASA's recent Mars missions. NASA officials would not trust the second flight of the New Glenn rocket to launch a billion-dollar spacecraft, but the risk calculation changes as costs go down.

NASA knew all of this in 2023 when it signed a launch contract with Blue Origin for the ESCAPADE mission. What officials didn’t know was that the New Glenn rocket wouldn’t be ready to fly when ESCAPADE needed to launch in late 2024. It turned out Blue Origin didn’t launch the first New Glenn test flight until January of this year. It was a success. It took another 10 months for engineers to get the second New Glenn vehicle to the launch pad.

The twin ESCAPADE spacecraft undergoing final preparations for launch. Each spacecraft is about a half-ton fully fueled. Credit: NASA/Kim Shiflett

Aiming high

That's where the rocket sits this weekend at Cape Canaveral Space Force Station, Florida. If all goes according to plan, New Glenn will take off Sunday afternoon during an 88-minute launch window opening at 2:45 pm EST (19:45 UTC). There is a 65 percent chance of favorable weather, according to Blue Origin.

Blue Origin’s launch team, led by launch director Megan Lewis, will oversee the countdown Sunday. The rocket will be filled with super-cold liquid methane and liquid oxygen propellants beginning about four-and-a-half hours prior to liftoff. After some final technical and weather checks, the terminal countdown sequence will commence at T-minus 4 minutes, culminating in ignition of the rocket’s seven BE-4 main engines at T-minus 5.6 seconds.

The rocket's flight computer will assess the health of each of the powerful engines, which together generate more than 3.8 million pounds of thrust. If all looks good, the hold-down restraints will release, allowing the New Glenn rocket to begin its ascent from Florida's Space Coast.

Heading east, the rocket will surpass the speed of sound in a little over a minute. After soaring through the stratosphere, New Glenn will shut down its seven booster engines and shed its first stage a little more than 3 minutes into the flight. Twin BE-3U engines, burning liquid hydrogen, will ignite to finish the job of sending the ESCAPADE satellites toward deep space. The rocket’s trajectory will send the satellites toward a gravitationally-stable location beyond the Moon, called the L2 Lagrange point, where it will swing into a loosely-bound loiter orbit to wait for the right time to head for Mars.

Meanwhile, the New Glenn booster, itself measuring nearly 20 stories tall, will begin maneuvers to head toward Blue Origin’s recovery ship floating a few hundred miles downrange in the Atlantic Ocean. The final part of the descent will include a landing burn using three of the BE-4 engines, then downshifting to a single engine to control the booster’s touchdown on the landing platform, dubbed “Jacklyn” in honor of Bezos’ late mother.

The launch timeline for New Glenn’s second mission. Credit: Blue Origin

New Glenn’s inaugural launch at the start of this year was a success, but the booster’s descent did not go well. The rocket was unable to restart its engines, and it crashed into the sea.

“We've incorporated a number of changes to our propellant management system, some minor hardware changes as well, to increase our likelihood of landing that booster on this mission,” said Laura Maginnis, Blue Origin's vice president of New Glenn mission management. “That was the primary schedule driver that kind of took us from January to where we are today.”

Blue Origin officials are hopeful they can land the booster this time. The company’s optimism is enough for officials to have penciled in a reflight of this particular booster on the very next New Glenn launch, slated for the early months of next year. That launch is due to send Blue Origin’s first Blue Moon cargo lander to the Moon.

“Our No. 1 objective is to deliver ESCAPADE safely and successfully on its way to L2, and then eventually on to Mars,” Maginnis said in a press conference Saturday. “We also are planning and wanting to land our booster. If we don’t land the booster, that’s OK. We have several more vehicles in production. We’re excited to see how the mission plays out tomorrow.”

Tracing a kidney bean

ESCAPADE’s path through space, relative to the Earth, has the peculiar shape of a kidney bean. In the world of astrodynamics, this is called a staging or libration orbit. It’s a way to keep the spacecraft on a stable trajectory to wait for the opportunity to go to Mars late next year.

“ESCAPADE has identified that this is the way that we want to fly, so we launch from Earth onto this kidney bean-shaped orbit,” said Jeff Parker, a mission designer from the Colorado-based company Advanced Space. “So, we can launch on virtually any day. What happens is that kidney bean just grows and shrinks based on how much time you need to spend in that orbit. So, we traverse that kidney bean and at the very end there's a final little loop-the-loop that brings us down to Earth.”

That’s when the two ESCAPADE spacecraft, known as Blue and Gold, will pass a few hundred miles above our planet. At the right moment, on November 7 and 9 of next year, the satellites will fire their engines to set off for Mars.

An illustration of ESCAPADE’s trajectory to wait for the opportunity to go to Mars. Credit: UC-Berkeley

There are some tradeoffs with this unique staging orbit. It is riskier than the original plan of sending ESCAPADE straight to Mars. The satellites will be exposed to more radiation, and will consume more of their fuel just to get to the red planet, eating into reserves originally set aside for science observations.

The satellites were built by Rocket Lab, which designed them with extra propulsion capacity in order to accommodate launches on a variety of different rockets. In the end, NASA “judged that the risk for the mission was acceptable, but it certainly is higher risk,” said Richard French, Rocket Lab’s vice president of business development and strategy.

The upside of the tradeoff is it will demonstrate an “exciting and flexible way to get to Mars,” Lillis said. “In the future, if we’d like to send hundreds of spacecraft to Mars at once, it will be difficult to do that from just the launch pads we have on Earth within that month [of the interplanetary launch window]. We could potentially queue up spacecraft using the approach that ESCAPADE is pioneering.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.


James Watson, who helped unravel DNA’s double-helix, has died

James Dewey Watson, who helped reveal DNA’s double-helix structure, kicked off the Human Genome Project, and became infamous for his racist, sexist, and otherwise offensive statements, has died. He was 97.

His death was confirmed to The New York Times by his son Duncan, who said Watson died on Thursday in a hospice in East Northport, New York, on Long Island. He had previously been hospitalized with an infection. Cold Spring Harbor Laboratory also confirmed his passing.

Watson was born in Chicago in 1928 and attained scientific fame in 1953 at 25 years old for solving the molecular structure of DNA—the genetic blueprints for life—with his colleague Francis Crick at England's Cavendish laboratory. Their discovery relied heavily on the work of chemist and crystallographer Rosalind Franklin at King's College in London, whose X-ray images of DNA provided critical clues to the molecule's twisted-ladderlike architecture. One image in particular from Franklin's lab, Photo 51, made Watson and Crick's discovery possible. But she was not fully credited for her contribution. The image was given to Watson and Crick without Franklin's knowledge or consent by Maurice Wilkins, a biophysicist and colleague of Franklin.

Watson, Crick, and Wilkins were awarded the Nobel Prize in Physiology or Medicine in 1962 for the discovery of DNA’s structure. By that time, Franklin had died (she died in 1958 at the age of 37 from ovarian cancer), and Nobels are not given posthumously. But Watson and Crick’s treatment of Franklin and her research has generated lasting scorn within the scientific community. Throughout his career and in his memoir, Watson disparaged Franklin’s intelligence and appearance.


The government shutdown is starting to have cosmic consequences

The federal government shutdown, now in its 38th day, prompted the Federal Aviation Administration to issue a temporary emergency order Thursday prohibiting commercial rocket launches from occurring during “peak hours” of air traffic.

The FAA also directed commercial airlines to reduce domestic flights from 40 “high impact airports” across the country in a phased approach beginning Friday. The agency said the order from the FAA’s administrator, Bryan Bedford, is aimed at addressing “safety risks and delays presented by air traffic controller staffing constraints caused by the continued lapse in appropriations.”

The government considers air traffic controllers essential workers, so they remain on the job without pay until Congress passes a federal budget and President Donald Trump signs it into law. The shutdown's effects, which hit federal workers hardest at first, are now rippling across the broader economy.

Sharing the airspace

Vehicles traveling to and from space share the skies with aircraft, requiring close coordination with air traffic controllers to clear airspace for rocket launches and reentries. The FAA said its order restricting commercial air traffic, launches, and reentries is intended to “ensure the safety of aircraft and the efficiency of the [National Airspace System].”

In a statement explaining the order, the FAA said the air traffic control system is “stressed” due to the shutdown.

“With continued delays and unpredictable staffing shortages, which are driving fatigue, risk is further increasing, and the FAA is concerned with the system’s ability to maintain the current volume of operations,” the regulator said. “Accordingly, the FAA has determined additional mitigation is necessary.”

Beginning Monday, the FAA said commercial space launches will only be permitted between 10 pm and 6 am local time, when the national airspace is most calm. The order restricts commercial reentries to the same overnight timeframe. The FAA licenses all commercial launches and reentries.


Next-generation black hole imaging may help us understand gravity better

Right now, we probably can't detect these small differences. However, that may change: a next-generation version of the Event Horizon Telescope is being considered, along with a space-based telescope that would operate on similar principles. So the team (four researchers based in Shanghai and at CERN) decided to repeat an analysis they performed shortly before the Event Horizon Telescope went operational, considering whether the next-gen hardware might be able to pick up features of the environment around the black hole that could discriminate among different theorized versions of gravity.

Theorists have been busy, and there are a lot of potential replacements for general relativity out there. So, rather than working their way through the list, they used a model of gravity (the parametric Konoplya–Rezzolla–Zhidenko metric) that isn’t specific to any given hypothesis. Instead, it allows some of its parameters to be changed, thus allowing the team to vary the behavior of gravity within some limits. To get a sense of the sort of differences that might be present, the researchers swapped two different parameters between zero and one, giving them four different options. Those results were compared to the Kerr metric, which is the standard general relativity version of the event horizon.

Small but clear differences

Using those five versions of gravity, the researchers modeled the three-dimensional environment near the event horizon using hydrodynamic simulations, including infalling matter, the magnetic fields it produces, and the jets of matter that those magnetic fields power.

The results resemble the sorts of images that the Event Horizon Telescope produced. These include a bright ring with substantial asymmetry, where one side is significantly brighter due to the rotation of the black hole. And while the differences among the variations of gravity are subtle, they're there. One extreme version produced the smallest but brightest ring; another had a reduced contrast between the bright and dim sides of the ring. There were also differences in the width of the jets produced by these models.
