Physics

RIP Peter Higgs, who laid foundation for the Higgs boson in the 1960s

A particle physics hero —

Higgs shared the 2013 Nobel Prize in Physics with François Englert.

A visibly emotional Peter Higgs was present when CERN announced the Higgs boson discovery in July 2012.

University of Edinburgh

Peter Higgs, the shy, somewhat reclusive physicist who won a Nobel Prize for his theoretical work on how the Higgs boson gives elementary particles their mass, has died at the age of 94. According to a statement from the University of Edinburgh, the physicist passed “peacefully at home on Monday 8 April following a short illness.”

“Besides his outstanding contributions to particle physics, Peter was a very special person, a man of rare modesty, a great teacher and someone who explained physics in a very simple and profound way,” Fabiola Gianotti, director general at CERN and former leader of one of the experiments that helped discover the Higgs particle in 2012, told The Guardian. “An important piece of CERN’s history and accomplishments is linked to him. I am very saddened, and I will miss him sorely.”

The Higgs boson is a manifestation of the Higgs field, an invisible entity that pervades the Universe. Interactions between the Higgs field and particles help provide particles with mass, with particles that interact more strongly having larger masses. The Standard Model of Particle Physics describes the fundamental particles that make up all matter, like quarks and electrons, as well as the particles that mediate their interactions through forces like electromagnetism and the weak force. Back in the 1960s, theorists extended the model to incorporate what has become known as the Higgs mechanism, which provides many of the particles with mass. One consequence of the Standard Model’s version of the Higgs mechanism is that there should be a particle, called a boson, associated with the Higgs field.

Despite its central role in the function of the Universe, the road to predicting the existence of the Higgs boson was bumpy, as was the process of discovering it. As previously reported, the idea of the Higgs boson was a consequence of studies on the weak force, which controls the decay of radioactive elements. The weak force only operates at very short distances, which suggests that the particles that mediate it (the W and Z bosons) are likely to be massive. While it was possible to use existing models of physics to explain some of their properties, these predictions had an awkward feature: just like another force-carrying particle, the photon, the resulting W and Z bosons were massless.

Schematic of the Standard Model of particle physics.

Over time, theoreticians managed to craft models that included massive W and Z bosons, but they invariably came with a hitch: a massless partner, which would imply a longer-range force. In 1964, however, a series of papers was published in rapid succession that described a way to get rid of this problematic particle. If a certain symmetry in the models was broken, the massless partner would go away, leaving only a massive one.
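
A minimal sketch of that trick, in the Abelian Higgs model—a stripped-down textbook stand-in for the full electroweak case, not a reproduction of the 1964 papers:

```latex
% Abelian Higgs model: a complex scalar phi coupled to a gauge field A_mu.
\begin{align}
  \mathcal{L} &= |D_\mu \phi|^2 - V(\phi) - \tfrac{1}{4}F_{\mu\nu}F^{\mu\nu},
  \qquad D_\mu = \partial_\mu - i g A_\mu, \\
  V(\phi) &= -\mu^2\, \phi^\dagger\phi + \lambda\, (\phi^\dagger\phi)^2 .
\end{align}
% The potential is minimized not at phi = 0 but at |phi| = v/sqrt(2), with
% v = mu/sqrt(lambda), so the vacuum "breaks" the symmetry. Expanding
% phi = (v + h)/sqrt(2) in unitary gauge, |D_mu phi|^2 contains the term
% (1/2) g^2 v^2 A_mu A^mu: the gauge boson acquires mass m_A = g v, the
% would-be massless Goldstone mode is absorbed as its longitudinal
% polarization, and the leftover excitation h is the massive Higgs boson,
% with m_h = sqrt(2 lambda) v.
```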

The first of these papers, by François Englert and Robert Brout, proposed the new model in terms of quantum field theory; the second, by Higgs (then 35), noted that a single quantum of the field would be detectable as a particle. A third paper, by Gerald Guralnik, Carl Richard Hagen, and Tom Kibble, provided an independent validation of the general approach, as did a completely independent derivation by students in the Soviet Union.

At that time, “There seemed to be excitement and concern about quantum field theory (the underlying structure of particle physics) back then, with some people beginning to abandon it,” David Kaplan, a physicist at Johns Hopkins University, told Ars. “There were new particles being regularly produced at accelerator experiments without any real theoretical structure to explain them. Spin-1 particles could be written down comfortably (the photon is spin-1) as long as they didn’t have a mass, but the massive versions were confusing to people at the time. A bunch of people, including Higgs, found this quantum field theory trick to give spin-1 particles a mass in a consistent way. These little tricks can turn out to be very useful, but also give the landscape of what is possible.”

“It wasn’t clear at the time how it would be applied in particle physics.”

Ironically, Higgs’ seminal paper was rejected by the European journal Physics Letters. He then added a crucial couple of paragraphs noting that his model also predicted the existence of what we now know as the Higgs boson. He submitted the revised paper to Physical Review Letters in the US, where it was accepted. He examined the properties of the boson in more detail in a 1966 follow-up paper.

Gravitational waves reveal “mystery object” merging with a neutron star

mind the gap —

The so-called “mass gap” might be less empty than physicists previously thought.

Artistic rendition of a black hole merging with a neutron star. LIGO/VIRGO/KAGRA detected a merger involving a neutron star and what might be a very light black hole falling within the “mass gap” range.

LIGO-India/ Soheb Mandhai

The LIGO/VIRGO/KAGRA collaboration searches the universe for gravitational waves produced by the mergers of black holes and neutron stars. It has now announced the detection of a signal indicating a merger between two compact objects, one of which has an unusual intermediate mass—heavier than a neutron star and lighter than a black hole. The collaboration provided specifics of their analysis of the merger and the “mystery object” in a draft manuscript posted to the physics arXiv, suggesting that the object might be a very low-mass black hole.

LIGO detects gravitational waves via laser interferometry, using high-powered lasers to measure tiny changes in the distance between two objects positioned kilometers apart. LIGO has detectors in Hanford, Washington state, and in Livingston, Louisiana. A third detector in Italy, Advanced VIRGO, came online in 2016. In Japan, KAGRA is the first gravitational-wave detector in Asia and the first to be built underground. Construction began on LIGO-India in 2021, and physicists expect it will turn on sometime after 2025.
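
To get a sense of how tiny those changes in distance are, here’s a back-of-the-envelope estimate in Python (the strain value is a typical order of magnitude for detected events, not a figure from any particular detection):

```python
# How far does a LIGO arm stretch during a typical detection?
# A gravitational wave of strain h changes an arm of length L by dL = h * L.
arm_length_m = 4_000       # each LIGO arm is 4 km long
strain = 1e-21             # typical strain amplitude of a detected event

delta_L = strain * arm_length_m
proton_radius_m = 8.4e-16  # approximate charge radius of a proton

print(f"Arm length change: {delta_L:.1e} m")
print(f"That is ~1/{proton_radius_m / delta_L:.0f} of a proton's radius")
```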

To date, the collaboration has detected dozens of merger events since its first Nobel Prize-winning discovery. Early detected mergers involved either two black holes or two neutron stars, but in 2021, LIGO/VIRGO/KAGRA confirmed the detection of two separate “mixed” mergers between black holes and neutron stars.

Most objects involved in the mergers detected by the collaboration fall into two groups: stellar-mass black holes (ranging from a few solar masses to tens of solar masses) and supermassive black holes, like the one in the middle of our Milky Way galaxy (ranging from hundreds of thousands to billions of solar masses). The former are the result of massive stars dying in a core-collapse supernova, while the latter’s formation process remains something of a mystery. The range between the heaviest known neutron star and the lightest known black hole is known among scientists as the “mass gap.”

There have been gravitational wave hints of compact objects falling within the mass gap before. For instance, as reported previously, in 2019, LIGO/VIRGO picked up a gravitational wave signal from a black hole merger dubbed GW190521 that produced the most energetic signal detected thus far, showing up in the data as more of a “bang” than the usual “chirp.” Even weirder, the two black holes that merged were locked in an elliptical (rather than circular) orbit, and their axes of spin were tipped far more than usual relative to those orbits. And the new black hole resulting from the merger had an intermediate mass of 142 solar masses—smack in the middle of the mass gap.

Masses in the stellar graveyard.

LIGO-Virgo-KAGRA / Aaron Geller / Northwestern

That same year, the collaboration detected another signal, GW190814, a compact binary merger involving a mystery object that also fell within the mass gap. With no corresponding electromagnetic signal to accompany the gravitational wave signal, astrophysicists were unable to determine whether that object was an unusually heavy neutron star or an especially light black hole. And now we have a new mystery object within the mass gap in a merger event dubbed GW230529.

“While previous evidence for mass-gap objects has been reported both in gravitational and electromagnetic waves, this system is especially exciting because it’s the first gravitational-wave detection of a mass-gap object paired with a neutron star,” said co-author Sylvia Biscoveanu of Northwestern University. “The observation of this system has important implications for both theories of binary evolution and electromagnetic counterparts to compact-object mergers.”

See where this discovery falls within the mass gap.

Shanika Galaudage / Observatoire de la Côte d’Azur

LIGO/VIRGO/KAGRA started its fourth observing run last spring and soon picked up GW230529’s signal. Scientists determined that one of the two merging objects had a mass between 1.2 and 2 times the mass of our sun—most likely a neutron star—while the other’s mass fell in the mass-gap range of 2.5 to 4.5 times the mass of our sun. As with GW190814, there were no accompanying bursts of electromagnetic radiation, so the team wasn’t able to conclusively identify the nature of the more massive mystery object, located some 650 million light-years from Earth, but they think it is probably a low-mass black hole. If so, the finding implies an increase in the expected rate of neutron star–black hole mergers with electromagnetic counterparts, per the authors.
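
The component masses are inferred by modeling the waveform; the quantity such a signal pins down most precisely is the “chirp mass,” a combination of the two masses that sets how quickly the signal sweeps upward in frequency. A minimal sketch (the input values are mid-range picks from the figures above, not the collaboration’s actual parameter estimation):

```python
def chirp_mass(m1: float, m2: float) -> float:
    """Chirp mass in the same units as the inputs (here, solar masses).

    M_c = (m1 * m2)**(3/5) / (m1 + m2)**(1/5) governs how fast a binary's
    gravitational-wave frequency sweeps upward, so it is measured far more
    precisely than the individual component masses.
    """
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Mid-range picks from the GW230529 estimates quoted above.
print(f"Chirp mass: {chirp_mass(1.4, 3.6):.2f} solar masses")  # ~1.9
```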

“Before we started observing the universe in gravitational waves, the properties of compact objects like black holes and neutron stars were indirectly inferred from electromagnetic observations of systems in our Milky Way,” said co-author Michael Zevin, an astrophysicist at the Adler Planetarium. “The idea of a gap between neutron-star and black-hole masses, an idea that has been around for a quarter of a century, was driven by such electromagnetic observations. GW230529 is an exciting discovery because it hints at this ‘mass gap’ being less empty than astronomers previously thought, which has implications for the supernova explosions that form compact objects and for the potential light shows that ensue when a black hole rips apart a neutron star.”

arXiv, 2024. DOI: 10.48550/arXiv.2404.04248  (About DOIs).

Astronomers have solved the mystery of why this black hole has the hiccups

David vs. Goliath —

Blame it on a smaller orbiting black hole repeatedly punching through the accretion disk.

Scientists have found a large black hole that “hiccups,” giving off plumes of gas.

Jose-Luis Olivares, MIT

In December 2020, astronomers spotted an unusual burst of light in a galaxy roughly 848 million light-years away—a region with a supermassive black hole at the center that had been largely quiet until then. The energy of the burst mysteriously dipped about every 8.5 days before the black hole settled back down, akin to having a case of celestial hiccups.

Now scientists think they’ve figured out the reason for this unusual behavior. The supermassive black hole is orbited by a smaller black hole that periodically punches through the larger object’s accretion disk during its travels, releasing a plume of gas. This suggests that black hole accretion disks might not be as uniform as astronomers thought, according to a new paper published in the journal Science Advances.

Co-author Dheeraj “DJ” Pasham of MIT’s Kavli Institute for Astrophysics and Space Research noticed the community alert that went out after the All Sky Automated Survey for SuperNovae (ASAS-SN) detected the flare, dubbed ASASSN-20qc. He was intrigued, and he still had some allotted time on NICER (the Neutron star Interior Composition Explorer), an X-ray telescope on board the International Space Station. He directed the telescope to the galaxy of interest and gathered about four months of data, after which the flare faded.

Pasham noticed a strange pattern as he analyzed that four months’ worth of data. The bursts of energy dipped every 8.5 days in the X-ray regime, much like a star’s brightness can briefly dim whenever an orbiting planet crosses in front. Pasham was puzzled as to what kind of object could cause a similar effect in an entire galaxy. That’s when he stumbled across a theoretical paper by Czech physicists suggesting that it was possible for a supermassive black hole at the center of a galaxy to have an orbiting smaller black hole; they predicted that, under the right circumstances, this could produce just such a periodic effect as Pasham had observed in his X-ray data.
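
Finding a period like that in an unevenly sampled light curve is commonly done with a Lomb-Scargle periodogram. Here’s a minimal sketch on synthetic data (purely illustrative; this is not Pasham’s actual pipeline):

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(42)

# Synthetic stand-in for ~4 months of unevenly sampled X-ray data,
# with a modulation every 8.5 days plus noise.
t = np.sort(rng.uniform(0, 120, 400))                # observation times (days)
flux = 1 + 0.3 * np.sin(2 * np.pi * t / 8.5) + 0.1 * rng.normal(size=t.size)

# Lomb-Scargle handles irregular sampling, unlike a plain FFT.
frequency, power = LombScargle(t, flux).autopower()
best_period = 1 / frequency[np.argmax(power)]
print(f"Strongest period: {best_period:.1f} days")   # ~8.5
```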

Computer simulation of an intermediate-mass black hole orbiting a supermassive black hole and driving periodic gas plumes that can explain the observations.

Petra Sukova, Astronomical Institute of the CAS

“I was super excited about this theory and immediately emailed to say, ‘I think we’re observing exactly what your theory predicted,’” Pasham said. They joined forces to run simulations incorporating the data from NICER, and the results supported the theory. The black hole at the galaxy’s center is estimated to have a mass of 50 million suns. Since there was no burst before December 2020, the team thinks there was, at most, just a faint accretion disk around that black hole, plus a smaller orbiting black hole of between 100 and 10,000 solar masses that eluded detection because of that.

So what changed? Pasham et al. suggest that a nearby star got caught in the gravitational pull of the supermassive black hole in December 2020 and was ripped to shreds, a cataclysm known as a tidal disruption event (TDE). As previously reported, in a TDE, part of the shredded star’s original mass is ejected violently outward. This, in turn, can form an accretion disk around the black hole that emits powerful X-rays and visible light; some TDEs also launch jets of material. Such emissions are one way astronomers can indirectly infer the presence of a black hole, and they typically occur soon after the TDE.

That seems to be what happened in the current system to cause the sudden flare in the primary supermassive black hole. Now it had a much brighter accretion disk, so when its smaller black hole partner passed through the disk, larger-than-usual gas plumes were emitted. As luck would have it, those plumes just happened to be pointed in the direction of an observing telescope.

Astronomers have known about so-called “David and Goliath” binary black hole systems for a while, but “this is a different beast,” said Pasham. “It doesn’t fit anything that we know about these systems. We’re seeing evidence of objects going in and through the disk, at different angles, which challenges the traditional picture of a simple gaseous disk around black holes. We think there is a huge population of these systems out there.”

Science Advances, 2024. DOI: 10.1126/sciadv.adj8898  (About DOIs).

Quantum computing progress: Higher temps, better error correction

Conceptual graphic of symbols representing quantum states floating above a stylized computer chip.

There’s a strong consensus that tackling most useful problems with a quantum computer will require that the computer be capable of error correction. There is absolutely no consensus, however, about what technology will allow us to get there. A large number of companies, including major players like Microsoft, Intel, Amazon, and IBM, have committed to different technologies, while a collection of startups are exploring an even wider range of potential solutions.

We probably won’t have a clearer picture of what’s likely to work for a few years. But there’s going to be lots of interesting research and development work between now and then, some of which may ultimately represent key milestones in the development of quantum computing. To give you a sense of that work, we’re going to look at three papers that were published within the last couple of weeks, each of which tackles a different aspect of quantum computing technology.

Hot stuff

Error correction will require connecting multiple hardware qubits to act as a single unit termed a logical qubit. This spreads a single bit of quantum information across multiple hardware qubits, making it more robust. Additional qubits are used to monitor the behavior of the ones holding the data and perform corrections as needed. Some error correction schemes require over a hundred hardware qubits for each logical qubit, meaning we’d need tens of thousands of hardware qubits before we could do anything practical.
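
The arithmetic behind those numbers is straightforward. A rough sketch, using the surface code’s scaling as one common example (actual overheads depend on the scheme and the target error rate):

```python
# Surface-code overhead: a distance-d logical qubit uses d*d data qubits
# plus d*d - 1 measurement qubits, i.e. 2*d*d - 1 hardware qubits in total.
def physical_per_logical(d: int) -> int:
    return 2 * d * d - 1

code_distance = 9        # a plausible distance for low logical error rates
logical_needed = 100     # registers for a small but useful algorithm

per_logical = physical_per_logical(code_distance)
total = per_logical * logical_needed
print(f"{per_logical} hardware qubits per logical qubit")
print(f"{total:,} hardware qubits for {logical_needed} logical qubits")
```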

A number of companies have looked at that problem and decided we already know how to create hardware on that scale—just look at any silicon chip. So, if we could etch useful qubits through the same processes we use to make current processors, then scaling wouldn’t be an issue. Typically, this has meant fabricating quantum dots on the surface of silicon chips and using these to store single electrons that can hold a qubit in their spin. The rest of the chip holds more traditional circuitry that performs the initialization, control, and readout of the qubit.

This creates a notable problem. Like many other qubit technologies, quantum dots need to be kept below one Kelvin in order to keep the environment from interfering with the qubit. And, as anyone who’s ever owned an x86-based laptop knows, all the other circuitry on the silicon generates heat. So, there’s the very real prospect that trying to control the qubits will raise the temperature to the point that the qubits can’t hold onto their state.

That might not be as big a problem as we thought, according to some work published in Wednesday’s Nature. A large international team that includes people from the startup Diraq has shown that a silicon quantum dot processor can work well at the relatively toasty temperature of 1 Kelvin, up from the milliKelvin temperatures at which these processors normally operate.

The work was done on a two-qubit prototype made with materials that were specifically chosen to improve noise tolerance; the experimental procedure was also optimized to limit errors. The team then performed normal operations starting at 0.1 K, and gradually ramped up the temperatures to 1.5 K, checking performance as they did so. They found that a major source of errors, state preparation and measurement (SPAM), didn’t change dramatically in this temperature range: “SPAM around 1 K is comparable to that at millikelvin temperatures and remains workable at least until 1.4 K.”

The error rates they did see depended on the state they were preparing. One particular state (both spin-up) had a fidelity of over 99 percent, while the rest were less constrained, at somewhere above 95 percent. States had a lifetime of over a millisecond, which qualifies as long-lived in the quantum world.

All of which is pretty good, and suggests that the chips can tolerate reasonable operating temperatures, meaning on-chip control circuitry can be used without causing problems. The error rates of the hardware qubits are still well above those that would be needed for error correction to work. However, the researchers suggest that they’ve identified error processes that can potentially be compensated for. They expect that the ability to do industrial-scale manufacturing will ultimately lead to working hardware.

Event Horizon Telescope captures stunning new image of Milky Way’s black hole

A new image from the Event Horizon Telescope has revealed powerful magnetic fields spiraling from the edge of a supermassive black hole at the center of the Milky Way, Sagittarius A*.

EHT Collaboration

Physicists have been confident since the 1980s that there is a supermassive black hole at the center of the Milky Way galaxy, similar to those thought to be at the center of most spiral and elliptical galaxies. It’s since been dubbed Sagittarius A* (pronounced A-star), or Sgr A* for short. The Event Horizon Telescope (EHT) captured the first image of Sgr A* two years ago. Now the collaboration has revealed a new polarized image (above) showcasing the black hole’s swirling magnetic fields. The technical details appear in two new papers published in The Astrophysical Journal Letters. The new image is strikingly similar to another EHT image of a larger supermassive black hole, M87*, so this might be something that all such black holes share.

The only way to “see” a black hole is to image the shadow created by light as it bends in response to the object’s powerful gravitational field. As Ars Science Editor John Timmer reported in 2019, the EHT isn’t a telescope in the traditional sense. Instead, it’s a collection of telescopes scattered around the globe. The EHT is created by interferometry, which uses light in the microwave regime of the electromagnetic spectrum captured at different locations. These recordings are combined and processed to build an image with a resolution similar to that of a telescope as large as the separation between the most distant sites. Interferometry has been used at facilities like ALMA (the Atacama Large Millimeter/submillimeter Array) in northern Chile, where telescopes can be spread across 16 km of desert.
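
The payoff of an Earth-sized array follows from the diffraction limit, where resolution scales as wavelength over baseline. A quick estimate with round numbers:

```python
import math

# Diffraction-limited angular resolution: theta ~ wavelength / baseline.
wavelength_m = 1.3e-3   # EHT observes at ~1.3 mm
baseline_m = 1.27e7     # longest baselines approach Earth's diameter

theta_rad = wavelength_m / baseline_m
theta_uas = math.degrees(theta_rad) * 3600 * 1e6  # to microarcseconds

print(f"Resolution: ~{theta_uas:.0f} microarcseconds")  # ~20
```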

In theory, there’s no upper limit on the size of the array, but to determine which photons originated simultaneously at the source, you need very precise location and timing information on each of the sites. And you still have to gather sufficient photons to see anything at all. So atomic clocks were installed at many of the locations, and exact GPS measurements were built up over time. For the EHT, the large collecting area of ALMA—combined with choosing a wavelength in which supermassive black holes are very bright—ensured sufficient photons.

In 2019, the EHT announced the first direct image taken of a black hole at the center of an elliptical galaxy, Messier 87, located in the constellation of Virgo some 55 million light-years away. This image would have been impossible a mere generation ago, and it was made possible by technological breakthroughs, innovative new algorithms, and (of course) connecting several of the world’s best radio observatories. The image confirmed that the object at the center of M87* is indeed a black hole.

In 2021, the EHT collaboration released a new image of M87* showing what the black hole looks like in polarized light—a signature of the magnetic fields at the object’s edge—which yielded fresh insight into how black holes gobble up matter and emit powerful jets from their cores. A few months later, the EHT was back with images of the “dark heart” of a radio galaxy known as Centaurus A, enabling the collaboration to pinpoint the location of the supermassive black hole at the galaxy’s center.

Sgr A* is much smaller but also much closer than M87*. That made it a bit more challenging to capture an equally sharp image because Sgr A* changes on time scales of minutes and hours compared to days and weeks for M87*. Physicist Matt Strassler previously compared the feat to “taking a one-second exposure of a tree on a windy day. Things get blurred out, and it can be difficult to determine the true shape of what was captured in the image.”
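
The difference in those timescales follows from the black holes’ sizes: emission can’t vary coherently faster than light can cross the source. A rough sketch using commonly cited masses:

```python
# Light-crossing time across one Schwarzschild diameter, 2 * r_s / c,
# sets the fastest coherent variability of the emitting region.
G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30  # SI units

def crossing_time_s(mass_solar: float) -> float:
    r_s = 2 * G * (mass_solar * M_SUN) / c**2   # Schwarzschild radius (m)
    return 2 * r_s / c

print(f"Sgr A* (~4.3e6 Msun): {crossing_time_s(4.3e6) / 60:.1f} minutes")
print(f"M87*  (~6.5e9 Msun): {crossing_time_s(6.5e9) / 3600:.0f} hours")
```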

Report: Superconductivity researcher found to have committed misconduct

Definitely not super —

Details of what the University of Rochester investigation found are not available.

Rush Rhees Library at the University of Rochester.

We’ve been following the saga of Ranga Dias since he first burst onto the scene with reports of a high-pressure, room-temperature superconductor, published in Nature in 2020. Even as that paper was being retracted due to concerns about the validity of some of its data, Dias published a second paper claiming a similar breakthrough: a superconductor that works at high temperatures but somewhat lower pressures. Shortly afterward, that got retracted as well.

On Wednesday, the University of Rochester, where Dias is based, announced that it had concluded an investigation into Dias and found that he had committed research misconduct. (The outcome was first reported by The Wall Street Journal.)

The outcome is likely to mean the end of Dias’ career, as well as the company he founded to commercialize the supposed breakthroughs. But it’s unlikely we’ll ever see the full details of the investigation’s conclusions.

Questionable research

Dias’ lab was focused on high-pressure superconductivity. At extreme pressures, the orbitals where electrons hang out get distorted, which can alter the chemistry and electronic properties of materials. This can mean the formation of chemical compounds that don’t exist at normal pressures, along with distinct conductivity. In a number of cases, these changes enabled superconductivity at unusually high temperatures, although still well below the freezing point of water.

Dias, however, supposedly found a combination of chemicals that would boost the transition to superconductivity to near room temperature, although only at extreme pressures. While the results were plausible, the details regarding how some of the data was processed to produce one of the paper’s key graphs were lacking, and Dias didn’t provide a clear explanation. Nature eventually pulled the paper, and the University of Rochester initiated investigations (plural!) of his work.

Those investigations cleared Dias of misconduct, and he was quickly back with a report of another high-temperature superconductor, this one forming at less extreme pressures—somewhat surprisingly, published again by Nature. This time, things fell apart much more rapidly, with potential problems quickly becoming apparent, and many of the paper’s authors, not including Dias, called for its retraction.

The University of Rochester started yet another investigation, which is the one that has now concluded that Dias engaged in research misconduct.

The extent of this misconduct, however, might never be revealed. These internal university investigations are generally not made public, even if it might be in the public’s interest to know. The only recent exception is a case where a researcher accused of misconduct sued her university for defamation over the outcome of the investigation. The university submitted its investigation report as evidence, allowing it to become part of the public record.

Behind the scenes

That said, we have learned a fair bit about what has happened inside Dias’ lab, thanks to Nature News, a sister publication of the scientific journal that published both of Dias’ papers. It conducted a tour-de-force of investigative journalism, talking to Dias’ grad students and obtaining the peer review evaluations of Dias’ two papers.

The investigation showed that, for the first paper, Dias simply told his graduate students that the key data came from before he had set up his own lab, which explains why they weren’t aware of it. The students claimed that the ensuing investigations didn’t contact any of them, suggesting those inquiries were extremely limited in scope. By contrast, the students claim to have been more aware that the results presented in the second paper didn’t match up with experiments and, in at least one case, suggested Dias clearly misrepresented his lab’s work. (The paper claimed to have synthesized a chemical that the students say was simply purchased from a supplier.)

They were the ones who organized the effort to retract the paper and said that the final investigation actually sought their input.

Meanwhile, on the peer review side, the reporting does not leave Nature looking especially good. Both papers required several rounds of revision and review before being accepted, and even after all this work, most of the reviewers were ambivalent at best about whether the paper should be published. It was an editorial decision to go ahead despite that.

While things seem to have worked out in the end, the major institutions involved here—Nature and the University of Rochester—aren’t coming out of this unscathed. Neither seems to have taken early indications of misconduct as seriously as it should have. As for Dias, the reporting in the Nature News piece should be career-ending. And it’s worth considering that, in the absence of the reporter’s work, the research community would probably remain unaware of most of the details of Dias’ misconduct.

This stretchy electronic material hardens upon impact just like “oobleck”

a flexible alternative —

Researchers likened material’s structure to a big bowl of spaghetti and meatballs.

This flexible and conductive material has “adaptive durability,” meaning it gets stronger when hit.

Yue (Jessica) Wang

Scientists are keen to develop new materials for lightweight, flexible, and affordable wearable electronics so that, one day, dropping our smartphones won’t result in irreparable damage. One team at the University of California, Merced, has made conductive polymer films that actually toughen up in response to impact rather than breaking apart, much like mixing corn starch and water in appropriate amounts produces a slurry that is liquid when stirred slowly but hardens when you punch it (i.e., “oobleck”). They described their work in a talk at this week’s meeting of the American Chemical Society in New Orleans.

“Polymer-based electronics are very promising,” said Di Wu, a postdoc in materials science at UCM. “We want to make the polymer electronics lighter, cheaper, and smarter. [With our] system, [the polymers] can become tougher and stronger when you make a sudden movement, but… flexible when you just do your daily, routine movement. They are not constantly rigid or constantly flexible. They just respond to your body movement.”

As we’ve previously reported, oobleck is simple and easy to make. Mix one part water to two parts corn starch, add a dash of food coloring for fun, and you’ve got oobleck, which behaves as either a liquid or a solid, depending on how much stress is applied. Stir it slowly and steadily and it’s a liquid. Punch it hard and it turns more solid under your fist. It’s a classic example of a non-Newtonian fluid.

In a Newtonian fluid, the viscosity depends largely on temperature and pressure: Water will continue to flow regardless of other forces acting upon it, such as being stirred or mixed. In a non-Newtonian fluid, the viscosity changes in response to an applied strain or shearing force, thereby straddling the boundary between liquid and solid behavior. Stirring a cup of water produces a shearing force, and the water shears to move out of the way; the viscosity remains unchanged. But for non-Newtonian fluids like oobleck, the viscosity changes when a shearing force is applied.

Oobleck is a shear-thickening fluid: applying force increases the viscosity, which is why it turns solid under your fist. (The name derives from a 1949 Dr. Seuss children’s book, Bartholomew and the Oobleck.) Ketchup, by contrast, is a shear-thinning non-Newtonian fluid, which is one reason smacking the bottom of the bottle helps the ketchup come out faster: the applied force decreases the viscosity. Yogurt, gravy, mud, and pudding are other shear-thinning examples, as is non-drip paint, which brushes on easily but becomes more viscous once it’s on the wall. Last year, MIT scientists confirmed that the friction between particles is critical to oobleck’s liquid-to-solid transition, identifying a tipping point when the friction reaches a certain level and the viscosity abruptly increases.
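
A common first-pass way to quantify this behavior is the power-law (Ostwald-de Waele) model for the effective viscosity:

```latex
% Power-law fluid: shear stress tau versus shear rate gamma-dot,
% with consistency K and flow index n.
\begin{equation}
  \tau = K\,\dot{\gamma}^{\,n},
  \qquad
  \eta_{\mathrm{eff}} = \frac{\tau}{\dot{\gamma}} = K\,\dot{\gamma}^{\,n-1}.
\end{equation}
% n = 1 recovers a Newtonian fluid (constant viscosity);
% n < 1 is shear-thinning (ketchup, non-drip paint);
% n > 1 is shear-thickening (oobleck: shear it harder, it gets stiffer).
```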

Wu works in the lab of materials scientist Yue (Jessica) Wang, who decided to try to mimic the shear-thickening behavior of oobleck in a polymer material. Flexible polymer electronics are usually made by linking together conjugated conductive polymers, which are long and thin, like spaghetti. But these materials will still break apart in response to particularly large and/or rapid impacts.

So Wu and Wang decided to combine the spaghetti-like polymers with shorter polyaniline molecules and poly(3,4-ethylenedioxythiophene) polystyrene sulfonate, or PEDOT:PSS—four different polymers in all. Two of the four have a positive charge, and two have a negative charge. They used that mixture to make stretchy films and then tested the mechanical properties.

Lo and behold, the films behaved very much like oobleck, deforming and stretching in response to impact rather than breaking apart. Wang likened the structure to a big bowl of spaghetti and meatballs since the positively charged molecules don’t like water and therefore cluster into ball-like microstructures. She and Wu suggest that those microstructures absorb impact energy, flattening without breaking apart. And it doesn’t take much PEDOT:PSS to get this effect: just 10 percent was sufficient.

Further experiments identified an even more effective additive: positively charged 1,3-propanediamine nanoparticles. These particles can weaken the “meatball” polymer interactions just enough so that they can deform even more in response to impacts, while strengthening the interactions between the entangled long spaghetti-like polymers.

The next step is to apply their polymer films to wearable electronics like smartwatch bands and sensors, as well as flexible electronics for monitoring health. Wang’s lab has also experimented with a new version of the material that would be compatible with 3D printing, opening up even more opportunities. “There are a number of potential applications, and we’re excited to see where this new, unconventional property will take us,” said Wang.

A giant meteorite has been lost in the desert since 1916—here’s how we might find it

“This story has everything…” —

A tale of “sand dunes, a guy named Gaston, secret aeromagnetic surveys, and camel drivers.”

Chinguetti slice at the National Museum of Natural History. A larger meteorite reported in 1916 hasn’t been spotted since.

In 1916, a French consular official reported finding a giant “iron hill” deep in the Sahara desert, roughly 45 kilometers (28 miles) from Chinguetti, Mauritania—purportedly a meteorite (technically a mesosiderite) some 40 meters (130 feet) tall and 100 meters (330 feet) long. He brought back a small fragment, but the meteorite hasn’t been found again since, despite the efforts of multiple expeditions, calling its very existence into question.

Three British researchers have conducted their own analysis and proposed a means of determining once and for all whether the Chinguetti meteorite really exists, detailing their findings in a new preprint posted to the physics arXiv. They contend that they have narrowed down the likely locations where the meteorite might be buried under high sand dunes and are currently awaiting access to data from a magnetometer survey of the region in hopes of either finding the mysterious missing meteorite or confirming that it likely never existed.

Captain Gaston Ripert was in charge of the Chinguetti camel corps. One day he overheard a conversation among the chameliers (camel drivers) about an unusual iron hill in the desert. He convinced a local chief to guide him there one night, taking Ripert on a 10-hour camel ride along a “disorienting” route, making a few detours along the way. He may even have been literally blindfolded, depending on how one interprets the French phrase en aveugle, which can mean either “blind” (i.e. without a compass) or “blindfolded.” The 4-kilogram fragment Ripert collected was later analyzed by noted geologist Alfred Lacroix, who considered it a significant discovery. But when others failed to locate the larger Chinguetti meteorite, people started to doubt Ripert’s story.

“I know that the general opinion is that the stone does not exist; that to some, I am purely and simply an imposter who picked up a metallic specimen,” Ripert wrote to French naturalist Theodore Monod in 1934. “That to others, I am a simpleton who mistook a sandstone outcrop for an enormous meteorite. I shall do nothing to disabuse them, I know only what I saw.”

Encouraged by a separate report of local blacksmiths claiming to recover iron from a giant block somewhere east or southeast of Chinguetti, Monod intermittently searched for the meteorite several times over the ensuing decades, to no avail. A pilot named Jacques Gallouédec thought he spotted a dark silhouette in the Saharan dunes in the 1980s. But neither Monod nor a second expedition in the late 1990s—documented by the UK’s Channel 4—could find anything. Monod concluded in 1989 that Ripert had likely mistakenly identified a sedimentary rock “with no trace of metal” as a meteorite.

Still, as Rutgers University physicist Matt Buckley noted on Bluesky, “This story has everything: giant unexplained meteorites, sand dunes, a guy named Gaston, ductile nickel needles, secret aeromagnetic surveys, and camel drivers.” So naturally, it intrigued Stephen Warren of Imperial College London, Oxford University’s Ekaterini Protopapa, and Robert Warren, who began their own search for the mysterious missing meteorite in 2020.

Alternate qubit design does error correction in hardware

We can fix that —

Early-stage technology has the potential to cut qubits needed for useful computers.

Image of a complicated set of wires and cables hooked up to copper-colored metal hardware.

Nord Quantique

There’s a general consensus that performing any sort of complex algorithm on quantum hardware will have to wait for the arrival of error-corrected qubits. Individual qubits are too error-prone to be trusted for complex calculations, so quantum information will need to be distributed across multiple qubits, allowing monitoring for errors and intervention when they occur.

But most ways of making these “logical qubits” needed for error correction require anywhere from dozens to over a hundred individual hardware qubits. This means we’ll need anywhere from tens of thousands to millions of hardware qubits to do calculations. Existing hardware has only cleared the 1,000-qubit mark within the last month, so that future appears to be several years off at best.

But on Thursday, a company called Nord Quantique announced that it had demonstrated error correction using a single qubit with a distinct hardware design. While this has the potential to greatly reduce the number of hardware qubits needed for useful error correction, the demonstration involved a single qubit—the company doesn’t even expect to demonstrate operations on pairs of qubits until later this year.

Meet the bosonic qubit

The technology underlying this work is termed a bosonic qubit, and it’s nothing new; an optical instrument company even has a product listing for bosonic qubits that notes their potential for use in error correction. But while the concepts behind using them in this manner were well established, demonstrations were lagging. Nord Quantique has now posted a paper on the arXiv that details a demonstration of them actually lowering error rates.

The devices are structured much like a transmon, the form of qubit favored by tech heavyweights like IBM and Google. There, the quantum information is stored in a loop of superconducting wire and is controlled by what’s called a microwave resonator—a small bit of material where microwave photons will reflect back and forth for a while before being lost.

A bosonic qubit turns that situation on its head. In this hardware, the quantum information is held in the photons, while the superconducting wire and resonator control the system. These are both hooked up to a coaxial cavity (think of a structure that, while microscopic, looks a bit like the end of a cable connector).

Massively simplified, the quantum information is stored in the manner in which the photons in the cavity interact. The state of the photons can be monitored by the linked resonator/superconducting wire. If something appears to be off, the resonator/superconducting wire allows interventions to be made to restore the original state. Additional qubits are not needed. “A very simple and basic idea behind quantum error correction is redundancy,” co-founder and CTO Julien Camirand Lemyre told Ars. “One thing about resonators and oscillators in superconducting circuits is that you can put a lot of photons inside the resonators. And for us, the redundancy comes from there.”
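
To make that redundancy concrete, here is one well-studied bosonic encoding, the two-component “cat code” (a generic textbook illustration; see the preprint for the specific encoding Nord Quantique uses):

```latex
% Cat code: logical states built from superpositions of coherent states
% |alpha> and |-alpha> of the cavity field.
\begin{equation}
  |0_L\rangle \propto |\alpha\rangle + |{-\alpha}\rangle ,
  \qquad
  |1_L\rangle \propto |\alpha\rangle - |{-\alpha}\rangle .
\end{equation}
% |0_L> contains only even photon numbers and |1_L> only odd ones, so
% losing a single photon flips the photon-number parity. Repeatedly
% measuring that parity through the coupled resonator flags loss errors
% without reading out -- and thereby destroying -- the stored information.
```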

This process doesn’t correct all possible errors, so it doesn’t eliminate the need for logical qubits made from multiple underlying hardware qubits. In theory, though, you can catch the two most common forms of errors that qubits are prone to (bit flips and changes in phase).

In the arXiv preprint, the team at Nord Quantique demonstrated that the system works. Using a single qubit and simply measuring whether it holds onto its original state, the error correction system can reduce problems by 14 percent. Unfortunately, overall fidelity is also low, starting at about 85 percent, which is significantly below what’s seen in other systems that have been through years of development work. Some qubits have been demonstrated with a fidelity of over 99 percent.

Getting competitive

So there’s no question that Nord Quantique is well behind a number of the leaders in quantum computing that can perform (error-prone) calculations with dozens of qubits and have far lower error rates. Again, Nord Quantique’s work was done using a single qubit—and without doing any of the operations needed to perform a calculation.

Lemyre told Ars that while the company is small, it benefits from being a spin-out of the Institut Quantique at Sherbrooke University, one of Canada’s leading quantum research centers. In addition to having access to the expertise there, Nord Quantique uses a fabrication facility at Sherbrooke to make its hardware.

Over the next year, the company expects to demonstrate that the error correction scheme can function while pairs of qubits are used to perform gate operations, the fundamental units of calculations. Another high priority is to combine this hardware-based error correction with more traditional logical qubit schemes, which would allow additional types of errors to be caught and corrected. This would involve operations with a dozen or more of these bosonic qubits at a time.

But the real challenge will be in the longer term. The company is counting on its hardware’s ability to handle error correction to reduce the number of qubits needed for useful calculations. But if its competitors can scale up the number of qubits fast enough while maintaining the control and error rates needed, that may not ultimately matter. Put differently, if Nord Quantique is still in the hundreds of qubit range by the time other companies are in the hundreds of thousands, its technology might not succeed even if it has some inherent advantages.

But that’s the fun part about the field as things stand: We don’t really know. A handful of very different technologies are already well into development and show some promise. And there are other approaches that are still early in the development process but are thought to have a smoother path to scaling to useful numbers of qubits. All of them will have to scale to a minimum of tens of thousands of qubits while enabling quantum manipulations that were cutting-edge science just a few decades ago.

Looming in the background is the simple fact that we’ve never tried to scale anything like this to the extent that will be needed. Unforeseen technical hurdles might limit progress at some point in the future.

Despite all this, there are people backing each of these technologies who know far more about quantum mechanics than I ever will. It’s a fun time.

Mathematicians finally solved Feynman’s “reverse sprinkler” problem

A decades-old conundrum —

We might not need to “unwater” our lawns, but results could help control fluid flows.

Light-scattering microparticles reveal the flow pattern for the reverse (sucking) mode of a sprinkler, showing vortices and complex flow patterns forming inside the central chamber. Credit: K. Wang et al., 2024

A typical lawn sprinkler features various nozzles arranged at angles on a rotating wheel; when water is pumped in, they release jets that cause the wheel to rotate. But what would happen if the water were sucked into the sprinkler instead? In which direction would the wheel turn then, or would it even turn at all? That’s the essence of the “reverse sprinkler” problem that physicists, Richard Feynman among them, have grappled with since the 1940s. Now, applied mathematicians at New York University think they’ve cracked the conundrum, per a recent paper published in the journal Physical Review Letters—and the answer challenges conventional wisdom on the matter.

“Our study solves the problem by combining precision lab experiments with mathematical modeling that explains how a reverse sprinkler operates,” said co-author Leif Ristroph of NYU’s Courant Institute. “We found that the reverse sprinkler spins in the ‘reverse’ or opposite direction when taking in water as it does when ejecting it, and the cause is subtle and surprising.”

Ristroph’s lab frequently addresses these kinds of colorful real-world puzzles. For instance, back in 2018, Ristroph and colleagues fine-tuned the recipe for the perfect bubble based on experiments with soapy thin films. (You want a circular wand with a 1.5-inch perimeter, and you should gently blow at a consistent 6.9 cm/s.) In 2021, the Ristroph lab looked into the formation processes underlying so-called “stone forests” common in certain regions of China and Madagascar. These pointed rock formations, like the famed Stone Forest in China’s Yunnan Province, are the result of solids dissolving into liquids in the presence of gravity, which produces natural convective flows.

In 2021, his lab built a working Tesla valve, in accordance with the inventor’s design, and measured the flow of water through the valve in both directions at various pressures. They found the water flowed about two times slower in the nonpreferred direction. And in 2022, Ristroph studied the surprisingly complex aerodynamics of what makes a good paper airplane—specifically what is needed for smooth gliding. They found that paper airplane aerodynamics differ substantially from those of conventional aircraft, which rely on airfoils to generate lift.

Illustration of a “reaction wheel” from Ernst Mach’s Mechanik (1883).

Public domain

The reverse sprinkler problem is associated with Feynman because he popularized the concept, but it actually dates back to a chapter in Ernst Mach’s 1883 textbook The Science of Mechanics (Die Mechanik in ihrer Entwicklung historisch-kritisch dargestellt). Mach’s thought experiment languished in relative obscurity until a group of Princeton University physicists began debating the issue in the 1940s.

Feynman was a graduate student there at the time and threw himself into the debate with gusto, even devising an experiment in the cyclotron laboratory to test his hypothesis. (In true Feynman fashion, that experiment culminated with the explosion of a glass carboy used in the apparatus because of the high internal pressure.)

One might intuit that a reverse sprinkler would work just like a regular sprinkler, merely played backward, so to speak. But the physics turns out to be more complicated. “The answer is perfectly clear at first sight,” Feynman wrote in Surely You’re Joking, Mr. Feynman (1985). “The trouble was, some guy would think it was perfectly clear [that the rotation would be] one way, and another guy would think it was perfectly clear the other way.”

Astronomers found ultra-hot, Earth-sized exoplanet with a lava hemisphere

Like Kepler-10 b, illustrated above, newly discovered exoplanet HD 63433 d is a small, rocky planet in a tight orbit of its star.

NASA/Ames/JPL-Caltech/T. Pyle

Astronomers have discovered an unusual Earth-sized exoplanet they believe has a hemisphere of molten lava, with its other hemisphere tidally locked in perpetual darkness. Co-authors and study leaders Benjamin Capistrant (University of Florida) and Melinda Soares-Furtado (University of Wisconsin-Madison) presented the details yesterday at a meeting of the American Astronomical Society in New Orleans. An associated paper has just been published in The Astronomical Journal. Another paper published today in the journal Astronomy and Astrophysics by a different group described the discovery of a rare small, cold exoplanet with a massive outer companion 100 times the mass of Jupiter.

As previously reported, thanks to the massive trove of exoplanets discovered by the Kepler mission, we now have a good idea of what kinds of planets are out there, where they orbit, and how common the different types are. What we lack is a good sense of what that implies in terms of the conditions on the planets themselves. Kepler can tell us how big a planet is, but it doesn’t know what the planet is made of. And planets in the “habitable zone” around stars could be consistent with anything from a blazing hell to a frozen rock.

The Transiting Exoplanet Survey Satellite (TESS) was launched with the intention of helping us figure out what exoplanets are actually like. TESS is designed to identify planets orbiting bright stars relatively close to Earth, conditions that should allow follow-up observations to figure out their compositions and potentially those of their atmospheres.

Both Kepler and TESS identify planets using what’s called the transit method. This works for systems in which the planets orbit in a plane that takes them between their host star and Earth. As this occurs, the planet blocks a small fraction of the starlight that we see from Earth (or from telescopes in nearby orbits). If these dips in light occur with regularity, they’re diagnostic of something orbiting the star.

This tells us something about the planet. The frequency of the dips in the star’s light tells us how long an orbit takes, which tells us how far the planet is from its host star. That, combined with the host star’s brightness, tells us how much incoming light the planet receives, which will influence its temperature. (The range of distances at which temperatures are consistent with liquid water is called the habitable zone.) And we can use that, along with how much light is being blocked, to figure out how big the planet is.
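
Both inferences fall out of simple relations; a sketch with illustrative round numbers (the 4.2-day period is roughly that reported for HD 63433 d, and a Sun-like host is assumed for the Kepler’s-third-law step):

```python
import math

# Transit depth: fraction of starlight blocked = (R_planet / R_star)^2.
R_EARTH_M, R_SUN_M = 6.371e6, 6.957e8
depth = (R_EARTH_M / R_SUN_M) ** 2
print(f"Earth-sized planet, Sun-like star: {depth * 1e6:.0f} ppm dip")  # ~84

# Kepler's third law turns period into distance: a^3 = G M P^2 / (4 pi^2).
G, M_SUN_KG, AU_M = 6.674e-11, 1.989e30, 1.496e11
period_s = 4.2 * 86_400
a_m = (G * M_SUN_KG * period_s**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"4.2-day orbit around a Sun-like star: {a_m / AU_M:.3f} AU")
```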

But to really understand other planets and their potential to support life, we have to understand what they’re made of and what their atmosphere looks like. While TESS doesn’t answer those questions, it’s designed to find planets with other instruments that could answer them.

Astronomers think they finally know origin of enormous “cosmic smoke rings“

Space oddity —

Massive stars burn out quickly. When they die, they expel their gas as outflowing winds.

Odd radio circles, like ORC 1 pictured above, are large enough to contain galaxies in their centers and reach hundreds of thousands of light-years across.

Jayanne English / University of Manitoba

The discovery of so-called “odd radio circles” several years ago had astronomers scrambling to find an explanation for these enormous regions of radio waves so far-reaching that they have galaxies at their centers. Scientists at the University of California, San Diego, think they have found the answer: outflowing galactic winds from exploding stars in so-called “starburst” galaxies. They described their findings in a new paper published in the journal Nature.

“These galaxies are really interesting,” said Alison Coil of the University of California, San Diego. “They occur when two big galaxies collide. The merger pushes all the gas into a very small region, which causes an intense burst of star formation. Massive stars burn out quickly, and when they die, they expel their gas as outflowing winds.”

As reported previously, the discovery arose from the Evolutionary Map of the Universe (EMU) project, which aims to take a census of radio sources in the sky. Several years ago, Ray Norris, an astronomer at Western Sydney University and CSIRO in Australia, predicted the EMU project would make unexpected discoveries. He dubbed them “WTFs.” Anna Kapinska, an astronomer at the National Radio Astronomy Observatory (NRAO), was browsing through radio astronomy data collected by CSIRO’s Australian Square Kilometer Array Pathfinder (ASKAP) telescope when she noticed several strange shapes that didn’t seem to resemble any known type of object. Following Norris’ nomenclature, she labeled them as possible WTFs. One of those was a picture of a ghostly circle of radio emission, “hanging out in space like a cosmic smoke ring.”

Other team members soon found two more weird round blobs, which they dubbed “odd radio circles” (ORCs). A fourth ORC was identified in archival data from India’s Giant MetreWave Radio Telescope, and a fifth was discovered in fresh ASKAP data in 2021. There are several more objects that might also be ORCs. Based on this, the team estimates there could be as many as 1,000 ORCs in all.

While Norris et al. initially assumed the blobs were just imaging artifacts, data from other radio telescopes confirmed they were a new class of astronomical object. They don’t show up in standard optical telescopes or in infrared and X-ray telescopes—only in the radio spectrum. Astronomers suspect the radio emissions are due to clouds of electrons. But that wouldn’t explain why ORCs don’t show up in other wavelengths. All of the confirmed ORCs thus far have a galaxy at the center, suggesting this might be a relevant factor in how they form. And they are enormous, measuring about a million light-years across, which is larger than our Milky Way.

As for what caused the explosions that led to the formation of ORCs, new data reported in 2022 was sufficient to rule out all but three possibilities. The first is that ORCs are the result of a shockwave from the center of a galaxy, perhaps arising from the merging of two supermassive black holes. Alternatively, they could be the result of radio jets spewing particles from active galactic nuclei. Finally, ORCs may be shells caused by starburst events (“termination shock”), which would produce a spherical shock wave as hot gas blasted out from a galactic center.

A simulation of starburst-driven winds at three different time periods, starting at 181 million years. The top half of each image shows gas temperature, while the lower half shows the radial velocity.

Cassandra Lochhaas / Space Telescope Science Institute

Coil et al. were intrigued by the discovery of ORCs. They had been studying starburst galaxies, which are noteworthy for their very high rate of star formation, making them appear bright blue. The team thought the later stages of those starburst galaxies might explain the origin of ORCs, but they needed more than radio data to prove it. So the team used the integral field spectrograph at the W.M. Keck Observatory in Hawaii to take a closer look at ORC 4, the first radio circle observable from the Northern Hemisphere. That revealed a much higher amount of bright, heated, compressed gas than one would see in an average galaxy. Additional optical and infrared imaging data revealed that the stars in the ORC 4 galaxy are about 6 billion years old. New star formation seems to have ended about a billion years ago.

The next step was to run computer simulations of the odd radio circle itself spanning the course of 750 million years. Those simulations showed an initial 200-million-year period with powerful outflowing galactic winds, followed by a shock wave that propelled very hot gas out of the galaxy to create a radio ring. Meanwhile, a reverse shock wave sent cooler gas back into the central galaxy.

“To make this work, you need a high-mass outflow rate, meaning it’s ejecting a lot of material very quickly. And the surrounding gas just outside the galaxy has to be low density, otherwise the shock stalls. These are the two key factors,” said Coil. “It turns out the galaxies we’ve been studying have these high-mass outflow rates. They’re rare, but they do exist. I really do think this points to ORCs originating from some kind of outflowing galactic winds.” She also thinks that ORCs could help astronomers understand more about galactic outflowing winds, since they make it possible to “see” those winds through radio data and spectrometry.

Nature, 2024. DOI: 10.1038/s41586-023-06752-8  (About DOIs).
