The Centers for Disease Control and Prevention is preparing to update its COVID-19 isolation guidance, moving from a minimum five-day isolation period to one that is solely determined by symptoms, according to a report from The Washington Post.
Currently, CDC isolation guidance states that people who test positive for COVID-19 should stay home for at least five days, at which point people can end their isolation as long as their symptoms are improving and they have been fever-free for 24 hours.
According to three unnamed officials who spoke with the Post, the CDC will update its guidance to remove the five-day minimum, recommending more simply that people can end their isolation any time after being fever-free for 24 hours without the aid of medication, as long as any other remaining symptoms are mild and improving. The change, which is expected to be released in April, would be the first to loosen the guidance since the end of 2021.
In an email to Ars, a CDC spokesperson did not confirm or deny the report, saying only that, “There are no updates to COVID guidelines to announce at this time. We will continue to make decisions based on the best evidence and science to keep communities healthy and safe.”
The Post notes that the proposed update to the guidance matches updated guidance from California and Oregon, as well as the approach taken in some other countries.
The officials who spoke with the outlet noted that the loosened guidelines reflect that most people in the US have developed some level of immunity to the pandemic coronavirus from prior infections and vaccinations.
A report earlier this month found that the 2023–2024 COVID-19 vaccine was about 54 percent effective at preventing symptomatic COVID-19 when compared against people who had not received the latest vaccine. However, the CDC estimates that only about 22 percent of adults have received the updated shot.
Currently, the CDC recommends that people wear a mask for 10 days after testing positive unless they have two negative tests 48 hours apart. The Post reported that it’s unclear if the CDC will update its mask recommendation.
Stephen Ubl, president and chief executive officer of Pharmaceutical Research and Manufacturers of America (PhRMA), speaks during a Bloomberg Live discussion in Washington, DC, on Tuesday, Sept. 19, 2017.
A federal judge in Texas dismissed a lawsuit Monday brought by a heavy-hitting pharmaceutical trade group, which argued that forcing drug makers to negotiate Medicare drug prices is unconstitutional.
The dismissal is a small win for the Biden administration, which is defending the price negotiations on multiple fronts. The lawsuit dismissed Monday is just one of nine from the pharmaceutical industry, all claiming in some way that the price negotiations laid out in the Inflation Reduction Act of 2022 are unconstitutional. The big pharmaceutical companies suing the government directly over the negotiations include Johnson & Johnson, Bristol Myers Squibb, Novo Nordisk, Merck, and AstraZeneca.
Last month, a federal judge in Delaware heard arguments from AstraZeneca’s lawyers, which reportedly went poorly. AstraZeneca argued that Medicare’s new power to negotiate drug prices violates the company’s rights under the Fifth Amendment’s due process clause. The forced negotiations deprive the company of “property rights in their drug products and their patent rights” without due process, AstraZeneca claimed. But Colm Connolly, chief judge of the US District Court of Delaware, was skeptical of how that could be the case, according to a Stat reporter who was present for the hearing. Connolly noted that AstraZeneca doesn’t have to sell drugs to Medicare. “You’re free to do what you want,” Connolly reportedly said. “You may not make as much money.”
At a later point, Connolly bluntly commented: “I don’t find their argument compelling.”
Though the plaintiffs in the now-dismissed Texas case also made an argument based on the Fifth Amendment’s due process clause, the case didn’t get that far. US District Judge David Ezra in Austin, Texas, dismissed the claims brought by one of the three plaintiffs, saying the court lacked jurisdiction over them. And, because that plaintiff is the only one based in the Western District of Texas, where the lawsuit was filed, he dismissed the case entirely.
The three plaintiffs in the case were PhRMA, a powerful drug industry trade group representing high-profile drug makers, including Pfizer, GSK, Eli Lilly, and Sanofi; the Global Colon Cancer Association (GCCA); and the Texas-based National Infusion Center Association (NICA). Lawyers for the Biden administration filed a motion to dismiss the case, arguing that NICA is not a proper plaintiff.
Ezra found that before NICA can bring constitutional claims against Medicare’s price negotiations in court, it is first required under federal rules to bring those claims through an administrative review process under the Medicare Act with the Centers for Medicare and Medicaid Services. Without that prior administrative review, the court has no jurisdiction.
“The Court lacks jurisdiction over NICA’s claims because the claims here ‘arise under’ the Medicare Act and the claims do not fall under the exception carved out for when claims may completely avoid judicial or administrative review. Therefore, NICA’s claims are dismissed without prejudice,” Ezra wrote in his ruling.
And, with the one Texas-based plaintiff, NICA, knocked out of the case, the Western District of Texas became an “improper venue” for a case brought by the remaining two plaintiffs, PhRMA and GCCA.
Ezra noted that in such situations, a judge can transfer the case to a court that would be considered a proper venue. But Ezra declined, noting that neither the plaintiffs nor defendants suggested a proper venue. And, even if they did, it likely wouldn’t matter, Ezra reasoned, because PhRMA and GCCA also haven’t gone through an administrative review.
“[T]he same federal jurisdictional defect likely exists for PhRMA and GCCA, as nothing suggests that either party has presented its claims to the [Health] Secretary,” Ezra wrote.
Ezra dismissed the case “without prejudice,” meaning the claims could be refiled. A spokesperson for PhRMA told FiercePharma: “We are disappointed with the court’s decision, which does not address the merits of our lawsuit, and we are weighing our next legal steps.”
Meanwhile, the first round of Medicare drug price negotiations is underway. Earlier this month, the federal government sent out its opening offers in the price negotiation process for the first 10 drugs selected. The bargaining will continue through the coming months, with an ending deadline of August 1, 2024. The prices will go into effect at the beginning of 2026.
An Oregon resident contracted bubonic plague from their “very sick” pet cat, marking the first time since 2015 that someone in the state has been stricken with the Black Death bacterium, according to local health officials.
Plague bacteria, Yersinia pestis, circulates cryptically in the US in various types of rodents and their fleas. It causes an average of seven human cases a year, with a range of 1 to 17, according to the Centers for Disease Control and Prevention. The cases tend to cluster in two regions, the CDC notes: a hotspot that spans northern New Mexico, northern Arizona, and southern Colorado, and another region spanning California, far western Nevada, and southern Oregon.
The new case in Oregon occurred in the central county of Deschutes. It was fortunately caught early, before the infection developed into a more severe, systemic bloodstream infection (septicemic plague). However, according to a local official who spoke with NBC News, some doctors were concerned that the person had developed a cough while being treated at the hospital, which could indicate progression toward pneumonic plague, a more life-threatening and more readily contagious form of the disease that spreads via respiratory droplets. Nevertheless, the person reportedly responded well to antibiotic treatment and is recovering.
Health officials worked to prevent the spread of the disease. “All close contacts of the resident and their pet have been contacted and provided medication to prevent illness,” Richard Fawcett, Deschutes County Health Officer, said in a news release.
Fawcett told NBC News that the cat was “very sick” and had a draining abscess, indicating “a fairly substantial” infection. The person could have become infected by plague-infected fleas from the cat or by handling the sick cat or its bodily fluids directly. Symptoms usually develop two to eight days after exposure, when the infection occurs in the lymph nodes. Early symptoms include sudden onset of fever, nausea, weakness, chills, muscle aches, and/or visibly swollen lymph nodes called buboes. If left untreated, the infection progresses to the septicemic or pneumonic forms.
It’s unclear how or why the cat became infected. But cats are particularly susceptible to plague and are considered a common source of infection in the US. When left to roam outdoors, the animals can pick up the infection from fleas or from killing and eating infected rodents. Though dogs can also pick up the infection from fleas or other animals, they are less likely to develop clinical illness, according to the CDC.
While plague cases are generally rare in the US, Deschutes County Health Services offered general tips to keep from contracting the deadly bacteria: avoid contact with fleas and rodents, particularly sick, injured, or dead ones; keep pets on a leash and protected with flea control products; keep rodents out of and away from homes and other buildings; avoid areas with lots of rodents while camping and hiking; and wear insect repellent when outdoors to ward off fleas.
According to the CDC, there were 496 plague cases in the US between 1970 and 2020. And between 2000 and 2020, the CDC counted 14 deaths.
Victor Wembanyama of the San Antonio Spurs drives on Moritz Wagner of the Orlando Magic during a game on February 8 in Orlando, Florida.
The NBA’s tallest rookie is 7 feet 4 inches tall with an 8-foot wingspan, but last year, a series of video clips highlighted his surprisingly nimble, and often shoeless, feet. In one clip, he’s pressing knees and ankles together while wiggling his toes and hopping forward. In another, he’s bear crawling along the baseline. And in yet another, his right heel and left toes are gliding in opposite directions, gym music pounding in the background, as he eases into the splits.
Victor Wembanyama (pronounced wem-ben-YAH-muh) was selected first in last June’s NBA draft at the age of 19. By then, he had played four years of professional basketball in his native France. With a preternatural blend of size, athleticism, and skill, Wembanyama is routinely described as a generational talent. And if the toe-exercise videos are any indication, his trainers appear determined to protect that talent: Sports medicine experts say that long limbs and feet—Wembanyama’s shoe size is 20.5—confer potential physical vulnerabilities.
Leg, arm, and foot bones all function like levers, and the longer they are, the more force is needed to stabilize them. Tall athletes may find it harder to control their movement as they land from a jump or quickly shift direction. Seven-footers, of course, aren’t the only athletes who get hurt while playing. Across the NBA, injuries are on the rise, with knee, ankle, and foot problems leading the way. Anecdotally, physicians and trainers also report seeing children, some as young as 10 years old, with severe sports-related injuries and chronic wear-and-tear that was once seen mainly in adults.
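To see why longer levers demand more stabilizing force, here is a minimal Python sketch of the torque relationship; the load and segment lengths are made-up illustrative values, not biomechanical data.

```python
# Toy lever model: for the same load at the end of a limb segment, the torque
# a joint must resist grows with the segment's length. All numbers here are
# illustrative assumptions, not measurements.

load_at_end_n = 700                          # assumed load (e.g., landing force), in newtons

for segment_length_m in (0.25, 0.30, 0.35):  # assumed lever-arm lengths, in meters
    torque_nm = load_at_end_n * segment_length_m
    print(f"lever arm {segment_length_m:.2f} m -> {torque_nm:.0f} N·m to stabilize")

# lever arm 0.25 m -> 175 N·m to stabilize
# lever arm 0.30 m -> 210 N·m to stabilize
# lever arm 0.35 m -> 245 N·m to stabilize
```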
All of this has fueled a small but growing body of scientific research into basketball and other sports-related injuries. In biomechanics laboratories, experts are creating detailed assessments of how players move on the court. Epidemiologists are poring over reams of data. And NBA teams have been experimenting with new approaches for player safety, including an emphasis on load management, which seeks to optimize an athlete’s ratio of stress to rest.
Much of this science is still unsettled, and there is no foolproof method for injury prevention. But experts who spoke with Undark said there is a solid evidence base for specific warm-up programs, stretches, and exercises that reduce injuries. The findings have not been fully disseminated and implemented. Yet, there’s clearly interest in the topic.
The shoes of Victor Wembanyama, size 20.5, before a game in January. Across the NBA, injuries are on the rise, with knee, ankle, and foot problems leading the way. And experts say long limbs and feet confer potential physical vulnerabilities.
Wembanyama’s barefoot calisthenics have inspired explanatory videos with titles like “Victor Wembanyama’s Weird Toe Workout EXPLAINED!” as well as comments and replies asserting that such exercises could have benefitted previous generations of tall players. “This is smart they’re trying to keep his feet healthy,” wrote a New York Knicks superfan on X, formerly known as Twitter. “[Too] many big men have gone down with foot injuries.”
Danny Seidman, a Michigan-based sports medicine physician (and self-described NBA fanatic), said he’s glad the videos went viral. They got people talking about injury prevention, which has come a long way over the past few decades. “It’s sad for me to see previous athletes in their 50s and 60s who are limping around or hunched over,” said Seidman. “We think we can avoid those sorts of things now.”
Super Bowl LVIII will be played on a natural grass field in an indoor stadium in Las Vegas on February 11, 2024. How do you keep a grass field vibrant in a growing environment as hostile as the Nevada desert?
The answer: You don’t. By the end of the regular NFL season, paint was used to camouflage the reality that only a few scant patches of grass remained in Allegiant Stadium, home to the Las Vegas Raiders. Immediately after the Raiders’ last game on January 7, 2024, the field crew ripped up the remaining grass, installed California-grown sod over three days, and began the tedious process of keeping the grass alive long enough for the big game.
Herculean efforts to prepare a vibrant natural grass field for 2024’s Super Bowl LVIII are especially questionable when one realizes that Allegiant Stadium also has an artificial turf playing surface available (used by UNLV Football). Why don’t teams in hostile environments switch to more robust artificial turf, which is designed to overcome the many limitations of natural grass fields?
The answer lies in a debate over the safety of synthetic playing surfaces. While artificial turf manufacturers tout research showing that their products result in fewer injuries, the NFL Players Association (NFLPA) claims the surfaces raise injury risk and is advocating for their removal from the NFL. Let’s explore some key arguments of this debate, which continues to grab headlines with each high-profile NFL injury.
Super Bowl gridirons
Pressure for NFL field managers is especially high following the embarrassingly poor field conditions of last year’s Super Bowl. Super Bowl LVII took place at State Farm Stadium in Glendale, Arizona—another natural grass field in the desert (with a retractable roof, closed at night to protect the grass). Despite two years of preparation and an $800,000 investment, the grass field was a disaster, as players struggled to find footing on its slippery surface.
Veteran NFL groundskeeper George Toma attributed the mess to woefully improper field preparation. Players also complained about slipping the last time the Super Bowl was held on State Farm Stadium’s natural grass field, at Super Bowl XLIX in 2015. That year, the poor traction was blamed on the green paint used on the grass.
For perspective, some of the best sports field managers in the nation oversee field preparations for the Super Bowl. However, maintaining natural grass in desert conditions is so unfavorable (especially when the grass is sometimes indoors) that even the best can mess it up.
None of these issues existed when the Super Bowl was last played on artificial turf. Super Bowl LVI in 2022 was held at SoFi Stadium in Inglewood, California, home to both the Los Angeles Rams and Chargers. The turf there stood up to double the usual workload, hosting home games for both teams throughout the regular season, and then withstood a busy playoff schedule that ran right up to the final playoff game before the Super Bowl, in which the Rams beat the San Francisco 49ers. Two weeks later, the Rams won Super Bowl LVI on the very same surface.
While turf avoids the durability issues seen with grass surfaces, players have widespread concerns about its safety—a recent poll by the NFLPA reported 92 percent of players favored grass.
There’s a general consensus that performing any sort of complex algorithm on quantum hardware will have to wait for the arrival of error-corrected qubits. Individual qubits are too error-prone to be trusted for complex calculations, so quantum information will need to be distributed across multiple qubits, allowing monitoring for errors and intervention when they occur.
But most ways of making these “logical qubits” needed for error correction require anywhere from dozens to over a hundred individual hardware qubits. This means we’ll need anywhere from tens of thousands to millions of hardware qubits to do calculations. Existing hardware has only cleared the 1,000-qubit mark within the last month, so that future appears to be several years off at best.
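To put rough numbers on that scaling, here is a minimal sketch in Python; the target count of logical qubits and the per-logical-qubit overheads are illustrative assumptions, not figures from any particular error-correction scheme.

```python
# Back-of-the-envelope qubit overhead. The logical-qubit target and the
# overhead values are assumptions chosen to mirror the "dozens to over a
# hundred" range described above.

logical_qubits_needed = 1_000        # assumed size of a useful machine
overheads = (50, 100, 1_000)         # assumed physical qubits per logical qubit

for physical_per_logical in overheads:
    total = logical_qubits_needed * physical_per_logical
    print(f"{physical_per_logical:>5} physical per logical -> {total:>9,} hardware qubits")

#    50 physical per logical ->    50,000 hardware qubits
#   100 physical per logical ->   100,000 hardware qubits
#  1000 physical per logical -> 1,000,000 hardware qubits
```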
But on Thursday, a company called Nord Quantique announced that it had demonstrated error correction using a single qubit with a distinct hardware design. While this has the potential to greatly reduce the number of hardware qubits needed for useful error correction, the demonstration involved a single qubit—the company doesn’t even expect to demonstrate operations on pairs of qubits until later this year.
Meet the bosonic qubit
The technology underlying this work is the bosonic qubit, and these devices aren’t anything new; an optical instrument company even has a product listing for them that notes their potential for use in error correction. But while the concepts behind using them in this manner were well established, demonstrations were lagging. Nord Quantique has now posted a paper on the arXiv detailing a demonstration of the devices actually lowering error rates.
The devices are structured much like a transmon, the form of qubit favored by tech heavyweights like IBM and Google. There, the quantum information is stored in a loop of superconducting wire and is controlled by what’s called a microwave resonator—a small bit of material where microwave photons will reflect back and forth for a while before being lost.
A bosonic qubit turns that situation on its head. In this hardware, the quantum information is held in the photons, while the superconducting wire and resonator control the system. These are both hooked up to a coaxial cavity (think of a structure that, while microscopic, looks a bit like the end of a cable connector).
Massively simplified, the quantum information is stored in the manner in which the photons in the cavity interact. The state of the photons can be monitored by the linked resonator/superconducting wire. If something appears to be off, the resonator/superconducting wire allows interventions to be made to restore the original state. Additional qubits are not needed. “A very simple and basic idea behind quantum error correction is redundancy,” co-founder and CTO Julien Camirand Lemyre told Ars. “One thing about resonators and oscillators in superconducting circuits is that you can put a lot of photons inside the resonators. And for us, the redundancy comes from there.”
This process doesn’t correct all possible errors, so it doesn’t eliminate the need for logical qubits made from multiple underlying hardware qubits. In theory, though, you can catch the two most common forms of errors that qubits are prone to (bit flips and changes in phase).
In the arXiv preprint, the team at Nord Quantique demonstrated that the system works. Using a single qubit and simply measuring whether it holds onto its original state, the error correction system can reduce problems by 14 percent. Unfortunately, overall fidelity is also low, starting at about 85 percent, which is significantly below what’s seen in other systems that have been through years of development work. Some qubits have been demonstrated with a fidelity of over 99 percent.
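As a rough way to read those figures, the sketch below applies a 14 percent relative reduction to the error rate of a qubit starting near 85 percent fidelity; treating the quoted numbers this way is our assumption for intuition, not an analysis from the preprint.

```python
# Illustrative only: interpret "reduce problems by 14 percent" as a 14 percent
# relative cut in the error rate, starting from ~85 percent fidelity. This
# framing is an assumption, not Nord Quantique's own accounting.

starting_fidelity = 0.85
error_rate = 1 - starting_fidelity                 # 0.15
corrected_error_rate = error_rate * (1 - 0.14)     # 14% relative reduction
corrected_fidelity = 1 - corrected_error_rate

print(f"error rate: {error_rate:.3f} -> {corrected_error_rate:.3f}")
print(f"fidelity:   {starting_fidelity:.3f} -> {corrected_fidelity:.3f}")
# error rate: 0.150 -> 0.129
# fidelity:   0.850 -> 0.871   (vs. the >0.99 reported for mature qubits)
```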
Getting competitive
So there’s no question that Nord Quantique is well behind a number of the leaders in quantum computing that can perform (error-prone) calculations with dozens of qubits and have far lower error rates. Again, Nord Quantique’s work was done using a single qubit—and without doing any of the operations needed to perform a calculation.
Lemyre told Ars that while the company is small, it benefits from being a spin-out of the Institut Quantique at Sherbrooke University, one of Canada’s leading quantum research centers. In addition to having access to the expertise there, Nord Quantique uses a fabrication facility at Sherbrooke to make its hardware.
Over the next year, the company expects to demonstrate that the error correction scheme can function while pairs of qubits are used to perform gate operations, the fundamental units of calculations. Another high priority is to combine this hardware-based error correction with more traditional logical qubit schemes, which would allow additional types of errors to be caught and corrected. This would involve operations with a dozen or more of these bosonic qubits at a time.
But the real challenge will be in the longer term. The company is counting on its hardware’s ability to handle error correction to reduce the number of qubits needed for useful calculations. But if its competitors can scale up the number of qubits fast enough while maintaining the control and error rates needed, that may not ultimately matter. Put differently, if Nord Quantique is still in the hundreds of qubit range by the time other companies are in the hundreds of thousands, its technology might not succeed even if it has some inherent advantages.
But that’s the fun part about the field as things stand: We don’t really know. A handful of very different technologies are already well into development and show some promise. And there are others that are still early in the development process but are thought to have a smoother path to scaling to useful numbers of qubits. All of them will have to scale to a minimum of tens of thousands of qubits while enabling quantum manipulations that were cutting-edge science just a few decades ago.
Looming in the background is the simple fact that we’ve never tried to scale anything like this to the extent that will be needed. Unforeseen technical hurdles might limit progress at some point in the future.
Despite all this, there are people backing each of these technologies who know far more about quantum mechanics than I ever will. It’s a fun time.
A Falcon 9 rocket launched NASA’s PACE spacecraft this week. Credit: SpaceX
Welcome to Edition 6.30 of the Rocket Report! Looking ahead, there are some interesting launches coming up in the middle of this month. Here are some we have our eyes on: Intuitive Machines’ lunar lander on a Falcon 9 and a re-flight of Japan’s big H3 rocket next week; then there’s an Electron launch of an intriguing Astroscale mission and NASA’s Crew-8 the following week. Good luck to all.
As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.
Was Transporter created to ‘kill’ small launch? SpaceX’s Transporter missions, which regularly fly 100 or more small satellites into low-Earth orbit on Falcon 9 rideshare missions, have unquestionably harmed small satellite launch companies. While companies like Rocket Lab or Virgin Orbit could offer smallsat operators a precise orbit, there was no way to compete on price. “The Transporter program was created a few years ago with, in my opinion, the sole purpose of trying to kill new entrants like us,” said Sandy Tirtey, director of global commercial launch services at Rocket Lab, during a panel at the SmallSat Symposium on Wednesday.
Low-price guarantee … The panel was covered by Space News, and the rest of the article includes a lot of comments from small launch providers about how they provide value with dedicated services and so forth—pretty typical fare. However, the story does not really explore Tirtey’s statement. So, was Transporter created to kill small launch companies? As someone who has reported a lot on SpaceX over the years, I’ll offer my two cents. I don’t think the program was created with this intent; rather, it filled a market need (only Electron and India’s PSLV were meeting commercial smallsat demand in any volume at the time). It also gave Falcon 9 more commercial missions. However, I do believe it was ultimately priced with the intent of cutting small launch off at the knees.
FAA investigating Virgin Galactic’s dropped pin. Virgin Galactic reported an anomaly on its most recent flight, Galactic 06, which took place two weeks ago from a spaceport in New Mexico. The company said it discovered a dropped pin during a post-flight review of the mission, which carried two pilots and four passengers to an altitude of 55.1 miles (88.7 km). This alignment pin, according to Virgin Galactic, helps ensure the VSS Unity spaceship is aligned to its carrier aircraft when mating the vehicles, Ars reports.
Corrective actions to be required … Virgin Galactic said it reported the anomaly to the Federal Aviation Administration (FAA) on January 31. On Tuesday, the FAA confirmed that the mishap resulted in no injuries or damage to public property. “The FAA is overseeing the Virgin Galactic-led mishap investigation to ensure the company complies with its FAA-approved mishap investigation plan and other regulatory requirements,” the federal agency said in a statement. Before VSS Unity can return to flight, the FAA must approve Virgin Galactic’s final report, including corrective actions to prevent a similar problem in the future. (submitted by Ken the Bin)
HyImpulse ships suborbital rocket to launch site. German launch startup HyImpulse has confirmed that its SR75 rocket and all related support systems have been boxed up and have embarked on the long journey to Australia, European Spaceflight reports. SR75 is a single-stage suborbital launch vehicle that is designed to be capable of delivering up to 250 kilograms to a maximum altitude of around 200 kilometers.
Testing a pathfinder … The debut flight of SR75 had initially been slated to occur from SaxaVord in the United Kingdom. In fact, HyImpulse had received approval for the flight from the UK Civil Aviation Authority in mid-2023. However, with financial issues forcing work on the site to be temporarily slowed, HyImpulse was forced to look elsewhere. The launch will now take place from the South Launch Koonibba Test Range in Australia, possibly as soon as March. The test will certify several critical elements of the company’s larger orbital SL1 rocket. (submitted by Ken the Bin)
It turns out some of the informed speculation about the US military’s latest X-37B spaceplane mission was pretty much spot-on.
When the semi-classified winged spacecraft launched on December 28, it flew into orbit on top of a SpaceX Falcon Heavy rocket, which is much larger than the Atlas V and Falcon 9 rockets used to launch the X-37B on its previous missions.
This immediately sparked speculation that the X-37B would reach higher altitudes than its past flights, which remained in low-Earth orbit at altitudes of a few hundred miles. A discovery from Tomi Simola, a satellite tracking hobbyist living near Helsinki, Finland, appears to confirm this suspicion.
On Friday, Simola reported on social media and on SeeSat-L, a long-running online forum of satellite tracking enthusiasts, that he detected an unidentified object using a sky-watching camera. The camera is designed to continuously observe a portion of the sky to detect moving objects in space. A special software program helps identify known and unknown objects.
“Exciting news!” Simola posted on social media. “Orbital Test Vehicle 7 (OTV-7), which was launched to classified orbit last December, was seen by my SatCam! Here are images from the last two nights!”
Mike McCants, one of the more experienced satellite observers and co-administrator of the SeeSat-L forum, agreed with Simola’s conclusion that he found the X-37B spaceplane.
“Congrats to Tomi Simola for locating the secret X-37B spaceplane,” posted Jonathan McDowell, an astrophysicist and widely respected expert in spaceflight activity.
Higher than ever
Amateur observations of the spaceplane indicate it is flying in a highly elliptical orbit ranging between 201 and 24,133 miles in altitude (323 and 38,838 kilometers). The orbit is inclined 59.1 degrees to the equator.
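Those altitude figures are enough for a rough orbital-period estimate via Kepler’s third law. The sketch below assumes standard values for Earth’s radius and gravitational parameter and ignores perturbations, so treat the result as a ballpark figure.

```python
import math

# Rough period estimate for the reported orbit (about 323 x 38,838 km in
# altitude). Earth's radius and gravitational parameter are standard values;
# the calculation ignores perturbations and is only approximate.

earth_radius_km = 6_371
mu_km3_s2 = 398_600              # Earth's gravitational parameter GM

perigee_alt_km = 323
apogee_alt_km = 38_838

semi_major_axis_km = earth_radius_km + (perigee_alt_km + apogee_alt_km) / 2
period_s = 2 * math.pi * math.sqrt(semi_major_axis_km**3 / mu_km3_s2)

print(f"semi-major axis: {semi_major_axis_km:,.0f} km")   # ~25,950 km
print(f"orbital period:  {period_s / 3600:.1f} hours")    # roughly 11-12 hours
```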
This is not far off the predictions from the hobbyist tracking community before the launch in December. At that time, enthusiasts used information about the Falcon Heavy’s launch trajectory and drop zones for the rocket’s core booster and upper stage to estimate the orbit it would reach with the X-37B spaceplane.
The Space Force has not released any information about the orbit of the X-37B. While it took hobbyists about six weeks to find the X-37B on this mission, it typically took less time for amateur trackers to locate it when it orbited at lower altitudes on its previous missions. Despite the secrecy, it’s difficult to imagine the US military’s adversaries in China and Russia didn’t already know where the spaceplane was flying.
Military officials usually don’t disclose details about the X-37B’s missions while they are in space, providing updates only before each launch and then after each landing.
This is the seventh flight of an X-37B spaceplane since the first one launched in 2010. In a statement before the launch in December, the Space Force said this flight of the X-37B is focused on “a wide range of test and experimentation objectives.” Flying in “new orbital regimes” is among the test objectives, military officials said.
The military has two Boeing-built X-37B spaceplanes, or Orbital Test Vehicles, in its inventory. They are reusable and designed to launch inside the payload fairing of a conventional rocket, spend multiple years in space with the use of solar power, and then return to Earth for a landing on a three-mile-long runway, either at Vandenberg Space Force Base in California or at NASA’s Kennedy Space Center in Florida.
It resembles a miniature version of NASA’s retired space shuttle orbiter, with wings, deployable landing gear, and black thermal protection tiles to shield its belly from the scorching heat of reentry. It measures 29 feet (about 9 meters) long, roughly a quarter of the length of NASA’s space shuttle, and it doesn’t carry astronauts.
The X-37B has a cargo bay inside the fuselage for payloads, with doors that open after launch and close before landing. There is also a service module mounted to the back end of the spaceplane to accommodate additional experiments, payloads, and small satellites that can deploy in orbit to perform their own missions.
All the Space Force has said about the payloads on the current X-37B flight is that its experiment package includes investigations into new “space domain awareness technologies.” NASA is flying an experiment on the X-37B to measure how plant seeds respond to sustained exposure to space radiation. The spaceplane’s orbit on this flight takes it through the Van Allen radiation belts.
The secrecy surrounding the X-37B has sparked much speculation about its purpose, some of which centers on ideas that the spaceplane is part of a classified weapons platform in orbit. More likely, analysts say, the X-37B is a testbed for new space technologies. The unusual elliptical orbit for this mission is similar to the orbit used for some of the Space Force’s satellites designed to detect and warn of ballistic missile launches.
McDowell said this could mean the X-37B is testing out an infrared sensor for future early warning satellites, but then he cautioned this would be “just a wild speculation.”
Speculation is about all we have to go on regarding the X-37B. But it seems we no longer need to speculate about where the X-37B is flying.
When big pharmaceutical companies are confronted over their exorbitant pricing of prescription drugs in the US, they often retreat to two well-worn arguments: One, that the high drug prices cover the costs of researching and developing new drugs, a risky and expensive endeavor; and two, that middlemen—pharmacy benefit managers (PBMs), to be specific—are actually the ones price gouging Americans.
Both of these arguments took substantial blows in a hearing Thursday held by the Senate Committee on Health, Education, Labor and Pensions, chaired by Sen. Bernie Sanders (I-Vt.). In fact, pharmaceutical companies are spending billions of dollars more on lavish executive compensation, dividends, and stock buybacks than they spend on research and development (R&D) for new drugs, Sanders pointed out. “In other words, these companies are spending more to enrich their own stockholders and CEOs than they are in finding new cures and new treatments,” he said.
And, while PBMs certainly contribute to America’s uniquely astronomical drug pricing, their profiteering accounts for a small fraction of the massive drug market, Sanders and an expert panelist noted. PBMs work as shadowy middlemen between drugmakers, insurers, and pharmacies, setting drug formularies and consumer prices and negotiating rebates and discounts behind the scenes. Though PBM practices contribute to overall costs, they pale in comparison to pharmaceutical profits.
Rather, the heart of the problem, according to a Senate report released earlier this week, is pharmaceutical greed, patent gaming that allows drug makers to stretch out monopolies, and powerful lobbying.
On Thursday, the Senate committee gathered the CEOs of three behemoth pharmaceutical companies to question them on their drug pricing practices: Robert Davis of Merck, Joaquin Duato of Johnson & Johnson, and Chris Boerner of Bristol Myers Squibb.
“We are aware of the many important lifesaving drugs that your companies have produced, and that’s extraordinarily important,” Sanders said before questioning the CEOs. “But, I think, as all of you know, those drugs mean nothing to anybody who cannot afford it.”
America’s uniquely high prices
Sanders called drug pricing in the US “outrageous,” noting that Americans spend by far the most for prescription drugs in the world. A report this month by the US Department of Health and Human Services found that in 2022, US prices across all brand-name and generic drugs were nearly three times as high as prices in 33 other wealthy countries. That means that for every dollar paid in other countries for prescription drugs, Americans paid $2.78. And that gap is widening over time.
Focusing on drugs from the three companies represented at the hearing (J&J, Merck, and Bristol Myers Squibb), the Senate report looked at how initial prices for new drugs entering the US market have skyrocketed over the past two decades. The analysis found that from 2004 to 2008, the median launch price of innovative prescription drugs sold by J&J, Merck, and Bristol Myers Squibb was over $14,000. But, over the past five years, the median launch price was over $238,000. Those numbers account for inflation.
The report focused on high-profit drugs from each of the drug makers. Merck’s Keytruda, a cancer drug, costs $191,000 a year in the US, but is just $91,000 in France and $44,000 in Japan. J&J’s HIV drug, Symtuza, is $56,000 in the US, but only $14,000 in Canada. And Bristol Myers Squibb’s Eliquis, used to prevent strokes, costs $7,100 in the US, but $760 in the UK and $900 in Canada.
Sanders asked Bristol Myers Squibb’s CEO Boerner if the company would “reduce the list price of Eliquis in the United States to the price that you charge in Canada, where you make a profit?” Boerner replied that “we can’t make that commitment primarily because the prices in these two countries have very different systems.”
The powerful pharmaceutical trade group PhRMA published a blog post before the hearing saying that comparing US drug prices to prices in other countries “hurts patients.” The group argued that Americans have broader, faster access to drugs than people in other countries.
A nursing home resident is pushed along a corridor by a nurse.
Health insurance companies cannot use algorithms or artificial intelligence to determine care or deny coverage to members on Medicare Advantage plans, the Centers for Medicare & Medicaid Services (CMS) clarified in a memo sent to all Medicare Advantage insurers.
The memo—formatted like an FAQ on Medicare Advantage (MA) plan rules—comes just months after patients filed lawsuits claiming that UnitedHealth and Humana have been using a deeply flawed, AI-powered tool to deny care to elderly patients on MA plans. The lawsuits, which seek class-action status, center on the same AI tool, called nH Predict, used by both insurers and developed by NaviHealth, a UnitedHealth subsidiary.
According to the lawsuits, nH Predict produces draconian estimates for how long a patient will need post-acute care in facilities like skilled nursing homes and rehabilitation centers after an acute injury, illness, or event, like a fall or a stroke. And NaviHealth employees face discipline for deviating from the estimates, even though they often don’t match prescribing physicians’ recommendations or Medicare coverage rules. For instance, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, using nH Predict, patients on UnitedHealth’s MA plan rarely stay in nursing homes for more than 14 days before receiving payment denials, the lawsuits allege.
Specific warning
It’s unclear how nH Predict works exactly, but it reportedly uses a database of 6 million patients to develop its predictions. Still, according to people familiar with the software, it only accounts for a small set of patient factors, not a full look at a patient’s individual circumstances.
This is a clear no-no, according to the CMS’s memo. For coverage decisions, insurers must “base the decision on the individual patient’s circumstances, so an algorithm that determines coverage based on a larger data set instead of the individual patient’s medical history, the physician’s recommendations, or clinical notes would not be compliant,” the CMS wrote.
The CMS then provided a hypothetical that matches the circumstances laid out in the lawsuits, writing:
In an example involving a decision to terminate post-acute care services, an algorithm or software tool can be used to assist providers or MA plans in predicting a potential length of stay, but that prediction alone cannot be used as the basis to terminate post-acute care services.
Instead, the CMS wrote, in order for an insurer to end coverage, the individual patient’s condition must be reassessed, and the denial must be based on coverage criteria that are publicly posted on a website that is not password protected. In addition, insurers who deny care “must supply a specific and detailed explanation why services are either no longer reasonable and necessary or are no longer covered, including a description of the applicable coverage criteria and rules.”
In the lawsuits, patients claimed that when coverage of their physician-recommended care was unexpectedly and wrongfully denied, insurers didn’t give them full explanations.
Fidelity
In all, the CMS finds that AI tools can be used by insurers when evaluating coverage—but really only as a check to make sure the insurer is following the rules. An “algorithm or software tool should only be used to ensure fidelity” with coverage criteria, the CMS wrote. And, because “publicly posted coverage criteria are static and unchanging, artificial intelligence cannot be used to shift the coverage criteria over time” or apply hidden coverage criteria.
The CMS sidesteps any debate about what qualifies as artificial intelligence by offering a broad warning about algorithms and artificial intelligence. “There are many overlapping terms used in the context of rapidly developing software tools,” the CMS wrote.
Algorithms can imply a decisional flow chart of a series of if-then statements (i.e., if the patient has a certain diagnosis, they should be able to receive a test), as well as predictive algorithms (predicting the likelihood of a future admission, for example). Artificial intelligence has been defined as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
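To make the distinction concrete, here is a purely illustrative Python sketch contrasting a simple if-then decision rule with a toy predictive model; the diagnosis codes, coefficients, and values are invented for illustration and have no connection to nH Predict or any real coverage criteria.

```python
# Hypothetical illustration of the two kinds of tools the CMS describes.
# Every code, coefficient, and value here is made up.

def if_then_rule(diagnosis_code: str) -> bool:
    """Decisional flow chart: if the patient has a qualifying diagnosis,
    they should be able to receive the test."""
    qualifying_codes = {"E11", "I10"}       # hypothetical diagnosis codes
    return diagnosis_code in qualifying_codes

def predicted_admission_risk(age: int, prior_admissions: int) -> float:
    """Predictive algorithm: a toy estimate of the likelihood of a future
    admission from a couple of features (not a real model)."""
    score = 0.02 * age + 0.10 * prior_admissions
    return min(score, 1.0)

print(if_then_rule("E11"))                  # True -> the stated rule is satisfied
print(predicted_admission_risk(30, 2))      # 0.8  -> a prediction, which the CMS says
                                            # cannot by itself justify ending care
```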
The CMS also openly worried that the use of either of these types of tools can reinforce discrimination and biases—which has already happened with racial bias. The CMS warned insurers to ensure any AI tool or algorithm they use “is not perpetuating or exacerbating existing bias, or introducing new biases.”
While the memo overall was an explicit clarification of existing MA rules, the CMS ended by putting insurers on notice that it is increasing its audit activities and “will be monitoring closely whether MA plans are utilizing and applying internal coverage criteria that are not found in Medicare laws.” Non-compliance can result in warning letters, corrective action plans, monetary penalties, and enrollment and marketing sanctions.
Death is coming for the old-school gas furnace—and its killer is the humble heat pump. Heat pumps are already outselling gas furnaces in the US, and now a coalition of states has signed an agreement to supercharge the gas-to-electric transition by making it as cheap and easy as possible for their residents to switch.
Nine states have signed a memorandum of understanding that says that heat pumps should make up at least 65 percent of residential heating, air conditioning, and water-heating shipments by 2030. (“Shipments” here means systems manufactured, a proxy for how many are actually sold.) By 2040, these states—California, Colorado, Maine, Maryland, Massachusetts, New Jersey, New York, Oregon, and Rhode Island—are aiming for 90 percent of those shipments to be heat pumps.
“It’s a really strong signal from states that they’re committed to accelerating this transition to zero-emissions residential buildings,” says Emily Levin, senior policy adviser at the Northeast States for Coordinated Air Use Management (NESCAUM), an association of air-quality agencies, which facilitated the agreement. The states will collaborate, for instance, in pursuing federal funding, developing standards for the rollout of heat pumps, and laying out an overarching plan “with priority actions to support widespread electrification of residential buildings.”
Instead of burning planet-warming natural gas, a heat pump warms a building by transferring heat from the outdoor air into the interior space. Run it in the opposite direction, and it can cool the inside of a building—a heat pump is both a heater and AC unit. Because the system is electric, it can run off a grid increasingly powered by renewables like wind and solar. Even if you have to run a heat pump with electricity from fossil-fuel power plants, it’s much more efficient than a furnace, because it’s moving heat instead of creating it.
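The efficiency point is easy to put in numbers. The sketch below compares the input energy needed to deliver the same amount of heat with a gas furnace, electric resistance heating, and a heat pump; the demand figure, furnace efficiency, and coefficient of performance are assumed, typical-looking values rather than data from the coalition.

```python
# Back-of-the-envelope energy comparison for delivering the same heat.
# All values are assumed and illustrative; the takeaway is that a heat pump
# moves heat rather than generating it, so it needs far less input energy.

heat_delivered_kwh = 10_000       # assumed annual heating demand, in kWh of heat

furnace_efficiency = 0.95         # assumed high-efficiency gas furnace
resistance_efficiency = 1.0       # electric resistance: 1 kWh in, 1 kWh of heat out
heat_pump_cop = 3.0               # assumed coefficient of performance

print(f"gas furnace:     {heat_delivered_kwh / furnace_efficiency:,.0f} kWh of fuel")
print(f"resistance heat: {heat_delivered_kwh / resistance_efficiency:,.0f} kWh of electricity")
print(f"heat pump:       {heat_delivered_kwh / heat_pump_cop:,.0f} kWh of electricity")

# gas furnace:     10,526 kWh of fuel
# resistance heat: 10,000 kWh of electricity
# heat pump:       3,333 kWh of electricity
```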
A heat pump can save an average American household over $550 a year, according to one estimate. They’ve gotten so efficient that even when it’s freezing out, they can still extract warmth from the air to heat a home. You can even install a heat pump system that also warms your water. “We really need consumers to move away from dirty to clean heat, and we really want to get the message out that heat pumps are really the way to go,” says Serena McIlwain, Maryland’s secretary of the environment. “We have homeowners who are getting ready to replace their furnaces, and if they’re not aware, they are not going to replace it with a heat pump.”
The coalition’s announcement comes just months after the federal government doubled down on its own commitment to heat pumps, announcing $169 million in funding for the domestic production of the systems. That money comes from 2022’s Inflation Reduction Act, which also provides an American household with thousands of dollars in rebates or tax credits to switch to a heat pump.
These states are aiming to further collaborate with those heat pump manufacturers by tracking sales and overall progress, sending a signal to the industry to ramp up production to meet the ensuing demand. They’ll also collaborate with each other on research and generally share information, working toward the best strategies for realizing the transition from gas to electric. Basically, they’re pursuing a sort of standardization of the policies and regulations for getting more heat pumps built, bought, and installed, which other states outside of the coalition might eventually tap into.
“A consistent approach between states helps to ease the market transition,” says Matt Casale, senior manager of appliance standards at the Building Decarbonization Coalition, which is collaborating with the Northeast States for Coordinated Air Use Management. “There are all of these manufacturers, and all of these contractors, all along the supply chain, trying to plan out their next several years. They want to know: What is it going to look like?”
There’s also the less-talked-about challenge of the green energy revolution: training enough technicians to actually install the heat pumps. To that end, the memorandum calls for workforce development and contractor training. “If we’re pushing heat pumps and more installations, and we don’t have enough electricians to do the job, we’re not going to meet the goal—period,” says McIlwain. “We do need to put a lot of money and energy and resources into making sure that we have the workforce available to do it.”
In addition to the technicians working with the systems, the country needs way more electricians to retrofit homes to go fully electric beyond heat pumps, with solar panels and induction stoves and home batteries. To help there, last year the White House announced the formation of the American Climate Corps, which aims to put more than 20,000 people to work in clean energy and overall climate resilience.
With states collaborating like this on heat pumps, the idea is to lift the device from an obscure technology cherished by climate nerds into ubiquity, for the good of consumers and the planet. “We need to be sending these unmistakable signals to the marketplace that heat pumps and zero-emission homes are the future,” says Casale. “This agreement between this many states really sets the stage for doing that.”
Sending 1 kilogram to Mars will set you back roughly $2.4 million, judging by the cost of the Perseverance mission. If you want to pack up supplies and gear for every conceivable contingency, you’re going to need a lot of those kilograms.
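Taking that per-kilogram figure at face value, a quick sketch shows how fast the bill grows for even modest cargo; the example items and masses below are hypothetical, not from any mission manifest.

```python
# Illustrative shipping costs at the article's rough figure of $2.4 million
# per kilogram to Mars. The cargo items and masses are invented examples.

cost_per_kg = 2.4e6     # dollars per kilogram, from the Perseverance-based estimate

example_cargo_kg = {
    "toolbox": 10,
    "crate of spare parts": 100,
    "small habitat module": 1_000,
}

for item, mass_kg in example_cargo_kg.items():
    cost_millions = mass_kg * cost_per_kg / 1e6
    print(f"{item:<22} {mass_kg:>6,} kg -> ${cost_millions:,.0f} million")

# toolbox                    10 kg -> $24 million
# crate of spare parts      100 kg -> $240 million
# small habitat module    1,000 kg -> $2,400 million
```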
But what if you skipped almost all that weight and only took a do-it-all Swiss Army knife instead? That’s exactly what scientists at NASA Ames Research Center and Stanford University are testing with robots, algorithms, and highly advanced building materials.
Zero mass exploration
“The concept of zero mass exploration is rooted in self-replicating machines, an engineering concept John von Neumann conceived in the 1940s,” says Kenneth C. Cheung, a NASA Ames researcher. He was involved in the new study, published recently in Science Robotics, covering self-reprogrammable metamaterials—materials that do not exist in nature and have the ability to change their configuration on their own. “It’s the idea that an engineering system can not only replicate, but sustain itself in the environment,” he adds.
Based on this concept, Robert A. Freitas Jr. in the 1980s proposed a self-replicating interstellar spacecraft called the Von Neumann probe that would visit a nearby star system, find resources to build a copy of itself, and send this copy to another star system. Rinse and repeat.
“The technology of reprogrammable metamaterials [has] advanced to the point where we can start thinking about things like that. It can’t make everything we need yet, but it can make a really big chunk of what we need,” says Christine E. Gregg, a NASA Ames researcher and the lead author of the study.
Building blocks for space
One of the key problems with Von Neumann probes was that taking elements found in the soil on alien worlds and processing them into actual engineering components would be resource-intensive and require huge amounts of energy. The NASA Ames team solved that by using prefabricated “voxels”—standardized, reconfigurable building blocks.
The system derives its operating principles from the way nature works on a very fundamental level. “Think how biology, one of the most scalable systems we have ever seen, builds stuff,” says Gregg. “It does that with building blocks. There are on the order of 20 amino acids which your body uses to make proteins to make 200 different types of cells and then combines trillions of those cells to make organs as complex as my hair and my eyes. We are using the same strategy,” she adds.
To demo this technology, they built a set of 256 of those blocks—extremely strong 3D structures made with a carbon-fiber-reinforced polymer called StattechNN-40CF. Each block had fastening interfaces on every side that could be used to reversibly attach them to other blocks and form a strong truss structure.
A 3×3 truss structure made with these voxels had an average failure load of 900 Newtons, which means it could hold over 90 kilograms despite being incredibly light itself (its density is just 0.0103 grams per cubic centimeter). “We took these voxels out in backpacks and built a boat, a shelter, a bridge you could walk on. The backpacks weighed around 18 kilograms. Without technology like that, you wouldn’t even think about fitting a boat and a bridge in a backpack,” says Cheung. “But the big thing about this study is that we implemented this reconfigurable system autonomously with robots,” he adds.
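As a quick sanity check on those numbers, the reported failure load converts directly to a supported mass under Earth gravity; this is simple arithmetic on the figures quoted above, not an analysis from the study itself.

```python
# Convert the 3x3 truss's average failure load (900 N) into an equivalent
# supported mass under Earth gravity. A simple check of the figure quoted
# above, not a calculation from the paper.

failure_load_n = 900
g_m_s2 = 9.81                     # standard gravitational acceleration

supported_mass_kg = failure_load_n / g_m_s2
print(f"{supported_mass_kg:.0f} kg")    # about 92 kg, consistent with "over 90 kilograms"
```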