Author name: Mike M.


NASA selects SpaceX to launch a gamma-ray telescope into an unusual orbit

Plane change —

The Falcon 9 rocket is pretty much the only rocket available to launch this mission.


Artist’s illustration of the COSI spacecraft.

A small research satellite designed to study the violent processes behind the creation and destruction of chemical elements will launch on a SpaceX Falcon 9 rocket in 2027, NASA announced Tuesday.

The Compton Spectrometer and Imager (COSI) mission features a gamma-ray telescope that will scan the sky to study gamma rays emitted by the explosions of massive stars at the end of their lives. These supernova explosions drive reactions that fuse atomic nuclei into heavier elements, a process called nucleosynthesis.

Using data from COSI, scientists will map where these elements are forming in the Milky Way galaxy. COSI’s observations will also yield new insights into the annihilation of positrons, the antimatter counterparts of electrons, which appear to originate from the center of the galaxy. Another goal for COSI will be to rapidly report the location of short gamma-ray bursts, unimaginably violent explosions that flash and then fade in just a couple of seconds. These bursts are likely caused by merging neutron stars.

The COSI mission will be sensitive to so-called soft gamma rays, a relatively unexplored segment of the electromagnetic spectrum. The telescope is based on a design scientists have flown on research balloon flights.

NASA selected COSI in 2021 in a competitive funding process to become the next mission in the agency’s Explorers program. Earlier this year, NASA formally approved the mission to proceed into development for launch in August 2027, with an overall budget in the range of $267 million to $294 million, according to NASA budget documents.

From Florida to the equator

COSI is a relatively small spacecraft, built by Northrop Grumman and weighing less than a ton, but it will ride alone into orbit on top of a Falcon 9 rocket. That’s because COSI will operate in an unusual orbit about 340 miles (550 kilometers) over the equator, an orbit chosen to avoid interference from radiation over the South Atlantic Anomaly, the region where the inner Van Allen radiation belt comes closest to Earth’s surface.

After taking off from Cape Canaveral, Florida, SpaceX’s Falcon 9 will deliver COSI directly into its operational orbit, firing its upper stage in a sideways maneuver at the equator to cancel out the orbit’s inclination. This type of maneuver, called a plane change, takes a lot of energy, or delta-V, on par with the delta-V required to put a heavier satellite into a much higher orbit.
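
To get a feel for why the plane change is so expensive, here is a rough back-of-the-envelope sketch. The 28.5-degree starting inclination, matching Cape Canaveral’s latitude, is an assumption for illustration; the actual mission profile may differ.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000        # mean Earth radius, m

altitude_m = 550_000           # COSI's approximate orbital altitude
inclination_change_deg = 28.5  # assumed initial inclination to zero out

r = R_EARTH + altitude_m
v = math.sqrt(MU_EARTH / r)  # speed in a circular orbit at that altitude

# For a pure plane change, delta-v = 2 * v * sin(delta_i / 2)
delta_v = 2 * v * math.sin(math.radians(inclination_change_deg) / 2)

print(f"orbital speed: {v:.0f} m/s")              # roughly 7,600 m/s
print(f"plane-change delta-v: {delta_v:.0f} m/s")  # roughly 3,700 m/s
```

That works out to nearly 4 kilometers per second just to rotate the orbital plane, which helps explain why even a sub-ton satellite like COSI needs a dedicated Falcon 9.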


File photo of a Falcon 9 launch on May 6 from Cape Canaveral Space Force Station, Florida.

SpaceX

NASA awarded SpaceX a firm-fixed-price contract valued at $69 million to launch the COSI mission. That is about a 37 percent increase over the price NASA paid SpaceX in a 2019 contract to launch the similarly sized IXPE X-ray telescope into a similar orbit. The higher price is at least partially explained by inflation.

The space agency didn’t have much of a decision to make in the COSI launch contract. The Falcon 9 is the only rocket certified by NASA that can launch a satellite with the mass of COSI into its equatorial orbit.

In the next couple of years, NASA hopes United Launch Alliance’s Vulcan rocket and Blue Origin’s New Glenn launcher will be in the mix to compete for launch contracts for missions like COSI. All of ULA’s remaining Atlas V rockets are already booked by other customers.



“RegreSSHion” vulnerability in OpenSSH gives attackers root on Linux

RELAPSE —

Full system compromise possible by peppering servers with thousands of connection requests.


Researchers have warned of a critical vulnerability affecting the OpenSSH networking utility that can be exploited to give attackers complete control of Linux and Unix servers with no authentication required.

The vulnerability, tracked as CVE-2024-6387, allows unauthenticated remote code execution with root system rights on Linux systems that are based on glibc, an open source implementation of the C standard library. The vulnerability is the result of a code regression introduced in 2020 that reintroduced CVE-2006-5051, a vulnerability that was fixed in 2006. With thousands, if not millions, of vulnerable servers populating the Internet, this latest vulnerability could pose a significant risk.

Complete system takeover

“This vulnerability, if exploited, could lead to full system compromise where an attacker can execute arbitrary code with the highest privileges, resulting in a complete system takeover, installation of malware, data manipulation, and the creation of backdoors for persistent access,” wrote Bharat Jogi, the senior director of threat research at Qualys, the security firm that discovered it. “It could facilitate network propagation, allowing attackers to use a compromised system as a foothold to traverse and exploit other vulnerable systems within the organization.”

The risk is in part driven by the central role OpenSSH plays in virtually every internal network connected to the Internet. It provides a channel for administrators to connect to protected devices remotely or from one device to another inside the network. OpenSSH’s support for multiple strong encryption protocols, its integration into virtually all modern operating systems, and its position at the very perimeter of networks further drive its popularity.

Besides the ubiquity of vulnerable servers populating the Internet, CVE-2024-6387 also provides a potent means for executing malicious code with the highest privileges, with no authentication required. The flaw stems from faulty management of a signal handler, code that responds to potentially serious events such as division-by-zero attempts; the handler in question calls glibc functions that aren’t safe to run asynchronously. When a client device initiates a connection but doesn’t successfully authenticate itself within an allotted time (120 seconds by default), vulnerable OpenSSH systems call what’s known as a SIGALRM handler asynchronously. The flaw resides in sshd, the main OpenSSH engine. Qualys has named the vulnerability regreSSHion.

The severity of the threat posed by exploitation is significant, but various factors are likely to prevent it from being mass exploited, security experts said. For one, the attack can take as long as eight hours to complete and require as many as 10,000 authentication steps, Stan Kaminsky, a researcher at security firm Kaspersky, said. The delay results from a defense known as address space layout randomization, which changes the memory addresses where executable code is stored to thwart attempts to run malicious payloads.

Other limitations apply. Attackers must also know the specific OS running on each targeted server. So far, no one has found a way to exploit 64-bit systems, since the number of available memory addresses is exponentially higher than on 32-bit systems. Further mitigating the chances of success, defenses that limit the number of connection requests coming into a vulnerable system, such as the rate limiting used to blunt denial-of-service attacks, will prevent exploitation attempts from succeeding.

All of those limitations will likely prevent CVE-2024-6387 from being mass exploited, researchers said, but there’s still the risk of targeted attacks that pepper a specific network of interest with authentication attempts over a matter of days until achieving code execution. To cover their tracks, attackers could spread requests across a large number of IP addresses in a fashion similar to password-spraying attacks. In this way, attackers could target a handful of vulnerable networks until one or more of the attempts succeeded.

The vulnerability affects the following:

  • OpenSSH versions earlier than 4.4p1 are vulnerable to this signal handler race condition unless they are patched for CVE-2006-5051 and CVE-2008-4109.
  • Versions from 4.4p1 up to, but not including, 8.5p1 are not vulnerable due to a transformative patch for CVE-2006-5051, which made a previously unsafe function secure.
  • The vulnerability resurfaces in versions from 8.5p1 up to, but not including, 9.8p1 due to the accidental removal of a critical component in a function.

Anyone running a vulnerable version should update as soon as practicable.
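
One way to check what a given server is running: SSH servers announce their software version in a plaintext banner before any cryptographic handshake, so a simple socket read is enough. A minimal sketch follows; the hostname is a placeholder, and you should only probe servers you are authorized to test.

```python
import socket

def openssh_banner(host: str, port: int = 22, timeout: float = 5.0) -> str:
    """Read the SSH identification banner, e.g. 'SSH-2.0-OpenSSH_9.6p1'."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.recv(256).decode("ascii", errors="replace").strip()

# Placeholder host; compare the reported version against the ranges above.
print(openssh_banner("ssh.example.com"))
```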



Call the ant doctor: Amputation gives injured ants a leg up on infections


Scientists have observed wound care and selective amputation in Florida carpenter ants.

Florida carpenter ants (Camponotus floridanus) selectively treat the wounded limbs of their fellow ants, according to a new paper published in the journal Current Biology. Depending on the location of the injury, the ants either lick the wounds to clean them or chew off the affected limb to keep infection from spreading. The treatment is surprisingly effective, with survival rates of around 90–95 percent for amputee ants.

“When we’re talking about amputation behavior, this is literally the only case in which a sophisticated and systematic amputation of an individual by another member of its species occurs in the animal kingdom,” said co-author Erik Frank, a behavioral ecologist at the University of Würzburg in Germany. “The fact that the ants are able to diagnose a wound, see if it’s infected or sterile, and treat it accordingly over long periods of time by other individuals—the only medical system that can rival that would be the human one.”

Frank has been studying various species of ants for many years. Late last year, he co-authored a paper detailing how Matabele ants (Megaponera analis) south of the Sahara can tell if an injured comrade’s wound is infected or not, thanks to chemical changes in the hydrocarbon profile of the ant cuticle when a wound gets infected. These ants only eat termites, but termites have powerful jaws and use them to defend against predators, so there is a high risk of injury to hunting ants.

If an infected wound is identified, the ants then treat said wound with antibiotics produced by a special gland on the side of the thorax (the metapleural gland). Those secretions are made of some 112 components, half of which have antimicrobial properties. Frank et al.’s experiments showed that applying these secretions reduced the mortality rate of injured ants by 90 percent, and future research could lead to the discovery of new antibiotics suitable for treating humans. (This work was featured in an episode of a recent Netflix nature documentary, Life on Our Planet.)

Amputation in Camponotus maculatus. Credit: Danny Buffat.

Those findings led Frank to ask whether the Matabele ant is unique in its ability to detect and treat infected wounds, so he turned his attention to the Florida carpenter ant. These reddish-brown ants nest in rotting wood and can be fiercely territorial, defending their homes from rival ant colonies. That combat comes with a high risk of injury. Florida carpenter ants lack a metapleural gland, however, so Frank et al. wondered how this species treats injured comrades. They conducted a series of experiments to find out.

Frank et al. drew their subjects from colonies of lab-raised ants (produced by queens collected during 2017 fieldwork in Florida), and ants targeted for injury were color-tagged with acrylic paint two days before each experiment. Selective injuries to the ankle-like tibias and the femurs (thighs) were made with sterile scissors, and cultivated strains of the bacterium Pseudomonas aeruginosa were used to infect some of those wounds, while others were left uninfected as a control. The team captured the treatment behavior of the other ants on video and later analyzed that footage. They also took CT scans of the ants’ legs to learn more about their anatomical structure.



Tesla posts disappointing production and sales numbers for Q2 2024

line goes up —

Sales fell by 5 percent, with production cut by more.


For some time now, Tesla has produced more cars than it has sold. This past quarter, that changed.

Toru Hanai/Bloomberg via Getty Images

Tesla published its quarterly production and delivery numbers yesterday afternoon, and anyone hoping that the last three months marked a return to growth will be disappointed. For Q2 2024, the automaker built 410,831 electric vehicles, a 14.4 percent decrease from Q2 2023. The drop in sales wasn’t quite as bad: Tesla delivered 443,956 EVs in the quarter, a 4.8 percent decline year over year.

After several boom years, even the hype-generating powers of Tesla CEO Elon Musk weren’t able to stave off the realities of a small and stagnant product line and a brutal price war, particularly in China. The first quarter of 2024 saw Tesla’s deliveries fall by 8.5 percent, the first time this number hadn’t gone up since 2020.

Later in April, we saw the effect on Tesla’s balance sheet. Profits fell by more than half, and profit margins slumped to just 5.5 percent, barely half the industry average.

In fact, there’s evidence that Musk’s vast reach through social media may be directly harming the Tesla brand at this point. A poll of more than 7,500 New York Times readers, collected earlier this year, revealed that many had a problem being associated with Tesla and Musk, with one comparing driving a Tesla to “a giant red MAGA hat.”

There may be a bright spot in the production and delivery numbers. Tesla delivered 422,405 Models 3 and Y between April and June, but it only built 386,576 at its factories in the US, Germany, and China. For many quarters, Tesla has been building more cars than it has delivered, raising questions and inspiring open source satellite image analysts to go looking for inventory from space. Now, perhaps, the automaker is clearing some of that excess inventory and matching production to more realistic expectations of demand.

In a brief text note to investors, Tesla said its solar energy and storage division had a bumper quarter, deploying 9.4 GWh of energy storage. That performance could see the division contribute up to 20 percent of Tesla’s total revenues for the quarter.

Musk’s reaction to the decline in Tesla’s automotive sales business has been to pivot. Perhaps bored of the realities of a low-margin industry surrounded by cutthroat rivals, the erratic CEO now says the future of the company will be humanoid robots, based on annual projections that bear little to no resemblance to objective reality as we know it.

Tesla investors obviously don’t mind; the company’s share price has risen by more than 8 percent since the market opened at 9:30 am.



Bleeding subscribers, cable companies force their way into streaming

Enter NOW TV Latino —

Companies like Charter brought about the streaming industry they now want to join.

A person's hand aiming a cable TV remote control at a TV screen

Getty Images | stefanamer

It’s clear that streaming services are the present and future of video distribution. But that doesn’t mean that cable companies are ready to give up on your monthly dollars.

A sign of this is Comcast, the US’ second-biggest cable company, debuting a new streaming service today. Comcast already had an offering that let subscribers stream its Xfinity cable live channels and access some titles on demand. NOW TV Latino differs in being a separate, additional streaming service that people can subscribe to independently of Xfinity cable for $10 per month.

However, unlike streaming services like Netflix or Max, you can only subscribe to NOW TV Latino if Xfinity is sold in your area. NOW TV Latino subscriptions include the ability to stream live TV from Spanish-language channels that Xfinity offers, like Sony Cine and ViendoMovies. And because Comcast owns NBCUniversal, people who subscribe to NOW TV Latino get a free subscription to Peacock with commercials, which usually costs $6/month.

From cable to streaming

In addition to NOW TV Latino, recent Comcast efforts to stay relevant in a TV and movie distribution world dominated by online streaming have centered on bundling. As streaming giants like Netflix struggle with customer churn, bundling is the current favored tactic to keep customers subscribed for longer.

Comcast is selling NOW TV Latino as a separate service, but it’s effectively a Peacock bundle. The cable giant is also selling the streaming service bundled with its cable service or with its recently released streaming bundle, which combines ad-supported versions of Comcast’s Peacock, Netflix, and Apple TV+ for $15/month.

Bundling is now popular among streaming service providers, but cable companies pioneered the strategy, which can overwhelm customers with confusing rates and services that some may not need. As Comcast CEO Brian Roberts said in May while announcing the aforementioned Peacock/Netflix/Apple TV+ bundle: “We’ve been bundling video successfully and creatively for 60 years, and so this is the latest iteration of that.”

Bleeding customers

The cable industry has been in a nose-dive for years. Comcast’s Q1 2024 earnings report showed its cable business losing 487,000 subscribers. The cable giant ended 2022 with 16,142,000 subscribers; in January, it had 13,600,000.

Charter, the only US cable company bigger than Comcast, is rapidly losing pay-TV subscribers, too. In its Q1 2024 earnings report, Charter reported losing 405,000 subscribers, including business accounts. It ended 2022 with 15,147,000 subscribers; at the end of March, it had 13,717,000.

And, like Comcast, Charter is looking to streaming bundles to keep its pay-TV business alive and to compete with the likes of YouTube TV and Hulu With Live TV.

In April, Charter also announced a Spanish-language-focused streaming service, TV Stream Latino ($25/month), but in traditional cable fashion, one must subscribe to Charter’s Spectrum Internet to get it. Charter also sells the ability to stream live TV from some of the channels that its cable service carries.

In 2022, Charter and Comcast formed a joint venture, Xumo, that focuses on streaming but includes cable industry spins, like set-top boxes. The companies are even trying to get a piece of the money made from smart TV operating systems (OSes), with budget brands such as Hisense now selling TVs with Xumo OS.

It’s a curious time, as cable TV providers scramble to be part of an industry created in reaction to business practices that many customers viewed as anti-consumer. Meanwhile, the streaming industry is adopting some of these same practices, like commercials and incessant price hikes, to establish profitability. And some smaller streaming players say it’s nearly impossible to compete as the streaming industry’s top players are taking form and, in some cases, collaborating.

But after decades of discouraging many subscribers with few alternatives, it will be hard for former or current cable customers to view firms like Comcast and Charter as trustworthy competitive streaming providers.



Meta defends charging fee for privacy amid showdown with EU


Meta continues to hit walls with its heavily scrutinized plan to comply with the European Union’s strict online competition law, the Digital Markets Act (DMA), by offering Facebook and Instagram subscriptions as an alternative for privacy-inclined users who want to opt out of ad targeting.

Today, the European Commission (EC) announced preliminary findings that Meta’s so-called “pay or consent” or “pay or OK” model—which gives users a choice to either pay for access to its platforms or give consent to collect user data to target ads—is not compliant with the DMA.

According to the EC, Meta’s advertising model violates the DMA in two ways. First, it “does not allow users to opt for a service that uses less of their personal data but is otherwise equivalent to the ‘personalized ads-based service.’” And second, it “does not allow users to exercise their right to freely consent to the combination of their personal data,” the press release said.

Now, Meta will have a chance to review the EC’s evidence and defend its policy, with today’s findings kicking off a process that will take months. The EC’s investigation is expected to conclude next March. Thierry Breton, the commissioner for the internal market, said in the press release that the preliminary findings represent “another important step” to ensure Meta’s full compliance with the DMA.

“The DMA is there to give back to the users the power to decide how their data is used and ensure innovative companies can compete on equal footing with tech giants on data access,” Breton said.

A Meta spokesperson told Ars that Meta plans to fight the findings—which could trigger fines up to 10 percent of the company’s worldwide turnover, as well as fines up to 20 percent for repeat infringement if Meta loses.

Meta continues to claim that its “subscription for no ads” model was “endorsed” by the highest court in Europe, the Court of Justice of the European Union (CJEU), last year.

“Subscription for no ads follows the direction of the highest court in Europe and complies with the DMA,” Meta’s spokesperson said. “We look forward to further constructive dialogue with the European Commission to bring this investigation to a close.”

However, some critics have noted that the supposed endorsement was not an official part of the ruling and that particular case was not regarding DMA compliance.

The EC agreed that more talks were needed, writing in the press release, “the Commission continues its constructive engagement with Meta to identify a satisfactory path towards effective compliance.”



NASA orders more tests on Starliner, but says crew isn’t stranded in space


Boeing’s Starliner spacecraft is seen docked at the International Space Station on June 13.

NASA and Boeing officials pushed back Friday on headlines that the commercial Starliner crew capsule is stranded at the International Space Station but said they need more time to analyze data before formally clearing the spacecraft for undocking and reentry.

Two NASA astronauts, commander Butch Wilmore and pilot Suni Williams, will spend at least a few more weeks on the space station as engineers on the ground conduct thruster tests to better understand issues with the Starliner propulsion system in orbit. Wilmore and Williams launched June 5 aboard an Atlas V rocket and docked at the station the next day, completing the first segment of Starliner’s first test flight with astronauts.

NASA managers originally planned for the Starliner spacecraft to remain docked at the space station for at least eight days, although they left open the possibility of a mission extension. The test flight is now likely to last at least a month and a half, and perhaps longer, as engineers wrestle with helium leaks and thruster glitches on Starliner’s service module.

Batteries on this Starliner spacecraft were initially only certified for a 45-day mission duration, but NASA officials said they are looking at extending the limit after confirming the batteries are functioning well.

“We have the luxury of time,” said Ken Bowersox, associate administrator for NASA’s space operations mission directorate. “We’re still in the middle of a test mission. We’re still pressing forward.”

Previously, NASA and Boeing officials delayed Starliner’s reentry and landing from mid-June, then from June 26, and now they have bypassed a potential landing opportunity in early July. Last week, NASA said in a statement that the agency’s top leadership will meet to formally review the readiness of Starliner for reentry, something that wasn’t part of the original plan.

“We’re not stuck on ISS”

Steve Stich, manager of NASA’s commercial crew program, said Friday that he wanted to clear up “misunderstandings” that led to headlines claiming the Starliner spacecraft was stuck or stranded at the space station.

“I want to make it very clear that Butch and Suni are not stranded in space,” Stich said. “Our plan is to continue to return them on Starliner and return them home at the right time. We have a little bit more work to do to get there for the final return, but they’re safe on (the) space station.”

With Starliner docked, the space station currently hosts three different crew spacecraft, including SpaceX’s Crew Dragon and Russia’s Soyuz. There are no serious plans under consideration to bring Wilmore and Williams home on a different spacecraft.

“Obviously, we have the luxury of having multiple vehicles, and we work contingency plans for lots of different cases, but right now, we’re really focused on returning Butch and Suni on Starliner,” Stich said.

“We’re not stuck on the ISS,” said Mark Nappi, Boeing’s vice president in charge of the Starliner program. “It’s pretty painful to read the things that are out there. We’ve gotten a really good test flight that’s been accomplished so far, and it’s being viewed rather negatively.”

Stich said NASA officials should have “more frequent interaction” with reporters to fill in gaps of information on the Starliner test flight. NASA’s written updates are not always timely, and often lack details and context.

NASA officials have cleared the Starliner spacecraft for an emergency return to Earth if astronauts need to evacuate the space station for safety or medical reasons. But NASA hasn’t yet approved Starliner for reentry and landing under “nominal” conditions.

“When it is a contingency situation, we’re ready to put the crew on the spacecraft and bring them home as a lifeboat,” Bowersox said. “For the nominal entry, we want to look at the data more before we make the final call to put the crew aboard the vehicle, and it’s a serious enough call that we’ll bring the senior management team together (for approval).”



SCOTUS kills Chevron deference, giving courts more power to block federal rules


Supreme Court Chief Justice John Roberts and Associate Justice Sonia Sotomayor arrive for President Joe Biden’s State of the Union address on March 7, 2024, in Washington, DC.

Getty Images | Win McNamee

The US Supreme Court today overturned the 40-year-old Chevron precedent in a ruling that limits the regulatory authority of federal agencies. The 6-3 decision in Loper Bright Enterprises v. Raimondo will make it harder for agencies such as the Federal Communications Commission and Environmental Protection Agency to issue regulations without explicit authorization from Congress.

Chief Justice John Roberts delivered the opinion of the court and was joined by Clarence Thomas, Samuel Alito, Neil Gorsuch, Brett Kavanaugh, and Amy Coney Barrett. Justice Elena Kagan filed a dissenting opinion that was joined by Sonia Sotomayor and Ketanji Brown Jackson.

Chevron gave agencies leeway to interpret ambiguous laws as long as the agency’s conclusion was reasonable. But the Roberts court said that a “statutory ambiguity does not necessarily reflect a congressional intent that an agency, as opposed to a court, resolve the resulting interpretive question.”

“Perhaps most fundamentally, Chevron’s presumption is misguided because agencies have no special competence in resolving statutory ambiguities. Courts do,” the ruling said. “The Framers anticipated that courts would often confront statutory ambiguities and expected that courts would resolve them by exercising independent legal judgment. Chevron gravely erred in concluding that the inquiry is fundamentally different just because an administrative interpretation is in play.”

This is especially critical “when the ambiguity is about the scope of an agency’s own power—perhaps the occasion on which abdication in favor of the agency is least appropriate,” the court said. The Roberts opinion also said the Administrative Procedure Act “specifies that courts, not agencies, will decide ‘all relevant questions of law’ arising on review of agency action—even those involving ambiguous laws,” and “prescribes no deferential standard for courts to employ in answering those legal questions.”

Kagan: SCOTUS majority now “administrative czar”

The Loper Bright case involved a challenge to a rule enforced by the National Marine Fisheries Service. Lower courts applied the Chevron framework when ruling in favor of the government.

Kagan’s dissent said that Chevron “has become part of the warp and woof of modern government, supporting regulatory efforts of all kinds—to name a few, keeping air and water clean, food and drugs safe, and financial markets honest.”

Ambiguities should generally be resolved by agencies instead of courts, Kagan wrote. “This Court has long understood Chevron deference to reflect what Congress would want, and so to be rooted in a presumption of legislative intent. Congress knows that it does not—in fact cannot—write perfectly complete regulatory statutes. It knows that those statutes will inevitably contain ambiguities that some other actor will have to resolve, and gaps that some other actor will have to fill. And it would usually prefer that actor to be the responsible agency, not a court,” the dissent said.

The Roberts court ruling “flips the script: It is now ‘the courts (rather than the agency)’ that will wield power when Congress has left an area of interpretive discretion,” Kagan wrote. “A rule of judicial humility gives way to a rule of judicial hubris.”

Kagan wrote that the court in recent years “has too often taken for itself decision-making authority Congress assigned to agencies,” substituting “its own judgment on workplace health for that of the Occupational Safety and Health Administration; its own judgment on climate change for that of the Environmental Protection Agency; and its own judgment on student loans for that of the Department of Education.”

Apparently deciding those previous decisions were “too piecemeal,” the court “majority today gives itself exclusive power over every open issue—no matter how expertise-driven or policy-laden—involving the meaning of regulatory law,” Kagan wrote. “As if it did not have enough on its plate, the majority turns itself into the country’s administrative czar. It defends that move as one (suddenly) required by the (nearly 80-year-old) Administrative Procedure Act. But the Act makes no such demand. Today’s decision is not one Congress directed. It is entirely the majority’s choice.”

The unanimous 1984 SCOTUS ruling in Chevron U.S.A. Inc. v. Natural Resources Defense Council involved the Environmental Protection Agency and air pollution rules. Even with Chevron deference in place, the EPA faced limits to its regulatory power. A Supreme Court ruling earlier this week imposed a stay on rules meant to limit the spread of ozone-generating pollutants across state lines.

Consumer advocacy group Public Knowledge criticized today’s ruling, saying that it “grounds judicial superiority over the legislative and executive branches by declaring that the Constitution requires judges to unilaterally decide the meaning of statutes written by Congress and entrusted to agencies.”

Public Knowledge Senior VP Harold Feld argued that after today’s ruling, “no consumer protection is safe. Even if Congress can write with such specificity that a court cannot dispute its plain meaning, Congress will need to change the law for every new technology and every change in business practice. Even at the best of times, it would be impossible for Congress to keep up. Given the dysfunction of Congress today, we are at the mercy of the whims of the Imperial Court.”



The world’s toughest race starts Saturday, and it’s delightfully hard to call this year

Is it Saturday yet? —

Setting the stage for what could be a wild ride across France.


The peloton passing through a sunflower field during stage eight of the 110th Tour de France in 2023.

David Ramos/Getty Images

Most readers probably did not anticipate seeing a Tour de France preview on Ars Technica, but here we are. Cycling is a huge passion of mine and several other staffers, and this year, a ton of intrigue surrounds the race, which has a fantastic route. So we’re here to spread Tour fever.

The three-week race starts Saturday, paradoxically in the Italian region of Tuscany. Usually, there is a dominant rider, or at most two, and a clear sense of who is likely to win the demanding race. But this year, due to rider schedules, a terrible crash in early April, and new contenders, there is more uncertainty than usual. A solid case could be made for at least four riders to win this year’s Tour de France.

For people who aren’t fans of pro road cycling—which has to be at least 99 percent of the United States—there’s a great series on Netflix called Unchained to help get you up to speed. The second season, just released, covers last year’s Tour de France and introduces you to most of the protagonists in the forthcoming edition. If this article sparks your interest, I recommend checking it out.

Anyway, for those who are cycling curious, I want to set the stage for this year’s race by saying a little bit about the four main contenders, from most likely to least likely to win, and provide some of the backstory to what could very well be a dramatic race this year.

Tadej Pogačar


Tadej Pogačar of Slovenia and UAE Team Emirates won the Giro d’Italia in May.

Tim de Waele/Getty Images

  • Slovenia
  • 25 years old
  • UAE Team Emirates
  • Odds: -190

Pogačar burst onto the scene in 2019 at the very young age of 20 by finishing third in the Vuelta a España, one of the three grand tours of cycling. He then went on to win the 2020 and 2021 Tours de France, first by surprising fellow countryman Primož Roglič (more on him below) in 2020 and then utterly dominating in 2021. Given his youth, it seemed he would be the premier grand tour competitor for the next decade.

But then another slightly older rider, a teammate of Roglič’s named Jonas Vingegaard, emerged in 2022 and won the next two races. Last year, in fact, Vingegaard cracked Pogačar by 7 minutes and 29 seconds in the Tour, a huge winning margin, especially for two riders of relatively close talent. This established Vingegaard as the alpha male of grand tour cyclists, having proven himself a better climber and time trialist than Pogačar, especially in the highest and hardest stages.

So this year, Pogačar decided to change up his strategy. Instead of focusing on the Tour de France, Pogačar participated in the first grand tour of the season, the Giro d’Italia, which occurred in May. He likely did so for a couple of reasons. First of all, he almost certainly received a generous appearance fee from the Italian organizers. And secondly, riding the Giro would give him a ready excuse for not beating Vingegaard in France.

Why is this? Because there are just five weeks between the end of the Giro and the start of the Tour. So if a rider peaks for the Giro and exerts himself in winning the race, it is generally thought that he can’t arrive at the Tour in winning form. He will be a few percent off, not having ideal preparation.

Predictably, Pogačar smashed the lesser competition at the Giro and won the race by 9 minutes and 56 seconds. Because he was so far ahead, he was able to take the final week of the race a bit easier. The general thinking in the cycling community is that Pogačar is arriving at the Tour in excellent but not peak form. But given everything else that has happened so far this season, the bettors believe that will be enough for him to win. Maybe.
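
For readers unfamiliar with American-style betting odds like the -190 listed above: negative odds give the stake required to win $100, which implies a win probability of |odds| / (|odds| + 100). A quick sketch of the conversion:

```python
def implied_probability(american_odds: int) -> float:
    """Convert American odds to the bookmaker's implied win probability."""
    if american_odds < 0:               # favorite: stake |odds| to win 100
        return -american_odds / (-american_odds + 100)
    return 100 / (american_odds + 100)  # underdog: stake 100 to win odds

print(f"{implied_probability(-190):.1%}")  # -190 works out to about 65.5%
```

So the bookmakers currently give Pogačar roughly a two-in-three chance of overall victory, before accounting for their margin.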



Monitoring and Analytics: The Eyes and Ears of Zero Trust

Welcome back to our zero trust blog series! In our previous post, we took a deep dive into API security and explored best practices for securing this critical component of modern application architectures. Today, we’re turning our attention to another essential aspect of zero trust: monitoring and analytics.

In a zero trust model, visibility is everything. With no implicit trust granted to any user, device, or application, organizations must continuously monitor and analyze all activity across their environment to detect and respond to potential threats in real-time.

In this post, we’ll explore the role of monitoring and analytics in a zero trust model, discuss the key data sources and technologies involved, and share best practices for building a comprehensive monitoring and analytics strategy.

The Role of Monitoring and Analytics in Zero Trust

In a traditional perimeter-based security model, monitoring and analytics often focus on detecting threats at the network boundary. However, in a zero trust model, the perimeter is everywhere, and threats can come from any user, device, or application, both inside and outside the organization.

To mitigate these risks, zero trust requires organizations to take a comprehensive, data-driven approach to monitoring and analytics. This involves:

  1. Continuous monitoring: Collecting and analyzing data from all relevant sources, including users, devices, applications, and infrastructure, in real-time.
  2. Behavioral analytics: Using machine learning and other advanced analytics techniques to identify anomalous or suspicious behavior that may indicate a potential threat.
  3. Automated response: Leveraging automation and orchestration tools to quickly investigate and remediate potential threats, minimizing the impact of security incidents.
  4. Continuous improvement: Using insights from monitoring and analytics to continuously refine and optimize security policies, controls, and processes.

By applying these principles, organizations can create a more proactive, adaptive security posture that can detect and respond to threats faster and more effectively than traditional approaches.
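
To make the behavioral-analytics idea concrete, here is a minimal sketch of baseline-and-deviation detection on per-user login counts. The data, names, and threshold are illustrative assumptions, not a production detector:

```python
from statistics import mean, stdev

# Illustrative baselines: each user's daily login counts over two weeks.
history = {
    "alice": [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4, 3, 4],
    "bob":   [1, 0, 2, 1, 1, 0, 1, 2, 1, 1, 0, 1, 1, 1],
}
today = {"alice": 4, "bob": 19}  # bob's spike should stand out

def anomaly_score(baseline: list, observed: int) -> float:
    """Z-score of today's count against the user's own baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return (observed - mu) / sigma if sigma else 0.0

THRESHOLD = 3.0  # flag anything more than 3 standard deviations above normal
for user, count in today.items():
    score = anomaly_score(history[user], count)
    if score > THRESHOLD:
        print(f"ALERT: {user} logged in {count} times today (z={score:.1f})")
```

Real UEBA products model far richer features (time of day, geolocation, device, peer group), but the core pattern is the same: learn a baseline per entity, then score deviations from it.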

Key Data Sources and Technologies for Zero Trust Monitoring and Analytics

To build a comprehensive monitoring and analytics strategy for zero trust, organizations must collect and analyze data from a wide range of sources, including:

  1. Identity and access management (IAM) systems: Data on user identities, roles, and permissions, as well as authentication and authorization events.
  2. Endpoint detection and response (EDR) tools: Data on device health, configuration, and activity, as well as potential threats and vulnerabilities.
  3. Network security tools: Data on network traffic, including flow logs, packet captures, and intrusion detection and prevention system (IDPS) events.
  4. Application performance monitoring (APM) tools: Data on application performance, errors, and potential security issues, such as injection attacks or data exfiltration attempts.
  5. Cloud security posture management (CSPM) tools: Data on cloud resource configurations, compliance with security policies, and potential misconfigurations or vulnerabilities.

To collect, process, and analyze this data, organizations can leverage a range of technologies, including:

  1. Security information and event management (SIEM) platforms: Centralized platforms for collecting, normalizing, and analyzing security event data from multiple sources.
  2. User and entity behavior analytics (UEBA) tools: Advanced analytics tools that use machine learning to identify anomalous or suspicious behavior by users, devices, and applications.
  3. Security orchestration, automation, and response (SOAR) platforms: Tools that automate and orchestrate security processes, such as incident response and remediation, based on predefined playbooks and workflows.
  4. Big data platforms: Scalable platforms for storing, processing, and analyzing large volumes of structured and unstructured security data, such as Hadoop, Spark, and Elasticsearch.

By leveraging these data sources and technologies, organizations can build a comprehensive, data-driven monitoring and analytics strategy that can detect and respond to threats in real-time.
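
As a concrete illustration of the normalization step a SIEM performs, here is a minimal sketch that maps events from two hypothetical sources into one common schema. The field names on both sides are invented for illustration, not any vendor’s actual format:

```python
def normalize_iam(event: dict) -> dict:
    """Map a hypothetical IAM login event into a common schema."""
    return {
        "timestamp": event["login_time"],
        "source": "iam",
        "user": event["username"],
        "action": "login",
        "outcome": "success" if event["ok"] else "failure",
    }

def normalize_edr(event: dict) -> dict:
    """Map a hypothetical EDR process event into the same schema."""
    return {
        "timestamp": event["ts"],
        "source": "edr",
        "user": event["account"],
        "action": "process:" + event["process_name"],
        "outcome": event["verdict"],
    }

# Once every source is reduced to one schema, correlation is a simple query.
events = [
    normalize_iam({"login_time": "2024-07-01T12:00:00Z",
                   "username": "alice", "ok": False}),
    normalize_edr({"ts": "2024-07-01T12:00:05Z", "account": "alice",
                   "process_name": "powershell.exe", "verdict": "suspicious"}),
]
print([e for e in events if e["outcome"] in ("failure", "suspicious")])
```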

Best Practices for Zero Trust Monitoring and Analytics

Implementing a zero trust approach to monitoring and analytics requires a comprehensive, multi-layered strategy. Here are some best practices to consider:

  1. Identify and prioritize data sources: Identify all relevant data sources across your environment, and prioritize them based on their level of risk and criticality. Focus on collecting data from high-risk sources first, such as IAM systems, EDR tools, and critical applications.
  2. Establish a centralized logging and monitoring platform: Implement a centralized platform, such as a SIEM or big data platform, to collect, normalize, and analyze security event data from multiple sources. Ensure that the platform can scale to handle the volume and variety of data generated by a zero trust environment.
  3. Implement behavioral analytics: Leverage UEBA tools and machine learning algorithms to identify anomalous or suspicious behavior by users, devices, and applications. Focus on detecting behavior that deviates from established baselines or patterns, such as unusual login attempts, data access patterns, or network traffic.
  4. Automate incident response and remediation: Implement SOAR tools and automated playbooks to quickly investigate and remediate potential threats. Ensure that playbooks are aligned with zero trust principles, such as least privilege access and continuous verification.
  5. Continuously monitor and refine policies and controls: Use insights from monitoring and analytics to continuously refine and optimize security policies, controls, and processes. Regularly review and update policies based on changes in the threat landscape, business requirements, and user behavior.
  6. Foster a culture of continuous improvement: Encourage a culture of continuous learning and improvement across the organization. Regularly share insights and lessons learned from monitoring and analytics with stakeholders, and use them to drive ongoing enhancements to the zero trust strategy.

By implementing these best practices and continuously refining your monitoring and analytics posture, you can better protect your organization’s assets and data from the risks posed by evolving threats and changing business requirements.
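
To show what the automation piece might look like in practice, here is a sketch of a SOAR-style playbook that routes alerts to containment actions. The alert shape and the responder functions are hypothetical placeholders, not a real product’s API:

```python
def disable_account(user: str) -> None:
    """Hypothetical responder: disable the account in the IAM system."""
    print(f"[playbook] disabling account {user}")

def isolate_host(host: str) -> None:
    """Hypothetical responder: cut the device off from the network."""
    print(f"[playbook] isolating host {host}")

def open_ticket(summary: str) -> None:
    """Hypothetical responder: open an incident ticket for an analyst."""
    print(f"[playbook] ticket opened: {summary}")

def run_playbook(alert: dict) -> None:
    """Route an alert to containment steps based on type and severity."""
    if alert["type"] == "credential_abuse" and alert["severity"] >= 7:
        disable_account(alert["user"])  # least privilege: cut access first
    elif alert["type"] == "malware" and alert["severity"] >= 5:
        isolate_host(alert["host"])     # contain before investigating
    open_ticket(f'{alert["type"]}, severity {alert["severity"]}')

run_playbook({"type": "credential_abuse", "severity": 9, "user": "alice"})
```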

Conclusion

In a zero trust world, monitoring and analytics are the eyes and ears of the security organization. By continuously collecting and analyzing data from all relevant sources, organizations can detect and respond to potential threats faster and more effectively than ever before.

However, achieving effective monitoring and analytics in a zero trust model requires a commitment to leveraging the right data sources and technologies, implementing behavioral analytics and automation, and fostering a culture of continuous improvement. It also requires a shift in mindset, from a reactive, perimeter-based approach to a proactive, data-driven approach that assumes no implicit trust.

As you continue your zero trust journey, make monitoring and analytics a top priority. Invest in the tools, processes, and skills necessary to build a comprehensive monitoring and analytics strategy, and regularly assess and refine your approach to keep pace with evolving threats and business needs.

In the next post, we’ll explore the role of automation and orchestration in a zero trust model and share best practices for using these technologies to streamline security processes and accelerate incident response.

Until then, stay vigilant and keep your eyes and ears open!




Google Translate just nearly doubled its number of supported languages

Large language models —

This includes common languages like Cantonese and lesser-known ones like Manx.


The logo for PaLM 2, a Google large language model.

Google

Google announced today that it has added support for 110 new languages to Google Translate, nearly doubling the number of languages that can be translated.

The company used the PaLM 2 large language model to facilitate these additions.

In a blog post, Google Senior Software Engineer Isaac Caswell claimed that the newly added languages are spoken by more than 614 million people, or about 8 percent of the global population.

He noted that about a quarter of the languages originate in Africa, “representing our largest expansion of African languages to date.”

The blog post also went into some light detail about Google’s philosophy for choosing languages and for deciding which dialects to support:

Languages have an immense amount of variation: regional varieties, dialects, different spelling standards. In fact, many languages have no one standard form, so it’s impossible to pick a “right” variety. Our approach has been to prioritize the most commonly used varieties of each language. For example, Romani is a language that has many dialects all throughout Europe. Our models produce text that is closest to Southern Vlax Romani, a commonly used variety online. But it also mixes in elements from others, like Northern Vlax and Balkan Romani.

This update brings the total number of languages supported by Google Translate to 243, which is just the beginning of its publicized initiative to ultimately support 1,000 languages through the use of AI. You can see the full list of languages added in a help page published by Google.

By contrast, Apple Translate supports 21 languages, though that number includes both US and UK English as distinct options. Apple recently announced plans to add Hindi to its Translate app. Of course, Apple and Google take very different approaches to—and have different levels of investment in—these tools.



OpenAI’s new “CriticGPT” model is trained to criticize GPT-4 outputs

automated critic —

Research model catches bugs in AI-generated code, improving human oversight of AI.


An illustration created by OpenAI.

On Thursday, OpenAI researchers unveiled CriticGPT, a new AI model designed to identify mistakes in code generated by ChatGPT. It aims to enhance the process of making AI systems behave in ways humans want (called “alignment”) through Reinforcement Learning from Human Feedback (RLHF), which helps human reviewers make large language model (LLM) outputs more accurate.

As outlined in a new research paper called “LLM Critics Help Catch LLM Bugs,” OpenAI created CriticGPT to act as an AI assistant to human trainers who review programming code generated by the ChatGPT AI assistant. CriticGPT—based on the GPT-4 family of LLMs—analyzes the code and points out potential errors, making it easier for humans to spot mistakes that might otherwise go unnoticed. The researchers trained CriticGPT on a dataset of code samples with intentionally inserted bugs, teaching it to recognize and flag various coding errors.

The researchers found that CriticGPT’s critiques were preferred by annotators over human critiques in 63 percent of cases involving naturally occurring LLM errors and that human-machine teams using CriticGPT wrote more comprehensive critiques than humans alone while reducing confabulation (hallucination) rates compared to AI-only critiques.

Developing an automated critic

The development of CriticGPT involved training the model on a large number of inputs containing deliberately inserted mistakes. Human trainers were asked to modify code written by ChatGPT, introducing errors and then providing example feedback as if they had discovered these bugs. This process allowed the model to learn how to identify and critique various types of coding errors.
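
As a purely illustrative example of what such a tampering-and-critique pair might look like (the bug and the feedback below are invented for illustration; they are not data from OpenAI’s paper):

```python
# A hypothetical training pair: code with a deliberately inserted bug,
# plus the human-written critique a model like CriticGPT learns to produce.

original = '''
def average(xs):
    return sum(xs) / len(xs)
'''

tampered = '''
def average(xs):
    return sum(xs) / (len(xs) - 1)   # bug inserted by the trainer
'''

critique = (
    "The function divides by len(xs) - 1 instead of len(xs), which skews "
    "every result and raises ZeroDivisionError for single-element lists."
)

training_example = {"code": tampered, "reference": original, "critique": critique}
print(training_example["critique"])
```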

In experiments, CriticGPT demonstrated its ability to catch both inserted bugs and naturally occurring errors in ChatGPT’s output. The new model’s critiques were preferred by trainers over those generated by ChatGPT itself in 63 percent of cases involving natural bugs (the aforementioned statistic). This preference was partly due to CriticGPT producing fewer unhelpful “nitpicks” and generating fewer false positives, or hallucinated problems.

The researchers also created a new technique they call Force Sampling Beam Search (FSBS). This method helps CriticGPT write more detailed reviews of code. It lets the researchers adjust how thorough CriticGPT is in looking for problems, while also controlling how often it might make up issues that don’t really exist. They can tweak this balance depending on what they need for different AI training tasks.

Interestingly, the researchers found that CriticGPT’s capabilities extend beyond just code review. In their experiments, they applied the model to a subset of ChatGPT training data that had previously been rated as flawless by human annotators. Surprisingly, CriticGPT identified errors in 24 percent of these cases—errors that were subsequently confirmed by human reviewers. OpenAI thinks this demonstrates the model’s potential to generalize to non-code tasks and highlights its ability to catch subtle mistakes that even careful human evaluation might miss.

Despite its promising results, like all AI models, CriticGPT has limitations. The model was trained on relatively short ChatGPT answers, which may not fully prepare it for evaluating longer, more complex tasks that future AI systems might tackle. Additionally, while CriticGPT reduces confabulations, it doesn’t eliminate them entirely, and human trainers can still make labeling mistakes based on these false outputs.

The research team acknowledges that CriticGPT is most effective at identifying errors that can be pinpointed in one specific location within the code. However, real-world mistakes in AI outputs can often be spread across multiple parts of an answer, presenting a challenge for future iterations of the model.

OpenAI plans to integrate CriticGPT-like models into its RLHF labeling pipeline, providing its trainers with AI assistance. For OpenAI, it’s a step toward developing better tools for evaluating outputs from LLM systems that may be difficult for humans to rate without additional support. However, the researchers caution that even with tools like CriticGPT, extremely complex tasks or responses may still prove challenging for human evaluators—even those assisted by AI.
