Author name: DJ Henderson

SpaceX’s unmatched streak of perfection with the Falcon 9 rocket is over

Numerous pieces of ice fell off the second stage of the Falcon 9 rocket during its climb into orbit from Vandenberg Space Force Base, California.

SpaceX

A SpaceX Falcon 9 rocket suffered an upper stage engine failure and deployed a batch of Starlink Internet satellites into a perilously low orbit after launch from California Thursday night, the first blemish on the workhorse launcher’s record in more than 300 missions since 2016.

Elon Musk, SpaceX’s founder and CEO, posted on X that the rocket’s upper stage engine failed when it attempted to reignite nearly an hour after the Falcon 9 lifted off from Vandenberg Space Force Base, California, at 7:35 pm PDT (02:35 UTC).

Frosty evidence

After departing Vandenberg to begin SpaceX’s Starlink 9-3 mission, the rocket’s reusable first stage booster propelled the Starlink satellites into the upper atmosphere, then returned to Earth for an on-target landing on a recovery ship parked in the Pacific Ocean. A single Merlin Vacuum engine on the rocket’s second stage fired for about six minutes to reach a preliminary orbit.

A few minutes after liftoff of SpaceX’s Starlink 9-3 mission, veteran observers of SpaceX launches noticed an unusual build-up of ice around the top of the Merlin Vacuum engine, which consumes a propellant mixture of super-chilled kerosene and cryogenic liquid oxygen. The liquid oxygen is stored at a temperature of several hundred degrees below zero.

Numerous chunks of ice fell away from the rocket as the upper stage engine powered into orbit, but the Merlin Vacuum, or M-Vac, engine appeared to complete its first burn as planned. A leak in the oxidizer system or a problem with insulation could lead to ice accumulation, although the exact cause, and its possible link to the engine malfunction later in flight, will be the focus of SpaceX’s investigation into the failure.

A second burn with the upper stage engine was supposed to raise the perigee, or low point, of the rocket’s orbit well above the atmosphere before releasing 20 Starlink satellites to continue climbing to their operational altitude with their own propulsion.

“Upper stage restart to raise perigee resulted in an engine RUD for reasons currently unknown,” Musk wrote in an update two hours after the launch. RUD (rapid unscheduled disassembly) is a term of art in rocketry that usually signifies a catastrophic or explosive failure.

“Team is reviewing data tonight to understand root cause,” Musk continued. “Starlink satellites were deployed, but the perigee may be too low for them to raise orbit. Will know more in a few hours.”

Telemetry from the Falcon 9 rocket indicated it released the Starlink satellites into an orbit with a perigee just 86 miles (138 kilometers) above Earth, roughly 100 miles (150 kilometers) lower than expected, according to Jonathan McDowell, an astrophysicist and trusted tracker of spaceflight activity. Detailed orbital data from the US Space Force was not immediately available.
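For a rough sense of what those numbers mean, here is a minimal vis-viva sketch of the perigee-raise maneuver the failed second burn was supposed to perform. The apogee and the 300-kilometer target altitude below are illustrative assumptions, not figures published by SpaceX or McDowell; only the 138-kilometer perigee comes from the reporting above.

```python
# Rough vis-viva arithmetic for a perigee-raise burn. The 300 km apogee/target
# altitude is an assumption for illustration; the 138 km perigee is the
# reported figure.
import math

MU = 3.986e14              # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3          # mean Earth radius, m

r_perigee = R_EARTH + 138e3    # reported perigee after the failure
r_apogee = R_EARTH + 300e3     # assumed apogee near the intended altitude

a = (r_perigee + r_apogee) / 2                     # semi-major axis of the current orbit
v_apogee = math.sqrt(MU * (2 / r_apogee - 1 / a))  # current speed at apogee
v_circular = math.sqrt(MU / r_apogee)              # speed for a circular orbit at apogee

print(f"delta-v to circularize at apogee: {v_circular - v_apogee:.0f} m/s")
# ~48 m/s under these assumptions -- modest for a rocket stage, but a tall
# order for the satellites' low-thrust ion engines while drag at a 138 km
# perigee is constantly bleeding off energy.
```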

Ripple effects

While ground controllers scrambled to salvage the 20 Starlink satellites, SpaceX engineers began probing what went wrong with the second stage’s M-Vac engine. For SpaceX and its customers, the investigation into the rocket malfunction is likely the more pressing matter.

SpaceX could absorb the loss of 20 Starlink satellites relatively easily. The company’s satellite assembly line can produce 20 Starlink spacecraft in a few days. But the Falcon 9 rocket’s dependability and high flight rate have made it a workhorse for NASA, the US military, and the wider space industry. An investigation will probably delay several upcoming SpaceX flights.

The first in-flight failure for SpaceX’s Falcon rocket family since June 2015, a streak of 344 consecutive successful launches until tonight.

A lot of unusual ice was observed on the Falcon 9’s upper stage during its first burn tonight, some of it falling into the engine plume. https://t.co/1vc3P9EZjj pic.twitter.com/fHO73MYLms

— Stephen Clark (@StephenClark1) July 12, 2024

Depending on the cause of the problem and what SpaceX must do to fix it, it’s possible the company can recover from the upper stage failure and resume launching Starlink satellites soon. Most of SpaceX’s launches aren’t for external customers, but deploy satellites for the company’s own Starlink network. This gives SpaceX a unique flexibility to quickly return to flight with the Falcon 9 without needing to satisfy customer concerns.

The Federal Aviation Administration, which licenses all commercial space launches in the United States, will require SpaceX to conduct a mishap investigation before resuming Falcon 9 flights.

“The FAA will be involved in every step of the investigation process and must approve SpaceX’s final report, including any corrective actions,” an FAA spokesperson said. “A return to flight is based on the FAA determining that any system, process, or procedure related to the mishap does not affect public safety.”

Two crew missions are supposed to launch on SpaceX’s human-rated Falcon 9 rocket in the next six weeks, but those launch dates are now in doubt.

The all-private Polaris Dawn mission, commanded by billionaire Jared Isaacman, is scheduled to launch on a Falcon 9 rocket on July 31 from NASA’s Kennedy Space Center in Florida. Isaacman and three commercial astronaut crewmates will spend five days in orbit on a mission that will include the first commercial spacewalk outside their Crew Dragon capsule, using new pressure suits designed and built by SpaceX.

NASA’s next crew mission with SpaceX is slated to launch from Florida aboard a Falcon 9 rocket around August 19. This team of four astronauts will replace a crew of four who have been on the International Space Station since March.

Some customers, especially NASA’s commercial crew program, will likely want to see the results of an in-depth inquiry and require SpaceX to string together a series of successful Falcon 9 flights with Starlink satellites before clearing their own missions for launch. SpaceX has already launched 70 flights with its Falcon family of rockets since January 1, an average cadence of one launch every 2.7 days, more than the combined number of orbital launches by all other nations this year.

With this rapid-fire launch cadence, SpaceX could quickly demonstrate the fitness of any fixes engineers recommend to resolve the problem that caused Thursday night’s failure. But investigations into rocket failures often take weeks or months. It was too soon, early on Friday, to know the true impact of the upper stage malfunction on SpaceX’s launch schedule.

“Superhuman” Go AIs still have trouble defending against these simple exploits

Man vs. machine —

Plugging up “worst-case” algorithmic holes is proving more difficult than expected.

Man vs. machine in a sea of stones.

Getty Images

In the ancient Chinese game of Go, state-of-the-art artificial intelligence has generally been able to defeat the best human players since at least 2016. But in the last few years, researchers have discovered flaws in these top-level AI Go algorithms that give humans a fighting chance. By using unorthodox “cyclic” strategies—ones that even a beginning human player could detect and defeat—a crafty human can often exploit gaps in a top-level AI’s strategy and fool the algorithm into a loss.

Researchers at MIT and FAR AI wanted to see if they could improve this “worst case” performance in otherwise “superhuman” AI Go algorithms, testing a trio of methods to harden the top-level KataGo algorithm’s defenses against adversarial attacks. The results show that creating truly robust, unexploitable AIs may be difficult, even in areas as tightly controlled as board games.

Three failed strategies

In the pre-print paper “Can Go AIs be adversarially robust?”, the researchers aim to create a Go AI that is truly “robust” against any and all attacks. That means an algorithm that can’t be fooled into “game-losing blunders that a human would not commit” but also one that would require any competing AI algorithm to spend significant computing resources to defeat it. Ideally, a robust algorithm should also be able to overcome potential exploits by using additional computing resources when confronted with unfamiliar situations.

An example of the original cyclic attack in action.

The researchers tried three methods to generate such a robust Go algorithm. In the first, they simply fine-tuned the KataGo model using more examples of the unorthodox cyclic strategies that previously defeated it, hoping that KataGo could learn to detect and defeat these patterns after seeing more of them.

This strategy initially seemed promising, letting KataGo win 100 percent of games against a cyclic “attacker.” But after the attacker itself was fine-tuned (a process that used much less computing power than KataGo’s fine-tuning), that win rate fell back down to 9 percent against a slight variation on the original attack.

For their second defense attempt, the researchers iterated a multi-round “arms race” in which new adversarial models discover novel exploits and new defensive models seek to plug up those newly discovered holes. After 10 rounds of such iterative training, the final defending algorithm still won only 19 percent of games against a final attacking algorithm that had discovered a previously unseen variation on the exploit. This was true even as the updated algorithm maintained an edge against earlier attackers that it had been trained against in the past.
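As a sketch of the loop being described (not code from the paper, whose defense and attack steps each involve full Go training runs), the arms race alternates a cheap attacker-tuning step with a more expensive defender-tuning step. Every function below is a hypothetical stand-in used only to show the structure:

```python
# Toy sketch of the iterated "arms race" defense. All functions are
# hypothetical stand-ins, not KataGo or the paper's actual training code.
import random

def train_attacker(defender):
    # Stand-in: cheaply fine-tune an adversary against the current defender.
    return {"id": random.random(), "strength": random.uniform(0.7, 0.95)}

def fine_tune_defender(defender, attacker):
    # Stand-in: patch the defender against the specific attacker it just saw.
    patched = dict(defender)
    patched.setdefault("patched_against", set()).add(attacker["id"])
    return patched

def win_rate(defender, attacker):
    # Stand-in evaluation: the defender handles attackers it trained against,
    # but a freshly tuned attacker still finds holes.
    if attacker["id"] in defender.get("patched_against", set()):
        return 0.95
    return 1.0 - attacker["strength"]

defender = {"name": "katago-like"}
for round_num in range(10):                 # 10 rounds, as in the paper
    attacker = train_attacker(defender)     # adversary finds a new exploit
    print(f"round {round_num}: defender wins {win_rate(defender, attacker):.0%}")
    defender = fine_tune_defender(defender, attacker)
```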

Even a child can beat a world-class Go AI if they know the right algorithm-exploiting strategy.

Getty Images

In their final attempt, researchers tried a completely new type of training using vision transformers, in an attempt to avoid what might be “bad inductive biases” found in the convolutional neural networks that initially trained KataGo. This method also failed, winning only 22 percent of the time against a variation on the cyclic attack that “can be replicated by a human expert,” the researchers wrote.

Will anything work?

In all three defense attempts, the KataGo-beating adversaries didn’t represent some new, previously unseen height in general Go-playing ability. Instead, these attacking algorithms were laser-focused on discovering exploitable weaknesses in an otherwise performant AI algorithm, even if those simple attack strategies would lose to most human players.

Those exploitable holes highlight the importance of evaluating “worst-case” performance in AI systems, even when the “average-case” performance can seem downright superhuman. On average, KataGo can dominate even high-level human players using traditional strategies. But in the worst case, otherwise “weak” adversaries can find holes in the system that make it fall apart.

It’s easy to extend this kind of thinking to other types of generative AI systems. LLMs that can succeed at some complex creative and reference tasks might still utterly fail when confronted with trivial math problems (or even get “poisoned” by malicious prompts). Visual AI models that can describe and analyze complex photos may nonetheless fail horribly when presented with basic geometric shapes.

If you can solve these kinds of puzzles, you may have better visual reasoning than state-of-the-art AIs.

Improving these kinds of “worst case” scenarios is key to avoiding embarrassing mistakes when rolling an AI system out to the public. But this new research shows that determined “adversaries” can often discover new holes in an AI algorithm’s performance much more quickly and easily than that algorithm can evolve to fix those problems.

And if that’s true in Go—a monstrously complex game that nonetheless has tightly defined rules—it might be even more true in less controlled environments. “The key takeaway for AI is that these vulnerabilities will be difficult to eliminate,” FAR CEO Adam Gleave told Nature. “If we can’t solve the issue in a simple domain like Go, then in the near-term there seems little prospect of patching similar issues like jailbreaks in ChatGPT.”

Still, the researchers aren’t despairing. While none of their methods were able to “make [new] attacks impossible” in Go, their strategies were able to plug up unchanging “fixed” exploits that had been previously identified. That suggests “it may be possible to fully defend a Go AI by training against a large enough corpus of attacks,” they write, with proposals for future research that could make this happen.

Regardless, this new research shows that making AI systems more robust against worst-case scenarios might be at least as valuable as chasing new heights of humanlike (or superhuman) capability.

Much of Neanderthal genetic diversity came from modern humans

A large, brown-colored skull seen in profile against a black background.

The basic outline of the interactions between modern humans and Neanderthals is now well established. The two came in contact as modern humans began their major expansion out of Africa, which occurred roughly 60,000 years ago. Humans picked up some Neanderthal DNA through interbreeding, while the Neanderthal population, always fairly small, was swept away by the waves of new arrivals.

But there are some aspects of this big-picture view that don’t entirely line up with the data. While it nicely explains the fact that Neanderthal sequences are far more common in non-African populations, it doesn’t account for the fact that every African population we’ve looked at has some DNA that matches up with Neanderthal DNA.

A study published on Thursday argues that much of this match came about because an early modern human population also left Africa and interbred with Neanderthals. But in this case, the result was to introduce modern human DNA to the Neanderthal population. The study shows that this DNA accounts for a lot of Neanderthals’ genetic diversity, suggesting that their population was even smaller than earlier estimates had suggested.

Out of Africa early

This study isn’t the first to suggest that modern humans and their genes met Neanderthals well in advance of our major out-of-Africa expansion. The key to understanding this is the genome of a Neanderthal from the Altai region of Siberia, which dates from roughly 120,000 years ago. That’s well before modern humans expanded out of Africa, yet its genome has some regions that have excellent matches to the human genome but are absent from the Denisovan lineage.

One explanation for this is that these are segments of Neanderthal DNA that were later picked up by the population that expanded out of Africa. The problem with that view is that most of these sequences also show up in African populations. So, researchers advanced the idea that an ancestral population of modern humans left Africa about 200,000 years ago, and some of its DNA was retained by Siberian Neanderthals. That’s consistent with some fossil finds that place anatomically modern humans in the Mideast at roughly the same time.

There is, however, an alternative explanation: Some of the population that expanded out of Africa 60,000 years ago and picked up Neanderthal DNA migrated back to Africa, taking the Neanderthal DNA with them. That has led to a small bit of the Neanderthal DNA persisting within African populations.

To sort this all out, a research team based at Princeton University focused on the Neanderthal DNA found in Africans, taking advantage of the fact that we now have a much larger array of completed human genomes (approximately 2,000 of them).

The work was based on a simple hypothesis. All of our work on Neanderthal DNA indicates that their population was relatively small, and thus had less genetic diversity than modern humans did. If that’s the case, then the addition of modern human DNA to the Neanderthal population should have boosted its genetic diversity. If so, then the stretches of “Neanderthal” DNA found in African populations should include some of the more diverse regions of the Neanderthal genome.
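As a toy illustration of that test (not the Princeton team’s actual pipeline), nucleotide diversity is just the average number of pairwise differences per site across a set of aligned sequences; regions carrying introgressed modern-human DNA should score higher than the rest of the Neanderthal genome. The sequences below are invented for demonstration.

```python
# Toy nucleotide-diversity (pi) comparison. The aligned fragments are made up
# purely to illustrate the hypothesis described above.
from itertools import combinations

def nucleotide_diversity(seqs):
    """Average pairwise differences per site across a set of aligned sequences."""
    pairs = list(combinations(seqs, 2))
    if not pairs:
        return 0.0
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * len(seqs[0]))

# Hypothetical fragments: a "Neanderthal-matching" region also seen in African
# genomes vs. a region with no modern-human introgression.
introgressed_region = ["ACGTACGT", "ACGTACGA", "ACCTACGT", "ACGTTCGT"]
non_introgressed    = ["GGCTAGCT", "GGCTAGCT", "GGCTAGCT", "GGCTAGTT"]

print("pi, introgressed-like region:", nucleotide_diversity(introgressed_region))
print("pi, other region:            ", nucleotide_diversity(non_introgressed))
```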

$500 aluminum version of the Analogue Pocket looks like the Game Boy’s final form

so metal —

Other Pocket iterations have stuck to colorful (and cheaper) plastic.

Analogue is launching another limited-edition version of its Pocket console, this time with an anodized aluminum body and buttons.

Analogue

Analogue has released multiple variations of the Analogue Pocket, its Game Boy-style handheld console that can play old cartridges and game ROMs using its FPGA chip. But until now, all of those designs have been riffs on the regular Pocket’s black (or white) plastic shell.

The company’s latest Pocket iteration might appeal more to people who prefer the solidity and durability of anodized aluminum to the cheap practicality of plastic. On July 15, the company will release a limited run of all-aluminum Analogue Pocket consoles in four different colors: white, gray, black, and a Game Boy Advance-esque indigo. The company says that “every single piece” of these consoles is “entirely CNC’d from aluminum,” including not just the frame but also all of the buttons.

The new material will cost you, though: Each aluminum Pocket sells for $500, over twice as much as the $220 price of a regular plastic Pocket.

The aluminum versions of the Pocket will run the exact same software as the standard plastic ones and will be compatible with all the same cartridges and accessories. Analogue’s site doesn’t compare the weight of the aluminum and plastic Pocket consoles, though intuitively we’d expect the metal one to be heavier. The aluminum consoles begin shipping on July 17.

An exploded view of the new Pocket; even the buttons are aluminum.

Analogue

When the Pocket first launched in late 2021, ongoing supply chain disruptions and high demand led to monthslong wait times for the initial models. Things have gotten slightly better since then—you still can’t simply open Analogue’s store on any given day and buy one, but the basic black and white plastic models restock with some regularity. Analogue has also released multiple special edition runs of the handheld, including one made of glow-in-the-dark plastic and a colorful series of models that recall Nintendo’s mid-’90s “Play It Loud!” hardware refresh for the original Game Boy.

As much as we liked the Pocket in our original review, the hardware has gotten much more capable thanks to a series of post-launch firmware updates. In the summer of 2022, Analogue added OpenFPGA support to the Pocket, allowing its FPGA chip to emulate consoles like the NES, SNES, Sega Genesis, and others beyond the portable systems the Pocket was designed to emulate. Updates toward the end of 2023 allowed those third-party emulation cores to use their own display filters, replicating the look of classic CRT TVs and other displays.

The updates have also fixed multiple bugs in the system. The latest update is version 2.2, released back in March, which primarily adds support for the Analogue Pocket Adapter Set that allows other kinds of vintage game cartridges to plug in to the Pocket’s cartridge slot.

DVDs are dying right as streaming has made them appealing again

RIP Redbox —

You don’t know what you’ve got till it’s gone.

A Redbox movie rental kiosk stands outside a CVS store.

Since 2004, red DVD rental kiosks posted near entrances of grocery stores and the like tempted shoppers with movie (and until 2019, video game) disc rentals. But the last 24,000 of Redbox’s kiosks are going away, as Redbox’s parent company moved to chapter 7 liquidation bankruptcy this week. The end of Redbox marks another death knell for the DVD industry at a time when volatile streaming services are making physical media appealing again.

Redbox shutting down

Chicken Soup for the Soul Entertainment, which owns Redbox, filed for chapter 11 bankruptcy on June 29. But on Wednesday, Judge Thomas M. Horan of the US Bankruptcy Court for the District of Delaware approved a conversion to chapter 7, signaling the liquidation of the business, per Deadline. Redbox’s remaining 24,000 kiosks will close, and 1,000 workers will be laid off (severance and back pay eligibility are under review, and a bankruptcy trustee will investigate whether trust funds intended for employees were misappropriated).

Chicken Soup bought Redbox for $375 million in 2022 and is $970 million in debt. It will also be shuttering its Redbox, Crackle, and Popcornflix streaming services.

DVDs in decline

As a DVD-centric business, Redbox was living on borrowed time. The convenience of on-demand streaming made it hard to compete, and bankruptcy proceedings revealed that Redbox was paying employees more than it was earning.

Overall, the past year hasn’t been a good one for DVD or Blu-ray devotees, as many businesses announced that they’re exiting the industry. In August, Netflix quit its original business of mailing out rental DVDs. By then the king of streaming, Netflix considered its remaining DVD business so marginal that it simply gave the discs away as it shut down operations.

Once industry disruptors, DVDs and Blu-rays have been further ushered out the door in 2024. In April, Target confirmed that it will only sell DVDs in stores during “key times,” like the winter holiday season or the release of a newer movie to DVD. The news hit especially hard considering Best Buy ended DVD and Blu-ray sales in-store and online this year. Disney is outsourcing its DVD and Blu-ray business to Sony, and Sony this month revealed plans to stop selling recordable Blu-rays to consumers (it hasn’t decided when yet).

Bad timing

It’s sensible for businesses to shift from physical media sales. Per CNBC’s calculations, DVD sales fell over 86 percent between 2008 and 2019. Research from the Motion Picture Association in 2021 found that physical media represented 8 percent of the home/mobile entertainment market in the US, falling behind digital (80 percent) and theatrical (12 percent).

But as physical media gets less lucrative and the shuttering of businesses makes optical discs harder to find, the streaming services that largely replaced them are getting aggravating and unreliable. And with the streaming industry becoming more competitive and profit-hungry than ever, you never know if the movie/show that most attracted you to a streaming service will still be available when you finally get a chance to sit down and watch. Even paid-for online libraries that were marketed as available “forever” have been ripped away from customers.

When someone buys or rents a DVD, they know exactly what content they’re paying for and how long they’ll have it (assuming they take care of the physical media). They can also watch the content if the Internet goes out, and disc playback isn’t subject to the variable compression and bandwidth limits of streaming. DVD viewers are also less likely to be bombarded with ads whenever they pause and can get around an ad-riddled smart TV home screen (nothing’s perfect; some DVDs have unskippable commercials).

Streaming isn’t likely to stabilize any time soon, either. Team-ups between streaming providers and merger/acquisition activity make the future of streaming and the quality of available services uncertain. For example, what’s ahead for Paramount+ and Pluto now that Paramount is planning a Skydance merger?

There’s also something to be said about how limiting a reliance on streaming can be for movie buffs and people with unique tastes. Treasured content, like older movies or canceled TV shows, isn’t always put on streaming services. And what does make it to streaming is sometimes altered: music gets swapped out, and controversial scenes, episodes, or embarrassing moments from live events get cut.

The closure of a DVD company like Redbox was years in the making. Some people believe it’s prudent to maintain a physical media library, but renting discs is an even more niche habit. Still, places that offer DVDs have gotten significantly rarer recently, and relying solely on an increasingly cable-like streaming industry for home entertainment is a scary proposition. Seeing an alternative option in the form of a red, slender box outside my grocery store actually sounds nice right now.

NASA update on Starliner thruster issues: This is fine

Boeing’s Starliner spacecraft on final approach to the International Space Station last month.

Before clearing Boeing’s Starliner crew capsule to depart the International Space Station and head for Earth, NASA managers want to ensure the spacecraft’s problematic control thrusters can help guide the ship’s two-person crew home.

The two astronauts who launched June 5 on the Starliner spacecraft’s first crew test flight agree with the managers, although they said Wednesday that they’re comfortable with flying the capsule back to Earth if there’s any emergency that might require evacuation of the space station.

NASA astronauts Butch Wilmore and Suni Williams were supposed to return to Earth weeks ago, but managers are keeping them at the station as engineers continue probing thruster problems and helium leaks that have plagued the mission since its launch.

“This is a tough business that we’re in,” Wilmore, Starliner’s commander, told reporters Wednesday in a news conference from the space station. “Human spaceflight is not easy in any regime, and there have been multiple issues with any spacecraft that’s ever been designed, and that’s the nature of what we do.”

Five of the 28 reaction control system thrusters on Starliner’s service module dropped offline as the spacecraft approached the space station last month. Starliner’s flight software disabled the five control jets when they started overheating and losing thrust. Four of the thrusters were later recovered, although some couldn’t reach their full power levels as Starliner came in for docking.

Wilmore, who took over manual control for part of Starliner’s approach to the space station, said he could sense the spacecraft’s handling qualities diminish as thrusters temporarily failed. “You could tell it was degraded, but still, it was impressive,” he said. Starliner ultimately docked to the station in autopilot mode.

In mid-June, the Starliner astronauts hot-fired the thrusters again, and their thrust levels were closer to normal.

“What we want to know is that the thrusters can perform; if whatever their percentage of thrust is, we can put it into a package that will get us a deorbit burn,” said Williams, a NASA astronaut serving as Starliner’s pilot. “That’s the main purpose that we need [for] the service module: to get us a good deorbit burn so that we can come back.”

These small thrusters aren’t necessary for the deorbit burn itself, which will use a different set of engines to slow Starliner’s velocity enough for it to drop out of orbit and head for landing. But Starliner needs enough of the control jets working to maneuver into the proper orientation for the deorbit firing.

This test flight is the first time astronauts have flown in space on Boeing’s Starliner spacecraft, following years of delays and setbacks. Starliner is NASA’s second human-rated commercial crew capsule, and it’s poised to join SpaceX’s Crew Dragon in a rotation of missions ferrying astronauts to and from the space station through the rest of the decade.

But first, Boeing and NASA need to safely complete the Starliner test flight and resolve the thruster problems and helium leaks plaguing the spacecraft before moving forward with operational crew rotation missions. There’s a Crew Dragon spacecraft currently docked to the station, but Steve Stich, NASA’s commercial crew program manager, told reporters Wednesday that, right now, Wilmore and Williams still plan to come home on Starliner.

“The beautiful thing about the commercial crew program is that we have two vehicles, two different systems, that we could use to return crew,” Stich said. “So we have a little bit more time to go through the data and then make a decision as to whether we need to do anything different. But the prime option today is to return Butch and Suni on Starliner. Right now, we don’t see any reason that wouldn’t be the case.”

Mark Nappi, Boeing’s Starliner program manager, said officials identified more than 30 actions to investigate five “small” helium leaks and the thruster problems on Starliner’s service module. “All these items are scheduled to be completed by the end of next week,” Nappi said.

“It’s a test flight, and the first with crew, and we’re just taking a little extra time to make sure that we understand everything before we commit to deorbit,” Stich said.

Nearby star cluster houses unusually large black hole

Big, but not that big —

Fast-moving stars imply that there’s an intermediate-mass black hole there.

From left to right, zooming in from the globular cluster to the site of its black hole.

ESA/Hubble & NASA, M. Häberle

Supermassive black holes appear to reside at the center of every galaxy and to have done so since galaxies formed early in the history of the Universe. Currently, however, we can’t entirely explain their existence, since it’s difficult to understand how they could have grown large enough to cross the supermassive threshold as quickly as they did.

A possible bit of evidence was recently found by using about 20 years of data from the Hubble Space Telescope. The data comes from a globular cluster of stars that’s thought to be the remains of a dwarf galaxy and shows that a group of stars near the cluster’s core are moving so fast that they should have been ejected from it entirely. That implies that something massive is keeping them there, which the researchers argue is a rare intermediate-mass black hole, weighing in at over 8,000 times the mass of the Sun.

Moving fast

The fast-moving stars reside in Omega Centauri, the largest globular cluster in the Milky Way. With an estimated 10 million stars, it’s a crowded environment, but observations are aided by its relative proximity, at “only” 17,000 light-years away. Those observations have been hinting that there might be a central black hole within the globular cluster, but the evidence has not been decisive.

The new work, done by a large international team, used over 500 images of Omega Centauri, taken by the Hubble Space Telescope over the course of 20 years. This allowed them to track the motion of stars within the cluster and estimate their speed relative to the cluster’s center of mass. While this has been done previously, the most recent data allowed an update that reduced the uncertainty in the stars’ velocities.

Within the updated data, a number of stars near the cluster’s center stood out for their extreme velocities: seven of them were moving fast enough that the gravitational pull of the cluster isn’t enough to keep them there. All seven should have been lost from the cluster within 1,000 years, although the uncertainties remained large for two of them. Based on the size of the cluster, there shouldn’t even be a single foreground star between Hubble and Omega Centauri, so these really seem to be within the cluster despite their velocity.

The simplest explanation for that is that there’s an additional mass holding them in place. That could potentially be several massive objects, but the close proximity of all these stars to the center of the cluster favors a single, compact object. Which means a black hole.

Based on the velocities, the researchers estimate that the object has a mass of at least 8,200 times that of the Sun. A couple of stars appear to be accelerating; if that holds up based on further observations, it would indicate that the black hole is over 20,000 solar masses. That places it firmly within black hole territory, though smaller than supermassive black holes, which are viewed as those with roughly a million solar masses or more. And it’s considerably larger than you’d expect from black holes formed through the death of a star, which aren’t expected to be much larger than 100 times the Sun’s mass.
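The mass estimate rests on a simple bound: a star at radius r that moves faster than the local escape velocity implies an enclosed mass of at least M ≥ v²r/2G. The velocity and radius in the sketch below are round, illustrative numbers, not the values measured in the study.

```python
# Minimum enclosed mass needed to gravitationally bind a fast-moving star:
# escape velocity v_esc = sqrt(2GM/r), so M >= v^2 * r / (2G).
# v and r below are round, illustrative numbers, not the study's measurements.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PARSEC = 3.086e16    # meters

v = 30e3             # star's speed: 30 km/s (illustrative)
r = 0.08 * PARSEC    # distance from the cluster center (illustrative)

m_min = v**2 * r / (2 * G)
print(f"minimum bound mass: {m_min / M_SUN:,.0f} solar masses")
# ~8,400 solar masses with these inputs -- the same order as the study's
# ~8,200 solar-mass lower limit.
```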

This places it in the category of intermediate-mass black holes, of which there are only a handful of potential sightings, none of them universally accepted. So, this is a significant finding if for no other reason than it may be the least controversial spotting of an intermediate-mass black hole yet.

What’s this telling us?

For now, there are still considerable uncertainties in some of the details here—but prospects for improving the situation exist. Observations with the Webb Space Telescope could potentially pick up the faint emissions from gas that’s falling into the black hole, and it can track the seven stars identified here. Its spectrographs could also potentially pick up the red and blue shifts in light caused by the stars’ motion. Its location at a considerable distance from Hubble could also provide a more detailed three-dimensional picture of Omega Centauri’s central structure.

Figuring this out could potentially tell us more about how black holes grow to supermassive scales. Earlier potential sightings of intermediate-mass black holes have also come in globular clusters, which may suggest that they’re a general feature of large gatherings of stars.

But Omega Centauri differs from many other globular clusters, which often contain large populations of stars that all formed at roughly the same time, suggesting the clusters formed from a single giant cloud of materials. Omega Centauri has stars with a broad range of ages, which is one of the reasons why people think it’s the remains of a dwarf galaxy that was sucked into the Milky Way.

If that’s the case, then its central black hole is an analog of the supermassive black holes found in actual dwarf galaxies—which raises the question of why it’s only intermediate-mass. Did something about its interactions with the Milky Way interfere with the black hole’s growth?

And, in the end, none of this sheds light on how any black hole grows to be so much more massive than any star it could conceivably have formed from. Getting a better sense of this black hole’s history could provide more perspective on some questions that are currently vexing astronomers.

Nature, 2024. DOI: 10.1038/s41586-024-07511-z

In bid to loosen Nvidia’s grip on AI, AMD to buy Finnish startup for $665M

AI tech stack —

The acquisition is the largest of its kind in Europe in a decade.

AMD is to buy Finnish artificial intelligence startup Silo AI for $665 million in one of the largest such takeovers in Europe as the US chipmaker seeks to expand its AI services to compete with market leader Nvidia.

California-based AMD said Silo’s 300-member team would use its software tools to build custom large language models (LLMs), the kind of AI technology that underpins chatbots such as OpenAI’s ChatGPT and Google’s Gemini. The all-cash acquisition is expected to close in the second half of this year, subject to regulatory approval.

“This agreement helps us both accelerate our customer engagements and deployments while also helping us accelerate our own AI tech stack,” Vamsi Boppana, senior vice president of AMD’s artificial intelligence group, told the Financial Times.

The acquisition is the largest of a privately held AI startup in Europe since Google acquired UK-based DeepMind for around 400 million pounds in 2014, according to data from Dealroom.

The deal comes at a time when buyouts by Silicon Valley companies have come under tougher scrutiny from regulators in Brussels and the UK. Europe-based AI startups, including Mistral, DeepL, and Helsing, have raised hundreds of millions of dollars this year as investors seek out a local champion to rival US-based OpenAI and Anthropic.

Helsinki-based Silo AI, which is among the largest private AI labs in Europe, offers tailored AI models and platforms to enterprise customers. The Finnish company launched an initiative last year to build LLMs in European languages, including Swedish, Icelandic, and Danish.

AMD’s AI technology competes with that of Nvidia, which has taken the lion’s share of the high-performance chip market. Nvidia’s success has propelled its valuation past $3 trillion this year as tech companies push to build the computing infrastructure needed to power the biggest AI models. AMD started to roll out its MI300 chips late last year in a direct challenge to Nvidia’s “Hopper” line of chips.

Peter Sarlin, Silo AI co-founder and chief executive, called the acquisition the “logical next step” as the Finnish group seeks to become a “flagship” AI company.

Silo AI is committed to “open source” AI models, which are available for free and can be customized by anyone. This distinguishes it from the likes of OpenAI and Google, which favor their own proprietary or “closed” models.

The startup previously described its family of open models, called “Poro,” as an important step toward “strengthening European digital sovereignty” and democratizing access to LLMs.

The concentration of the most powerful LLMs into the hands of a few US-based Big Tech companies is meanwhile attracting attention from antitrust regulators in Washington and Brussels.

The Silo deal shows AMD seeking to scale its business quickly and drive customer engagement with its own offering. AMD views Silo, which builds custom models for clients, as a link between its “foundational” AI software and the real-world applications of the technology.

Software has become a new battleground for semiconductor companies as they try to lock in customers to their hardware and generate more predictable revenues, outside the boom-and-bust chip sales cycle.

Nvidia’s success in the AI market stems from its multibillion-dollar investment in Cuda, its proprietary software that allows chips originally designed for processing computer graphics and video games to run a wider range of applications.

Since starting to develop Cuda in 2006, Nvidia has expanded its software platform to include a range of apps and services, largely aimed at corporate customers that lack the in-house resources and skills that Big Tech companies have to build on its technology.

Nvidia now offers more than 600 “pre-trained” models, meaning they are simpler for customers to deploy. The Santa Clara, California-based group last month started rolling out a “microservices” platform, called NIM, which promises to let developers build chatbots and AI “co-pilot” services quickly.

Historically, Nvidia has offered its software free of charge to buyers of its chips, but said this year that it planned to charge for products such as NIM.

AMD is among several companies contributing to the development of an OpenAI-led rival to Cuda, called Triton, which would let AI developers switch more easily between chip providers. Meta, Microsoft, and Intel have also worked on Triton.

© 2024 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.

Why 1994’s Lair of Squid was the weirdest pack-in game of all time

digital archaeology —

The HP 200LX included a mysterious maze game called Lair of Squid. We tracked down the author.

Artist’s impression of a squid jumping forth from an HP 200LX.

Aurich Lawson / HP

In 1994, Hewlett-Packard released a miracle machine: the HP 200LX pocket-size PC. In the depths of the device, among the MS-DOS productivity apps built into its fixed memory, there lurked a first-person maze game called Lair of Squid. Intrigued by the game, we tracked down its author, Andy Gryc, and probed into the title’s mysterious undersea origins.

“If you ask my family, they’ll confirm that I’ve been obsessed with squid for a long time,” Gryc told Ars Technica. “It’s admittedly very goofy—and that’s my fault—although I was inspired by Doom, which had come out relatively recently.”

In Lair of Squid, you’re trapped in an underwater labyrinth, seeking a way out while avoiding squid roaming the corridors. A collision with any cephalopod results in death. To progress through each stage and ascend to the surface, you locate the exit and provide a hidden, scrambled code word. The password is initially displayed as asterisks, with letters revealed as you encounter them within the maze.
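Based purely on that description (not on the game’s actual source code), the reveal mechanic might look something like the sketch below, in which the code word and the order of letter pickups are invented for illustration:

```python
# A guess at the password-reveal mechanic described above, reconstructed from
# the article's description rather than Lair of Squid's actual code. The code
# word and the letter pickups are invented for illustration.

def masked_password(code_word, letters_found):
    """Show found letters in place and asterisks everywhere else."""
    return "".join(ch if ch in letters_found else "*" for ch in code_word)

code_word = "KRAKEN"      # hypothetical scrambled code word
found = set()

print(masked_password(code_word, found))   # ******
for letter in ["K", "E", "N"]:             # letters encountered in the maze
    found.add(letter)
    print(masked_password(code_word, found))
# K**K**, then K**KE*, then K**KEN
```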

A photo of Lair of Squid running on the author’s HP 200LX, shortly after the moment of discovery.

Benj Edwards

Buckle up for a tale of rogue coding, cephalopod obsession, and the most unexpected Easter egg in palmtop history. This is no fish story—it’s the saga of Lair of Squid.

A computer in the palm of your hand

Introduced in 1994, the HP 200LX palmtop PC put desktop functionality in a pocket-size package. With a small QWERTY keyboard, MS-DOS compatibility, and a suite of productivity apps, the clamshell 200LX offered a vision of one potential future of mobile computing. It featured a 7.91 MHz 80186 CPU, a monochrome 640×200 CGA display, and 1–4 megabytes of RAM.

The cover of the HP 200LX User’s Guide (1994).

Hewlett Packard

I’ve collected vintage computers since 1993, and people frequently offer to send me old devices they’d rather not throw away. Recently, a former HP engineer sent me his small but nice collection of ’90s HP handheld palmtop computers, including a 95LX (1991), 100LX (1993), and 200LX.

HP designed its portable LX series to run many MS-DOS programs that feature text mode or CGA graphics, and each includes built-in versions of the Lotus 1-2-3 spreadsheet, a word processor, terminal program, calculator, and more.

I owned a 95LX as a kid (a hand-me-down from my dad’s friend), which came with a simplistic overhead maze game called TigerFox. So imagine my surprise in 2024, when trawling through the productivity and personal organization apps on that 200LX, to find a richly detailed first-person maze game based around cephalopods, of all things.

(I was less surprised to find an excellent built-in Minesweeper clone, Hearts and Bones, which is definitely a more natural fit for the power and form of the 200LX itself.)

Lair of Squid isn’t a true Doom clone since it’s not a first-person shooter (in some ways, it’s more like a first-person Pac-Man without pellets), but its mere existence—on a black-and-white device best suited for storing phone numbers and text notes—deserves note as one of the weirdest and most interesting pack-in games to ever exist.

Just after discovering Lair of Squid on my device earlier this year, I tweeted about it, and I extracted the file for the game (called “maze.exe”) from the internal ROM drive and sent it to DOS gaming historian Anatoly Shashkin, who put the game on The Internet Archive so anyone can play it in their browser.

After that, I realized that I wanted to figure out who wrote this quirky game, and thanks to a post on RGB Classic Games, I found a name: Andy Gryc. With some luck in cold-emailing, I found him.

Kathryn Hahn is ready to walk the Witch’s Road in Agatha All Along trailer

Kathryn Hahn reprises her WandaVision role as Agatha Harkness in the spinoff series Agatha All Along.

The true identity of nosy next-door neighbor Agatha—played to perfection by Kathryn Hahn—was the big reveal of 2021’s WandaVision, even inspiring a meta-jingle that went viral. Now Hahn is bringing the character back for her own standalone adventure with Agatha All Along. Based on the first trailer, it looks like a lot of dark, spooky fun, just in time for the Halloween season. The nine-episode series is one of the TV series in the MCU’s Phase Five, coming on the heels of Secret Invasion, Loki S2, What If…? S2, and Echo.

(Spoilers for WandaVision below.)

WandaVision was set immediately after the events of Avengers: Endgame (but before Spider-Man: Far From Home), with newlyweds Wanda and Vision starting their married life in the town of Westview, New Jersey. Wacky hijinks ensued as the couple tried to lead a normal life while hiding their superpowers from their neighbors—especially Hahn’s nosy Agnes. Each episode was shot in the style of a particular era of sitcom television, from the 1950s through the 2000s. The couple noticed more and more jarring elements—a full-color drone, a voice calling out to Wanda over the radio, neighbors briefly breaking character—hinting that this seemingly idyllic suburban existence might not be what it seemed.

We learned that a grief-stricken Wanda had inadvertently locked the entire town in a reality-warping Hex, with the residents forced to play their sitcom “roles” and adhere to Wanda’s “script,” creating the happily ever after ending she never got with Vision. But the hijinks weren’t all due to Wanda’s powers. Agnes turned out to be a powerful witch named Agatha Harkness, who had studied magic for centuries and was just dying to learn the source of Wanda’s incredible power. Wanda’s natural abilities were magnified by the Mind Stone, but Agatha realized that Wanda was a wielder of “chaos magic.” She was, in fact, the Scarlet Witch. In the finale, Wanda trapped Agatha in her nosy neighbor persona while releasing the rest of the town.

Agatha All Along has been in the works since 2021, officially announced in November of that year. There were numerous title changes, culminating this May with my personal goofy favorite: Agatha: The Lying Witch with Great Wardrobe (a nod to C.S. Lewis). It briefly appeared on the Marvel Twitter account before being taken down, and Disney soon revealed that the various name changes were “orchestrated by [Harkness] as a way of messing with Marvel fans.” Head writer Jac Schaeffer (who also created WandaVision) has said that the series would follow Agatha as she forms her own coven with “a disparate mixed bag of witches… defined by deception, treachery, villainy, and selfishness” who must learn to work together. And apparently we can expect a few more catchy tunes.

Massive car dealer ransom attack is mostly over after 2 weeks of work-arounds

CDK Global car dealer outage —

CDK outage likely dragged down June auto sales and may have cost dealers more than $600M.

Vehicles for sale at an AutoNation Honda dealership in Fremont, California, US, on Monday, June 24, 2024.

Getty Images

After “cyber incidents” on June 19 and 20 took down CDK Global, a software-as-a-service vendor for more than 15,000 car dealerships, forum and Reddit comments by service tech workers and dealers advised their compatriots to prepare for weeks, not days, before service was restored.

That sentiment proved accurate: CDK Global most recently said it expected to have “all dealers’ connections” working by July 3 or 4, roughly two weeks after the attack. Posts across various dealer-related subreddits today suggest CDK’s main services are mostly restored, if not entirely. Restoration of services is a mixed blessing for some workers, as huge backlogs of paperwork now need to be entered into digital systems.

Bloomberg reported on June 21 that a ransomware gang, BlackSuit, had demanded “tens of millions of dollars” from CDK and that the company was planning to pay that amount, according to a source familiar with the matter. CDK later told its clients on June 25 that the attack was a “cyber ransom event,” and that restoring services would take “several days and not weeks.” Allan Liska, with analyst Recorded Future, told Bloomberg that BlackSuit was responsible for at least 95 other recorded ransomware breaches around the world.

Lisa Finney, senior manager for external communications at CDK, told Ars on Monday that the firm had no additional information to provide about the attacks, service restoration, or plans for dealers preparing against future attacks.

During the outage, many dealerships pivoted from all-in-one software platforms to pens, paper, Excel sheets, phone calls, and, in some cases, alternative local software. Car Dealership Guy rounded up some of the dealerships’ work-arounds. Repair part numbers, hours, and partial VINs were being tracked in Excel. Lots of dealers grabbed the last contracts they had on hand, blanked out customer information, and made editable PDFs out of them.

Lots of dealers and service managers advocated preparing for the next outage with “no Internet days.” Others noted that the steps some dealerships were taking, like using their own phones for contacting sales leads, could run afoul of privacy and “Do not call” provisions.

Anderson Economic Group, a Michigan-based auto analyst, estimated that CDK’s shutdown cost auto dealers more than $600 million over a two-week period. CDK’s outage is expected to play a large part in a June car sales slump.

The president ordered a board to probe a massive Russian cyberattack. It never did.

In this photo illustration, a Microsoft logo seen displayed on a smartphone with a Cyber Security illustration image in the background.

This story was originally published by ProPublica.

Investigating how the world’s largest software provider handles the security of its own ubiquitous products.

After Russian intelligence launched one of the most devastating cyber espionage attacks in history against US government agencies, the Biden administration set up a new board and tasked it to figure out what happened—and tell the public.

State hackers had infiltrated SolarWinds, an American software company that serves the US government and thousands of American companies. The intruders used malicious code and a flaw in a Microsoft product to steal intelligence from the National Nuclear Security Administration, National Institutes of Health, and the Treasury Department in what Microsoft President Brad Smith called “the largest and most sophisticated attack the world has ever seen.”

The president issued an executive order establishing the Cyber Safety Review Board in May 2021 and ordered it to start work by reviewing the SolarWinds attack.

But for reasons that experts say remain unclear, that never happened.

Nor did the board probe SolarWinds for its second report.

For its third, the board investigated a separate 2023 attack, in which Chinese state hackers exploited an array of Microsoft security shortcomings to access the email inboxes of top federal officials.

A full, public accounting of what happened in the SolarWinds case would have been devastating to Microsoft. ProPublica recently revealed that Microsoft had long known about—but refused to address—a flaw used in the hack. The tech company’s failure to act reflected a corporate culture that prioritized profit over security and left the US government vulnerable, a whistleblower said.

The board was created to help address the serious threat posed to the US economy and national security by sophisticated hackers who consistently penetrate government and corporate systems, making off with reams of sensitive intelligence, corporate secrets, or personal data.

For decades, the cybersecurity community has called for a cyber equivalent of the National Transportation Safety Board, the independent agency required by law to investigate and issue public reports on the causes and lessons learned from every major aviation accident, among other incidents. The NTSB is funded by Congress and staffed by experts who work outside of the industry and other government agencies. Its public hearings and reports spur industry change and action by regulators like the Federal Aviation Administration.

So far, the Cyber Safety Review Board has charted a different path.

The board is not independent—it’s housed in the Department of Homeland Security. Rob Silvers, the board chair, is a Homeland Security undersecretary. Its vice chair is a top security executive at Google. The board does not have full-time staff, subpoena power or dedicated funding.

Silvers told ProPublica that DHS decided the board didn’t need to do its own review of SolarWinds as directed by the White House because the attack had already been “closely studied” by the public and private sectors.

“We want to focus the board on reviews where there is a lot of insight left to be gleaned, a lot of lessons learned that can be drawn out through investigation,” he said.

As a result, there has been no public examination by the government of the unaddressed security issue at Microsoft that was exploited by the Russian hackers. None of the SolarWinds reports identified or interviewed the whistleblower who exposed problems inside Microsoft.

By declining to review SolarWinds, the board failed to discover the central role that Microsoft’s weak security culture played in the attack and to spur changes that could have mitigated or prevented the 2023 Chinese hack, cybersecurity experts and elected officials told ProPublica.

“It’s possible the most recent hack could have been prevented by real oversight,” Sen. Ron Wyden, a Democratic member of the Senate Select Committee on Intelligence, said in a statement. Wyden has called for the board to review SolarWinds and for the government to improve its cybersecurity defenses.

In a statement, a spokesperson for DHS rejected the idea that a SolarWinds review could have exposed Microsoft’s failings in time to stop or mitigate the Chinese state-based attack last summer. “The two incidents were quite different in that regard, and we do not believe a review of SolarWinds would have necessarily uncovered the gaps identified in the Board’s latest report,” they said.

The board’s other members declined to comment, referred inquiries to DHS or did not respond to ProPublica.

In past statements, Microsoft did not dispute the whistleblower’s account but emphasized its commitment to security. “Protecting customers is always our highest priority,” a spokesperson previously told ProPublica. “Our security response team takes all security issues seriously and gives every case due diligence with a thorough manual assessment, as well as cross-confirming with engineering and security partners.”

The board’s failure to probe SolarWinds also underscores a question critics including Wyden have raised about the board since its inception: whether a board with federal officials making up its majority can hold government agencies responsible for their role in failing to prevent cyberattacks.

“I remain deeply concerned that a key reason why the Board never looked at SolarWinds—as the President directed it to do so—was because it would have required the board to examine and document serious negligence by the US government,” Wyden said. Among his concerns is a government cyberdefense system that failed to detect the SolarWinds attack.

Silvers said while the board did not investigate SolarWinds, it has been given a pass by the independent Government Accountability Office, which said in an April study examining the implementation of the executive order that the board had fulfilled its mandate to conduct the review.

The GAO’s determination puzzled cybersecurity experts. “Rob Silvers has been declaring by fiat for a long time that the CSRB did its job regarding SolarWinds, but simply declaring something to be so doesn’t make it true,” said Tarah Wheeler, the CEO of Red Queen Dynamics, a cybersecurity firm, who co-authored a Harvard Kennedy School report outlining how a “cyber NTSB” should operate.

Silvers said the board’s first and second reports, while not probing SolarWinds, resulted in important government changes, such as new Federal Communications Commission rules related to cell phones.

“The tangible impacts of the board’s work to date speak for itself and in bearing out the wisdom of the choices of what the board has reviewed,” he said.
