Author name: DJ Henderson

Labor board confirms Amazon drivers are employees, in finding hailed by union

Driving a hard bargain —

“We are Amazon workers”: Delivery drivers celebrate labor board finding.

Amazon may be forced to meet some unionized delivery drivers at the bargaining table after a regional National Labor Relations Board (NLRB) director determined Thursday that Amazon is a joint employer of contractors hired to ensure the e-commerce giant delivers its packages when promised.

This is a potentially big loss for Amazon, which has long argued that its delivery service partners (DSPs), not Amazon, exclusively employed the delivery drivers. By denying that it was the drivers’ employer, Amazon previously argued that it had no duty to bargain with driver unions and no responsibility for alleged union busting, The Washington Post reported.

But now, after a yearlong investigation, the NLRB has issued what the Amazon delivery drivers’ union has called “a groundbreaking decision that sets the stage for Amazon delivery drivers across the country to organize with the Teamsters.”

In a press release reviewed by Ars, the NLRB regional director confirmed that as a joint employer, Amazon had “unlawfully failed and refused to bargain with the union” after terminating their DSP’s contract and terminating “all unionized employees.” The NLRB found that rather than bargaining with the union, Amazon “delayed start times by grounding vans and not preparing packages for loading,” withheld information from the union, and “made unlawful threats.” Teamsters said those threats included “job loss” and “intimidating employees with security guards.”

Sean M. O’Brien, the Teamsters general president, claimed the win for drivers unionizing not just in California but for nearly 280,000 drivers nationwide.

“Amazon drivers have taken their future into their own hands and won a monumental determination that makes clear Amazon has a legal obligation to bargain with its drivers over their working conditions,” O’Brien said. “This strike has paved the way for every other Amazon worker in the country to demand what they deserve and to get Amazon to the bargaining table.”

Unless a settlement is reached, the NLRB will soon “issue a complaint against Amazon and prosecute the corporate giant at a trial” after finding that “Amazon engaged in a long list of egregious unfair labor practices at its Palmdale facility,” Teamsters said.

Downplaying the NLRB determination, Amazon claims that the Teamsters are trying to “misrepresent what is happening here.” Amazon appears to be taking issue with the union touting a merit determination in its case as a major win when the NLRB has yet to issue a final ruling.

According to the NLRB’s press release, “a merit determination is not a ‘Board decision/ruling’—it is the first step in the NLRB’s General Counsel litigating the allegations after investigating an unfair labor practice charge.”

Amazon’s spokesperson, Eileen Hards, told Ars that the NLRB office confirmed to Amazon that it will be “dismissing most of the Teamsters’ more significant claims it filed last year in Palmdale.” That apparently includes dismissing the Teamsters’ claims that Amazon unlawfully terminated its contract with one of their DSPs and that Amazon had a legal obligation to honor the Teamsters’ contract with that DSP.

Next, the NLRB will determine if the “remaining allegations should be decided by an administrative law judge,” Hards said. After that, Amazon will have opportunities to appeal any unfavorable rulings, first to the Board and then to a federal appeals court, the NLRB confirmed to Ars.

Hards confirmed that Amazon still expects all the Teamsters’ remaining claims will be dismissed.

“As we have said all along, there is no merit to the Teamsters’ claims,” Hards told Ars. “If and when the agency decides it wants to litigate the remaining allegations, we expect they will be dismissed as well.”

But Hards declined to comment on the impacts of the NLRB’s determination that Amazon is a joint employer of the unionized delivery drivers.

One Amazon driver in Palmdale, Jessie Moreno, said that worker conditions for Amazon drivers could improve because of the determination.

“Amazon can no longer dodge responsibility for our low wages and dangerous working conditions, and it cannot continue to get away with committing unfair labor practices,” Moreno said. “We are Amazon workers, and we are holding Amazon accountable.”

Amazon drivers uniting “like never before”

The NLRB determination came following a complaint from 84 Amazon workers from Palmdale, California, who became the first Amazon delivery drivers to unionize in April 2023, represented by Teamsters Local 396.

While their DSP recognized the union, workers launched an unfair labor practice strike in June 2023 after Amazon allegedly “engaged in dozens of unfair labor practices in violation of federal labor law in an effort to quash workers’ organizing efforts,” the Teamsters said.

The picket line quickly expanded “to over 50 Amazon warehouses across 10 states,” the Teamsters said. Most recently, drivers in Skokie, Illinois, “launched their own unfair labor practice strike in June 2024,” right around the same time that “more than 5,500 members of the Amazon Labor Union in New York voted by an overwhelming 98.3 percent to affiliate with the Teamsters.”

In a blog post, the Teamsters said that Amazon “has avoided responsibility for its drivers through its DSP subcontractor business model” since 2018, but drivers hope that yesterday’s NLRB determination could put an end to the dodgy tactic.

“The NLRB’s joint employer determination shatters that myth” that “DSP drivers are not official employees of Amazon” and “makes clear that through its DSP business model, Amazon exercises widespread control over drivers’ labor and working conditions, making Amazon the drivers’ employer,” the Teamsters said.

The Teamsters said that they are “confident” that “the NLRB’s regional determination for the Palmdale workers will extend to Amazon DSP drivers who unionize nationwide.” One union member and Amazon driver, Brandi Diaz, celebrated what she considered to be the US government recognizing that the DSP program is a “sham.”

“We wear Amazon uniforms, we drive Amazon vans, and Amazon controls every minute of our day,” Diaz said. “Amazon can no longer have all the benefits of their own fleet of drivers without the responsibilities that come with it. The time has come for Amazon drivers across the country to organize with the Teamsters and demand what we deserve.”

Drivers are currently fighting to increase wages and improve driver safety amid what they claim are unchecked dangerous conditions they must navigate as Amazon drivers. Moreno said that the NLRB determination was a significant step toward unionizing more drivers and ending Amazon’s allegedly unfair labor practices nationwide.

“We have been on strike to stop Amazon’s lawbreaking and we are winning at the NLRB, while we are uniting Amazon workers across the country like never before,” Moreno said.

Gearbox founder says Epic Games Store hopes were “misplaced or overly optimistic”

Nice try —

Pitchford’s prediction that Steam could be “a dying store” has not come to pass.

Artist’s conception of Randy Pitchford surveying the Epic Games Store landscape years after Borderlands 3‘s exclusive launch there.

It’s been five years now since the PC version of Borderlands 3 launched as a high-profile timed exclusive on the Epic Games Store. At the time, Gearbox’s Randy Pitchford memorably mused that Steam “may look like a dying store” in “five or ten years” thanks to increased competition from Epic and others.

Fast-forward to this week’s announcement of Borderlands 4, and despite Pitchford’s old comments, the sequel will not follow its predecessor’s example of EGS exclusivity. The new game is set to launch on Steam and EGS simultaneously sometime in 2025 (alongside PS5 and Xbox Series X/S versions).

When one social media user noticed that change this week, Pitchford responded with another lengthy message explaining why his early hopes for the Epic Games Store’s rise to dominance were “misplaced or overly optimistic.”

In the short term, Pitchford said his high hopes for Epic’s effort were initially “validated” by the launches of Borderlands 3 and 2022 spin-off Tiny Tina’s Wonderlands (which was available on EGS for three months before its Steam release). “Borderlands 3 and Wonderlands demonstrated clearly that the customers show up for the games, not the storefront,” he said.

But Pitchford now says Epic didn’t “successfully press its advantage” to take a significant chunk of Steam’s dominant market power. “Famously, Steam does very little to earn the massive cut they take and continues its effective monopoly in the West while would-be competitors with much more developer friendly models continue to shoot themselves in the foot,” Pitchford said.

“The industry gives Steam their monopoly because publishers are afraid to take the risk to support more developer and publisher friendly stores,” he continued. “It’s all very interesting and there is a huge amount of opportunity in the PC gaming space for retail disruption, but no one seems to be able to make it happen.”

A limited success or an Epic failure?

Internal documents revealed in the Epic vs. Apple case in 2021 show that both Gearbox and Epic seemed to benefit from the Borderlands 3 exclusivity deal. Epic set a guaranteed sales floor of $80 million to help attract Borderlands 3 to the platform—if the game sold less on EGS, Epic would pay Gearbox the difference to reach that amount. But Gearbox’s game managed to hit that sales floor in just two weeks, bringing in more revenue on its own than the entirety of EGS had for the previous nine months while also attracting plenty of new EGS users.
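The guarantee mechanism described in those filings is a simple revenue floor with a top-up. A minimal sketch (the function name and the non-$80 million figures below are illustrative, not from the filings):

```python
def guarantee_topup(floor: float, store_revenue: float) -> float:
    """Amount the storefront owes the publisher under a minimum revenue
    guarantee: nothing if sales meet the floor, otherwise the shortfall."""
    return max(0.0, floor - store_revenue)

# Borderlands 3 reportedly cleared its $80 million floor in two weeks,
# so Epic's top-up would have been zero:
print(guarantee_topup(80e6, 100e6))  # 0.0

# Had the game sold only $50 million on EGS, Epic would have owed $30 million:
print(guarantee_topup(80e6, 50e6))  # 30000000.0
```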

  • Borderlands 3‘s exclusive launch was a huge revenue boost for the Epic Games Store.

    Epic vs. Apple court filing

  • Epic recouped its $80 million upfront revenue guarantee for Borderlands 3 within two weeks.

Not all of Epic’s attempts to secure exclusives were so successful, though. In 2019, Epic paid roughly $542 million in minimum guarantees for exclusive titles projected to earn just $336 million over their lifetimes. That $206 million difference amounted to throwing money at publishers in hopes that their exclusive games would help attract new users to EGS.

And that continuing effort hasn’t been a total failure for Epic; by the end of 2023, the company said there were 75 million active monthly users for its PC store, up from 68 million the year before. But that’s still relatively tiny compared to Steam, which had 132 million active monthly users back in 2021. While Valve hasn’t released monthly user numbers since then, Steam’s concurrent user peak has increased about 67 percent (per SteamDB tracking) since the end of 2021—from 21.17 million to 35.55 million. That suggests Steam’s current monthly user number could be well over 200 million.
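The extrapolation above is straightforward to make explicit. The numbers are the ones cited in this paragraph (132 million monthly users in 2021, SteamDB concurrent peaks of 21.17 million and 35.55 million), and the result is only a rough estimate, since concurrent peaks need not grow in lockstep with monthly users:

```python
# Scale Steam's last published monthly-active-user figure by the
# growth in concurrent-user peaks tracked by SteamDB.
mau_2021 = 132e6     # last official monthly active user count (2021)
peak_2021 = 21.17e6  # concurrent peak, end of 2021
peak_now = 35.55e6   # recent concurrent peak

growth = peak_now / peak_2021      # ~1.68, i.e. roughly 67 percent growth
estimated_mau = mau_2021 * growth  # ~222 million

print(f"{estimated_mau / 1e6:.0f} million")  # well over 200 million
```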

Things look worse for Epic when you compare the $950 million spent by EGS players in 2023 to the estimated $8.8 billion Steam players spent that same year.

To be fair, pushing a new PC storefront from a standing start to about 10 percent of Steam’s massive revenue in about five years is impressive. But that result still has to be disappointing for Epic, which projected in 2019 that EGS could represent 35 to 50 percent of the entire PC games market in 2024.

It’s an open question whether Epic’s limited success is a result of the company’s failure to “press its advantage,” as Pitchford opines, or just a sign that Steam’s massive entrenched network effects have proven more resilient than he expected. Regardless, Borderlands 4‘s Steam launch—following the lead of other former EGS exclusive publishers—doesn’t mean Pitchford has given up hope that a Steam-killer could still come down the pike.

“I sincerely hope Epic keeps up the fight and makes headway,” Pitchford said. “Epic is going to have to prioritize the store and try some new initiatives while also doubling down on earning pivotal exclusives if it is going to have a chance. I also hope other viable competitors arrive. I am sure we will all be watching.”

Astronomers think they’ve found a plausible explanation of the Wow! signal

“I’m not saying it’s aliens…” —

Magnetars could zap clouds of atomic hydrogen, producing focused microwave beams.

The Wow! signal, represented as “6EQUJ5,” was discovered in 1977 by astronomer Jerry Ehman.

Public domain

An unusually bright burst of radio waves—dubbed the Wow! signal—discovered in the 1970s has baffled astronomers ever since, given the tantalizing possibility that it just might be from an alien civilization trying to communicate with us. A team of astronomers think they might have a better explanation, according to a preprint posted to the physics arXiv: clouds of atomic hydrogen that essentially act like a naturally occurring galactic maser, emitting a beam of intense microwave radiation when zapped by a flare from a passing magnetar.

As previously reported, the Wow! signal was detected on August 18, 1977, by The Ohio State University Radio Observatory, known as “Big Ear.” Astronomy professor Jerry Ehman was analyzing Big Ear data in the form of printouts that, to the untrained eye, looked like someone had simply smashed the number row of a typewriter with a preference for lower digits. Numbers and letters in the Big Ear data indicated, essentially, the intensity of the electromagnetic signal picked up by the telescope over time, starting at ones and moving up to letters in the double digits (A was 10, B was 11, and so on). Most of the page was covered in ones and twos, with a stray six or seven sprinkled in.

But that day, Ehman found an anomaly: 6EQUJ5 (sometimes misinterpreted as a message encoded in the radio signal). This signal had started out at an intensity of six—already an outlier on the page—climbed to E, then Q, peaked at U—the highest power signal Big Ear had ever seen—then decreased again. Ehman circled the sequence in red pen and wrote “Wow!” next to it. The signal appeared to be coming from the direction of the Sagittarius constellation, and the entire signal lasted for about 72 seconds. Alas, SETI researchers have never been able to detect the so-called “Wow! Signal” again, despite many tries with radio telescopes around the world.
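The Big Ear intensity scale described above (digits for 1 through 9, letters continuing at A = 10) is easy to decode in a few lines; this is a sketch of that scheme applied to the famous sequence:

```python
def big_ear_intensity(ch: str) -> int:
    """Decode one character of a Big Ear printout: '1'-'9' map to
    themselves, and letters continue the scale (A=10, B=11, ...)."""
    return int(ch) if ch.isdigit() else ord(ch.upper()) - ord("A") + 10

# The Wow! sequence rises from 6, peaks at U (30), and falls back to 5:
print([big_ear_intensity(c) for c in "6EQUJ5"])  # [6, 14, 26, 30, 19, 5]
```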

One reason for the excited reaction is that such a signal had been proposed as a possible communication from extraterrestrial civilizations in a 1959 paper by Cornell University physicists Philip Morrison and Giuseppe Cocconi. Morrison and Cocconi thought that such a civilization might use the 1420 megahertz frequency naturally emitted by hydrogen, the universe’s most abundant element and, therefore, something an alien civilization would be familiar with. In fact, the Big Ear had been reassigned to the SETI project in 1973 specifically to hunt for possible signals. Ehman himself was quite skeptical of the “it could be aliens” hypothesis for several decades, although he admitted in a 2019 interview that “the Wow! signal certainly has the potential of being the first signal from extraterrestrial intelligence.”

Several other alternative hypotheses have been suggested. For instance, Antonio Paris suggested in 2016 that the signal may have come from the hydrogen cloud surrounding a pair of comets, 266P/Christensen and 335P/Gibbs. This was rejected by most astronomers, however, in part because comets don’t emit strongly at the relevant frequencies. Others have suggested the signal was the result of interference from satellites orbiting the Earth, or a signal from Earth reflected off a piece of space debris.

Space maser!

Astrobiologist Abel Mendez of the University of Puerto Rico at Arecibo and his co-authors think they have the strongest astrophysical explanation to date with their cosmic maser hypothesis. The team was actually hunting for habitable exoplanets using signals from red dwarf stars. In some of the last archival data collected at the Arecibo radio telescope (which collapsed in 2020), they noticed several signals that were remarkably similar to the Wow! signal in frequency—just far less intense.

Mendez admitted to Science News that he had always viewed the Wow! signal as just a fluke—he certainly didn’t think it was aliens. But he realized that if the signals they were identifying had blazed brighter, even momentarily, they would be very much like the Wow! signal. As for the mechanism that caused such a brightening, Mendez et al. propose that a magnetar (a highly magnetic neutron star) passing behind a cloud of atomic hydrogen could have flared up with sufficient energy to produce stimulated emission in the form of a tightly focused beam of microwave radiation—a cosmic maser. (Masers are akin to lasers, except they emit microwave radiation rather than visible radiation.)

Proving their working hypothesis will be much more challenging, although there have been rare sightings of such naturally occurring masers from hydrogen molecules in space. But nobody has ever spotted an atomic hydrogen cloud with an associated maser, and that’s what would be needed to explain the intensity of the Wow! signal. That’s why other astronomers are opting for cautious skepticism. “A magnetar is going to produce [short] radio emissions as well. Do you really need this complicated maser stuff happening as well to explain the Wow! signal?” Michael Garrett of the University of Manchester told New Scientist. “Personally, I don’t think so. It just makes a complicated story even more complicated.”

arXiv, 2024. DOI: 10.48550/arXiv.2408.08513  (About DOIs).

Civilization VII hands-on: This strategy sequel rethinks the long game

One More Turn —

Classic turn-based gameplay meets a radical rethink of the overall structure.

Firaxis has upped the ante on presentation for the cities. It’s still a bit abstract and removed, but they have more vibrancy, detail, and movement than before.

2K Games

2K Games provided a flight from Chicago to Baltimore and accommodation for two nights so that Ars could participate in the preview opportunity for Civilization VII. Ars does not accept paid editorial content.

From squares to hexes, from tech trees to civic trees, over its more than 30 years across seven mainline entries, the Civilization franchise continues to evolve.

Firaxis, the studio that has developed the Civilization games for many years, has a mantra when making a sequel: 33 percent of the game stays the same, 33 percent gets updated, and 33 percent is brand new.

Recently, I had the opportunity to play Civilization VII, the next entry, which is due to launch in February 2025. The build I played was an early alpha build, but the bones of the game it will become were there, and it’s interesting to see which third Firaxis kept the same and which third it has reimagined.

It turns out that the core of the game that its developers won’t much want to change is the turn-to-turn experience. But in the case of Civilization VII, all bets are off when it comes to the overall arc of a long journey, from sticks and stones to space travel.

Rethinking the structure of a Civilization game

Most of the time, playing Civilization VII feels a lot like playing Civilization VI—but there’s one big change spanning the whole game that seems to be this sequel’s tentpole feature.

That’s the new Ages system. The long game is now broken into three segments: Antiquity, Exploration, and Modern. Each Age has some unique systems and gameplay, though most systems span all three.

Within each age, you’re given a handful of “Legacy Paths” to choose from. These map closely to the franchise’s long-standing victory conditions: Science, Economic, Cultural, and Military. The idea is that you pick the Legacy Path you want to pursue, and each Legacy Path has different success conditions that change across each of the three Ages.

These conditions are big and broad, and Firaxis thankfully hasn’t gotten too jazzy with them. For example, I played in the Age of Antiquity and pursued the Cultural path, so my goal was to build a certain number of Wonders before the end of the Age.

In some ways, this is similar to the boom-and-bust cycle of Dark and Golden Ages in Civilization VI, but I found it much more natural in VII. In VI, I often found myself making arbitrary-seeming choices I didn’t think made sense for my long-term strategy just to game the system and get the Age transition I wanted. In this new game, the Legacy Path objectives are likely to always be completely in line with the overall victory strategy you’re pursuing.

One of the advantages of this new structure is support for shorter games that aren’t just hyper-compressed versions of a larger game. Previously, the only way to play a game of Civilization that wasn’t a dozen or more hours long was to pick one of the faster game speeds, but that fundamentally changed how the game felt to play.

This is a Roman city, but you could have a non-Roman historical leader, like Egypt’s Hatshepsut, at the helm.

2K Games

Now, Civilization VII gives you the ability to play a match that’s just one Age, if you choose to.

The new Ages system is integrated with another big change: your leader and civilization are no longer tied together when you start a new game, and they’re not set in stone, either.

Now you pick both a civilization and a leader separately at the start—and you can do some weird, ahistorical combinations, like Greece’s Alexander as the leader of China. Each leader and civilization offers specific bonuses, so this gives more customization of your playstyle at the start.

It doesn’t end there, though. At the end of each Age, you can essentially change civilizations (though as far as I could tell, you stick with the leader). Firaxis says it took inspiration for this feature from history—like the fact that London was a Roman city before it became an English one in the Medieval era.

Which civilization you can transition to is dictated by what you did within the Legacy Path system, among other things.

The amount of time I had to play the game was just enough to almost finish the Antiquity Age, so I didn’t get to see this in action, but it sounds like an interesting new system.

Nvidia is ditching dedicated G-Sync modules to push back against FreeSync’s ubiquity

sync or swim —

But G-Sync will still require specific G-Sync-capable MediaTek scaler chips.

Back in 2013, Nvidia introduced a new technology called G-Sync to eliminate screen tearing and stuttering effects and reduce input lag when playing PC games. The company accomplished this by tying your display’s refresh rate to the actual frame rate of the game you were playing, and similar variable refresh-rate (VRR) technology has become a mainstay even in budget monitors and TVs today.

The issue for Nvidia is that G-Sync isn’t what has been driving most of that adoption. G-Sync has always required extra dedicated hardware inside of displays, increasing the costs for both users and monitor manufacturers. The VRR technology in most low-end to mid-range screens these days is usually some version of the royalty-free AMD FreeSync or the similar VESA Adaptive-Sync standard, both of which provide G-Sync’s most important features without requiring extra hardware. Nvidia more or less acknowledged that the free-to-use, cheap-to-implement VRR technologies had won in 2019 when it announced its “G-Sync Compatible” certification tier for FreeSync monitors. The list of G-Sync Compatible screens now vastly outnumbers the list of G-Sync and G-Sync Ultimate screens.

Today, Nvidia is announcing a change that’s meant to keep G-Sync alive as its own separate technology while eliminating the requirement for expensive additional hardware. Nvidia says it’s partnering with chipmaker MediaTek to build G-Sync capabilities directly into scaler chips that MediaTek is creating for upcoming monitors. G-Sync modules ordinarily replace these scaler chips, but they’re entirely separate boards with expensive FPGA chips and dedicated RAM.

These new MediaTek scalers will support all the same features that current dedicated G-Sync modules do. Nvidia says that three G-Sync monitors with MediaTek scaler chips inside will launch “later this year”: the Asus ROG Swift PG27AQNR, the Acer Predator XB273U F5, and the AOC AGON PRO AG276QSG2. These are all 27-inch 1440p displays with maximum refresh rates of 360 Hz.

As of this writing, none of these companies has announced pricing for these displays. The current Asus PG27AQN, which has a traditional G-Sync module and a 360 Hz refresh rate, currently goes for around $800, so we’d hope the new versions are significantly cheaper—and that monitor makers pass the savings on to consumers—to make good on Nvidia’s claim that the MediaTek chips will reduce costs.

For most people most of the time, there won’t be an appreciable difference between a “true” G-Sync monitor and one that uses FreeSync or Adaptive-Sync, but there are still a few fringe benefits. G-Sync monitors support refresh rates from 1 Hz all the way up to the monitor’s maximum, whereas FreeSync and Adaptive-Sync stop working on most displays when the frame rate drops below 40 or 48 frames per second. All G-Sync monitors also support “variable overdrive” technology to help eliminate display ghosting, and the new MediaTek-powered displays will support the recent “G-Sync Pulsar” feature to reduce blur.
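The operating-range difference can be illustrated with a toy check. The 1 Hz and 40/48 fps floors are the ones cited above, but real monitors vary, and this is a simplification rather than how any driver actually implements VRR:

```python
def vrr_engaged(fps: float, max_hz: float, has_gsync_module: bool,
                freesync_floor: float = 48.0) -> bool:
    """Rough sketch: is variable refresh active at this frame rate?

    G-Sync hardware tracks frame rates down to 1 fps, while many
    FreeSync/Adaptive-Sync panels stop syncing below a floor
    (commonly 40 or 48 fps)."""
    floor = 1.0 if has_gsync_module else freesync_floor
    return floor <= fps <= max_hz

# A demanding scene dipping to 30 fps on a 360 Hz panel:
print(vrr_engaged(30, 360, has_gsync_module=True))   # True
print(vrr_engaged(30, 360, has_gsync_module=False))  # False
```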

Against all odds, an asteroid mining company appears to be making headway

Forging ahead —

“It’s not easy to ever raise for an asteroid mining company, right?”

The Odin spacecraft passed vibration testing.

Astro Forge

When I first spoke with space entrepreneurs Jose Acain and Matt Gialich a little more than two years ago, I wondered whether I would ever talk to them again.

That is not meant to be offensive; rather, it is a reflection of the fact that the business they entered into—mining asteroids for platinum and other precious metals—is a perilous one. To date, NASA and other space agencies have spent billions of dollars returning a few grams of rocky material from asteroids. Humanity has never visited a metal-rich asteroid, although that will finally change with NASA’s $1.4 billion Psyche mission in 2029. And so commercial asteroid mining seems like a stretch, and indeed, other similarly minded startups have come and gone.

But it turns out that I did hear from Acain and Gialich again about their asteroid mining venture, AstroForge. On Tuesday the co-founders announced that they have successfully raised $40 million in Series A funding and shared plans for their next two missions. AstroForge has now raised a total of $55 million to date.

“It was challenging,” Gialich said of the latest fundraising effort, in an interview. “It’s not easy to ever raise for an asteroid mining company, right? Let’s be honest. We talked two years ago and you told us this. And you were not wrong. So a big part of this funding round was just showing people that we can actually build a spacecraft.”

Making some mistakes

In April 2023, the company launched a shoebox-sized cubesat, named the Brokkr-1 mission, on a SpaceX Transporter flight. Although the vehicle flew as intended for a while, AstroForge was unable to send the necessary commands to the spacecraft to initiate a demonstration of its space-based refining technology.

However, Gialich said AstroForge learned a lot from this mission and is working toward launching a second spacecraft named Odin. This will be a rideshare payload on the Intuitive Machines-2 mission, which is due to launch during the fourth quarter of this year. If successful, the Odin mission would be spectacular. About seven months after launching, Odin will attempt to fly by a near-Earth, metallic-rich asteroid while capturing images and taking data—truly visiting terra incognita. Odin would also be the first private mission to fly by a body in the Solar System beyond the Moon.

It has not been an easy project to develop. In the name of expediency, AstroForge initially sought to develop this spacecraft by largely outsourcing key components from suppliers—a practice known as horizontal integration. However, in March, the Odin spacecraft failed vibration testing. “Originally, our concept was to be different than SpaceX, and be horizontally integrated, not vertical,” Gialich said. “That was completely wrong. We have very much made changes there to be vertical.”

After the original vehicle failed vibration testing, which ensures it can survive the rigors of launch, AstroForge decided to bring forward a spacecraft being developed internally for the company’s third flight and use that for the Odin mission. To remain on track for a launch this year, the company had to complete vibration testing of the new, 100-kg Odin vehicle by August 1. AstroForge made that deadline but still must complete several other tests before shipping Odin to the launch pad.

Docking with an asteroid

On Tuesday, the company also announced plans for its third mission, Vestri (the company is naming its missions after Norse deities). This spacecraft will be about twice as large as Odin and is intended to return to the targeted metallic asteroid and dock with it. The docking mechanism is simple—since the asteroid is likely to be iron-rich, Vestri will use magnets to attach itself.

The plan is to use a mass spectrometer to sample and characterize the asteroid weekly until the spacecraft fails. AstroForge seeks to launch Vestri on another Intuitive Machines mission in 2025. Vestri’s goals are highly ambitious, as no private spacecraft has ever landed on a body beyond the Moon.

AstroForge is tracking several candidate asteroids as the target body for Odin and Vestri, Gialich said, each of which is about 400 meters across. He won’t make a final decision for several months. The company does not want to tip its hand due to the interest of potential competitors, including China-based Origin Space.

However, there is no shortage of potential targets. Scientists estimate that there are about 10 million near-Earth asteroids, which come within one astronomical unit (the distance between the Sun and Earth) of our planet. Perhaps 3 to 5 percent of these are rich in metals, so there are potentially hundreds of thousands of candidates for mining.
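That back-of-the-envelope estimate works out as follows (the 10 million and 3 to 5 percent figures are the ones cited above):

```python
# ~10 million near-Earth asteroids, of which perhaps 3-5 percent
# are metal-rich.
near_earth_asteroids = 10_000_000
low = near_earth_asteroids * 3 // 100
high = near_earth_asteroids * 5 // 100
print(f"{low:,} to {high:,} metal-rich candidates")  # 300,000 to 500,000
```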

Against all odds, an asteroid mining company appears to be making headway Read More »

amd-signs-$4.9-billion-deal-to-challenge-nvidia’s-ai-infrastructure-lead

AMD signs $4.9 billion deal to challenge Nvidia’s AI infrastructure lead

chip wars —

Company hopes acquisition of ZT Systems will accelerate adoption of its data center chips.

Visitors walk past the AMD booth at the 2024 Mobile World Congress

AMD has agreed to buy artificial intelligence infrastructure group ZT Systems in a $4.9 billion cash and stock transaction, extending a run of AI investments by the chip company as it seeks to challenge market leader Nvidia.

The California-based group said the acquisition would help accelerate the adoption of its Instinct line of AI data center chips, which compete with Nvidia’s popular graphics processing units (GPUs).

ZT Systems, a private company founded three decades ago, builds custom computing infrastructure for the biggest AI “hyperscalers.” While the company does not disclose its customers, the hyperscalers include the likes of Microsoft, Meta, and Amazon.

The deal marks AMD’s biggest acquisition since it bought Xilinx for $35 billion in 2022.

“It brings a thousand world-class design engineers into our team, it allows us to develop silicon and systems in parallel and, most importantly, get the newest AI infrastructure up and running in data centers as fast as possible,” AMD’s chief executive Lisa Su told the Financial Times.

“It really helps us deploy our technology much faster because this is what our customers are telling us [they need],” Su added.

The transaction is expected to close in the first half of 2025, subject to regulatory approval, after which New Jersey-based ZT Systems will be folded into AMD’s data center business group. The $4.9 billion valuation includes up to $400 million contingent on “certain post-closing milestones.”

Citi and Latham & Watkins are advising AMD, while ZT Systems has retained Goldman Sachs and Paul, Weiss.

The move comes as AMD seeks to break Nvidia’s stranglehold on the AI data center chip market, which earlier this year saw Nvidia temporarily become the world’s most valuable company as big tech companies pour billions of dollars into its chips to train and deploy powerful new AI models.

Part of Nvidia’s success stems from its “systems” approach to the AI chip market, offering end-to-end computing infrastructure that includes pre-packaged server racks, networking equipment, and software tools to make it easier for developers to build AI applications on its chips.

AMD’s acquisition shows the chipmaker building out its own “systems” offering. The company rolled out its MI300 line of AI chips last year, and says it will launch its next-generation MI350 chip in 2025 to compete with Nvidia’s new Blackwell line of GPUs.

In May, Microsoft was one of the first AI hyperscalers to adopt the MI300, building it into its Azure cloud platform to run AI models such as OpenAI’s GPT-4. AMD’s quarterly revenue for the chips surpassed $1 billion for the first time in the three months to June 30.

But while AMD has feted the MI300 as its fastest-ever product ramp, its data center revenue still represented a fraction of the $22.6 billion that Nvidia’s data center business raked in for the quarter to the end of April.

In March, ZT Systems announced a partnership with Nvidia to build custom AI infrastructure using its Blackwell chips. “I think we certainly believe ZT as part of AMD will significantly accelerate the adoption of AMD AI solutions,” Su said, but “we have customer commitments and we are certainly going to honour those”.

Su added that she expected regulators’ review of the deal to focus on the US and Europe.

In addition to increasing its research and development spending, AMD says it has invested more than $1 billion over the past year to expand its AI hardware and software ecosystem.

In July the company announced it was acquiring Finnish AI start-up Silo AI for $665 million, the largest acquisition of a privately held AI startup in Europe in a decade.

© 2024 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.

AMD signs $4.9 billion deal to challenge Nvidia’s AI infrastructure lead Read More »

rocket-lab-entered-“hero-mode”-to-finish-mars-probes—now-it’s-up-to-blue-origin

Rocket Lab entered “hero mode” to finish Mars probes—now it’s up to Blue Origin

The two spacecraft for NASA's ESCAPADE mission at Rocket Lab's factory in Long Beach, California.

Enlarge / The two spacecraft for NASA’s ESCAPADE mission at Rocket Lab’s factory in Long Beach, California.

Two NASA spacecraft built by Rocket Lab are on the road from California to Florida this weekend to begin preparations for launch on Blue Origin’s first New Glenn rocket.

These two science probes must launch between late September and mid-October to take advantage of a planetary alignment between Earth and Mars that only happens once every 26 months. NASA tapped Blue Origin, Jeff Bezos’ space company, to launch the Escape and Plasma Acceleration and Dynamics Explorers (ESCAPADE) mission with a $20 million contract.
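That once-every-26-months window is the Earth–Mars synodic period, the interval between successive alignments of the two planets. A quick sketch, using textbook orbital periods rather than figures from the article:

```python
# Earth-Mars synodic period: 1/P_syn = 1/P_earth - 1/P_mars.
# Constants are standard orbital periods, not values from the article.
P_EARTH = 365.25   # days per Earth orbit
P_MARS = 686.98    # days per Mars orbit

synodic_days = 1 / (1 / P_EARTH - 1 / P_MARS)
synodic_months = synodic_days / 30.44  # average month length in days
print(f"window repeats every {synodic_days:.0f} days "
      f"(about {synodic_months:.1f} months)")
```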

Last November, the space agency confirmed the $79 million ESCAPADE mission will launch on the inaugural flight of Blue Origin’s New Glenn rocket. With this piece of information, the opaque schedule for Blue Origin’s long-delayed first New Glenn mission suddenly became more clear.

The launch period opens on September 29. The two identical Mars-bound spacecraft for the ESCAPADE mission, nicknamed Blue and Gold, are now complete. Rocket Lab announced Friday that its manufacturing team packed the satellites and shipped them from their factory in Long Beach, California. Over the weekend, they arrived at a clean room facility just outside the gates of NASA’s Kennedy Space Center in Florida, where technicians will perform final checkups and load hydrazine fuel into both spacecraft, each a little more than a half-ton in mass.

Then, if Blue Origin is ready, ground teams will connect the ESCAPADE spacecraft with the New Glenn’s launch adapter, encapsulate the probes inside the payload fairing, and mount them on top of the rocket.

“There’s a whole bunch of checking and tests to make sure everything’s OK, and then we move into fueling, and then we integrate with the launch vehicle. So it’s a big milestone,” said Rob Lillis, the mission’s lead scientist from the University of California Berkeley’s Space Science Laboratory. “There have been some challenges along the way. This wasn’t easy to make happen on this schedule and for this cost. So we’re very happy to be where we are.”

Racing to the finish line

But there’s a lot for Blue Origin to accomplish in the next couple of months if the New Glenn rocket is going to be ready to send the ESCAPADE mission toward Mars in this year’s launch period. Blue Origin has not fully exercised a New Glenn rocket during a launch countdown, hasn’t pumped a full load of cryogenic propellants into the launch vehicle, and hasn’t test-fired a full complement of first stage or second stage engines.

These activities typically take place months before the first launch of a large new orbital-class rocket. For comparison, SpaceX test-fired its first fully assembled Falcon 9 rocket on the launch pad about three months before its first flight in 2010. United Launch Alliance completed a hot-fire test of its new Vulcan rocket on the launch pad last year, about seven months before its inaugural flight.

However, Blue Origin is making visible progress toward the first flight of New Glenn, after years of speculation and few outward signs of advancement. Earlier this year, the company raised a full-scale, 320-foot-tall (98-meter) New Glenn rocket on its launch pad at Cape Canaveral Space Force Station and loaded it with liquid nitrogen, a cryogenic substitute for the methane and liquid hydrogen fuel it will burn in flight.

Rocket Lab entered “hero mode” to finish Mars probes—now it’s up to Blue Origin Read More »

google-denies-reports-that-it’s-discontinuing-fitbit-products

Google denies reports that it’s discontinuing Fitbit products

Fitbit lives on … for now —

Claims that there will be no new Versas or Senses are incorrect, rep says.

The Fitbit Sense 2.

Enlarge / The Fitbit Sense 2.

Google

Google is denying a recent report that it is no longer making Fitbit smartwatches. A company spokesperson told Ars Technica today that Google has no current plans to discontinue the Fitbit Sense or Fitbit Versa product lines.

On Sunday, TechRadar published an article titled “RIP Fitbit smartwatches—an end we could see coming a mile away.” The article noted last week’s announcement of the new Google Pixel Watch 3. Notably, the watch from Google, which acquired Fitbit in 2019, gives users free access to the Daily Readiness Score, a feature that previously required a Fitbit Premium subscription (Pixel Watch 3 owners also get six free months of Fitbit Premium). The publication said that Fitbit has been “consigned to wearable history” and reported:

Google quietly confirmed that there would never be another Fitbit Sense or Versa model produced. From now on, Fitbit-branded devices will be relegated to Google’s best fitness trackers: the Fitbit Inspire, Luxe, and Charge ranges. The smartwatch form factor would be exclusively reserved for the Pixel Watch line.

The story followed a report from Engadget last week, in which the publication said that “moving forward everything from Fitbit would focus on the more minimalistic, long-lasting trackers the brand has become synonymous with,” citing a conversation with the senior director of product management for Pixel Wearables, Sandeep Waraich. “Pixel Watches are our next iteration of smartwatch for Fitbit,” he reportedly said.

When reached for comment, however, a Google spokesperson told me that the TechRadar story is “not correct” and shared the following statement:

We are very committed to Fitbit, and even more importantly to the customers that use and depend on those products and technology. It’s also worth noting that many of the health and fitness features we launched in Pixel Watch 3 were because of Fitbit’s innovation and ground-breaking fitness advancements. In addition, we just launched Fitbit Ace LTE, [a smartwatch for kids released on June 5], and you’ll continue to see new products and innovation from Fitbit.

While the company rep told me that they could not confirm a specific upcoming Sense or Versa model or any other specifics about Google’s product road map, they claimed that Google hasn’t discontinued the lines.

Fitbit fears

TechRadar’s concerns about Fitbit smartwatches dying also stem from the Sense 2 and Versa 4 lacking some features of their predecessors, including ways to control music or access music apps. The Pixel Watch, meanwhile, supports music apps like YouTube, Spotify, and Pandora. “Once Google completed its acquisition in January 2021 and debuted its first Pixel Watch in 2022, the Versa and the Sense watches were holdovers of a bygone era,” TechRadar wrote.

Google also has more than its fair share of dead products, prompting Fitbit fans to be wary about the future of the smartwatch brand.

However, Google’s spokesperson noted that “part of everything that we just launched from Pixel Watch is based on Fitbit technology, so it is not going anywhere.”

While Fitbit tech and perhaps its name may live on, it’s reasonable to question the brand’s longevity. Concerns about Google discontinuing Fitbit smartwatches have been fueled by Google taking Fitbit features and incorporating them into Google-branded watches. Google has also discontinued various beloved Fitbit features, including the Fitbit.com online dashboard, social features, and the ability to sync with computers. Google also previously announced that it’s closing all Fitbit accounts (forcing users onto Google accounts) next year and also shut down the Fitbit SDK for app development. Google’s Fitbit reputation has been further damaged by widely reported battery problems that some Charge 5 users have experienced. Google denied that the quick-dying battery issue stemmed from a firmware update but never publicly confirmed what it believes the problem is. This Google-fication of Fitbit has led long-term customers to publicly complain about Google allegedly reducing customer support and care for Fitbit users.

At this time, Google isn’t announcing the end of any Fitbit product lines. But it remains possible that if future devices arrive, they may lack the features of previous Fitbits or Pixel Watches. The Fitbit brand isn’t dead, but Fitbit, as people knew it before Google’s acquisition, is no more.

This article was updated with information from Engadget. 

Google denies reports that it’s discontinuing Fitbit products Read More »

$50-2gb-raspberry-pi-5-comes-with-a-lower-price-and-a-tweaked,-cheaper-cpu

$50 2GB Raspberry Pi 5 comes with a lower price and a tweaked, cheaper CPU

cheaper pi —

Despite changes, 2GB Pi 5 is “functionally identical” to other iterations.

The 8GB Raspberry Pi 5 with the official fan and heatsink installed.

Enlarge / The 8GB Raspberry Pi 5 with the official fan and heatsink installed.

Andrew Cunningham

We’re many months past the worst of the Raspberry Pi shortages, and the board is finally widely available at its suggested retail price at most sites without wait times or quantity limitations. One sign that the Pi Foundation is feeling more confident about the stock situation: the launch of a new 2GB configuration of the Raspberry Pi 5, available starting today for $50. That’s $10 less than the 4GB configuration and $30 less than the 8GB version of the board.

Raspberry Pi CEO Eben Upton writes that the 2GB version of the board includes a revised version of the Broadcom BCM2712C1 SoC that is slightly cheaper to manufacture. Upton says that the D0 stepping of the BCM2712C1 strips out some “dark silicon” built-in functionality that the Pi wasn’t using but was still taking up space on the silicon die and increasing the cost of the chip.

“From the perspective of a Raspberry Pi user, [the chip] is functionally identical to its predecessor: the same fast quad-core processor; the same multimedia capabilities; and the same PCI Express bus that has proven to be one of the most exciting features of the Raspberry Pi 5 platform,” Upton writes. “However, it is cheaper to make, and so is available to us at somewhat lower cost. And this, combined with the savings from halving the memory capacity, has allowed us to take $10 out of the cost of the finished product.”

At $50, the price tag is still north of the baseline $35 price that the Pi started at for many years. The Pi 4 had a 1GB model for $35 when it launched, and there was a $35 2GB model available for a while in 2020, but widespread shortages and supply chain issues led to a “temporary” price increase in late 2021 that is, as of this writing, still in place. At least the 2GB Pi 5 is only $5 more expensive than the 2GB version of the Pi 4, which is still in stock for $45 at many retailers.

Though you’ll want a fully fledged 8GB Raspberry Pi if you want to try using one as an everyday desktop PC, there are plenty of Pi use cases that will benefit from its additional speed and connectivity options without needing more RAM. Retro emulation boxes aren’t necessarily RAM-hungry but can benefit from the Pi 5’s extra CPU and GPU speed, and many types of lightweight server apps (Wireguard, Homebridge, Pi-hole, to name a few) can benefit from the faster Wi-Fi and Ethernet and improved support for more reliable NVMe storage.

All that said, for just $10 more, we’d still probably point most people to the more flexible and future-proof 4GB version. The Pi boards sitting around my house have all lived multiple lives at this point, picking up new tasks as my needs have changed, and new Pi boards have come out—if your Pi project today won’t benefit from more RAM, it’s possible that tomorrow’s Pi project will.

The 2GB Pi 5 is available for order from outlets like PiShop and CanaKit and should filter out to other Pi retailers soon.

$50 2GB Raspberry Pi 5 comes with a lower price and a tweaked, cheaper CPU Read More »

new-windows-11-build-removes-ancient,-arbitrary-32gb-size-limit-for-fat32-disks

New Windows 11 build removes ancient, arbitrary 32GB size limit for FAT32 disks

getting fat —

But the Windows NT-era disk formatting UI hasn’t been fixed yet.

If you've formatted a disk in Windows in the last 30 years, you may have come across this dialog box.

Enlarge / If you’ve formatted a disk in Windows in the last 30 years, you may have come across this dialog box.

Andrew Cunningham

As we wait for this fall’s Windows 11 24H2 update to be released to the general public, work continues on other new features that could be part of other future Windows updates. A new Canary channel Windows Insider build released yesterday fixes a decades-old and arbitrary limitation that restricted new FAT32 partitions to 32GB in size, even though the filesystem itself has a maximum supported size of 2TB (and Windows can read and recognize 2TB FAT32 partitions without an issue).

For now, this limit is only being lifted for the command-line formatting tools in Windows. The disk formatting UI, which looks more or less the same now as it did when it was introduced in Windows NT 4.0 almost 30 years ago, still has the arbitrary 32GB capacity restriction.

The 32GB limit can allegedly be pinned on former Microsoft programmer Dave Plummer, who occasionally shares stories about his time working on Windows in the 1990s and early 2000s. Plummer says that he wrote the disk format dialog, intending it as a “temporary” solution, and arbitrarily chose 32GB as a size limit for disks, likely because it seemed big enough at the time (Windows NT 4.0 required a whopping 110MB of disk space).

There aren’t a ton of reasons to actually use a FAT32 disk in 2024, and it’s been replaced by other filesystems for just about everything. As a filesystem for your main OS drive, it was replaced by NTFS decades ago; as a widely compatible filesystem for external drives that can be read from and written to by many operating systems, you’d probably want to use exFAT instead. FAT32 still has a 4GB limit on the size of individual files.
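Both headline FAT32 limits fall out of 32-bit fields in the on-disk format. A simplified sketch of the arithmetic (actual volume limits also depend on cluster size and implementation choices):

```python
# Where FAT32's size limits come from, to a first approximation.
SECTOR = 512                   # bytes; the classic sector size
max_volume = 2**32 * SECTOR    # 32-bit sector count -> 2 TiB volumes
max_file = 2**32 - 1           # 32-bit file-size field -> just under 4 GiB

print(f"max volume: {max_volume / 2**40:.0f} TiB")
print(f"max file:   {max_file / 2**30:.2f} GiB")
```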

But if you’re formatting a disk to use with an old version of Windows, or with some older device that can only work with FAT32 disks, this tweak could make Windows a tiny bit more useful for you.

Listing image by Alpha Six

New Windows 11 build removes ancient, arbitrary 32GB size limit for FAT32 disks Read More »

isp-to-supreme-court:-we-shouldn’t-have-to-disconnect-users-accused-of-piracy

ISP to Supreme Court: We shouldn’t have to disconnect users accused of piracy

A pair of scissors cutting an Ethernet cable.

A large Internet service provider wants the Supreme Court to rule that ISPs shouldn’t have to disconnect broadband users who have been accused of piracy. Cable firm Cox Communications, which is trying to overturn a ruling in a copyright infringement lawsuit brought by Sony, petitioned the Supreme Court to take up the case yesterday.

Cox said in a press release that a recent appeals court ruling “would force ISPs to terminate Internet service to households or businesses based on unproven allegations of infringing activity, and put them in a position of having to police their networks—contrary to customer expectations… Terminating Internet service would not just impact the individual accused of unlawfully downloading content, it would kick an entire household off the Internet.”

The case began in 2018 when Sony and other music copyright holders sued Cox, claiming that it didn’t adequately fight piracy on its network and failed to terminate repeat infringers. A US District Court jury in the Eastern District of Virginia ruled in December 2019 that Cox must pay $1 billion in damages to the major record labels.

Digital rights groups such as the Electronic Frontier Foundation (EFF) objected to the ruling, saying it “would result in innocent and vulnerable users losing essential Internet access.” The case went to the US Court of Appeals for the 4th Circuit, which vacated the $1 billion damages award in February 2024 but upheld one of the major copyright infringement verdicts.

Specifically, the appeals court affirmed the jury’s finding that Cox was guilty of willful contributory infringement and reversed a verdict on vicarious infringement. The vicarious liability verdict was scrapped “because Cox did not profit from its subscribers’ acts of infringement.”

Cox wants ruling on contributory infringement

On the contributory infringement charge, appeals court judges indicated that their hands were tied in part by Cox’s failure to make a key argument to the District Court. Proving “contributory infringement by an Internet service provider based on its subscribers’ direct infringement” can be achieved by showing “willful blindness,” the court said.

“Cox did not argue to the district court, as it does now on appeal, that notices of past infringement failed to establish its knowledge that the same subscriber was substantially certain to infringe again… Because Cox did not press this argument in the district court, it is forfeited for appeal,” the appeals court said. In District Court, Cox argued that copyright infringement notices sent to the ISP were too vague.

The Supreme Court held in MGM v. Grokster, in 2005, that “One who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, going beyond mere distribution with knowledge of third-party action, is liable for the resulting acts of infringement by third parties using the device, regardless of the device’s lawful uses.”

In its Supreme Court petition yesterday, Cox said that the circuit courts of appeals “have split three ways over the scope of that ruling, developing differing standards for when it is appropriate to hold an online service provider secondarily liable for copyright infringement committed by users.”

Cox asked justices to decide whether the 4th Circuit “err[ed] in holding that a service provider can be held liable for ‘materially contributing’ to copyright infringement merely because it knew that people were using certain accounts to infringe and did not terminate access, without proof that the service provider affirmatively fostered infringement or otherwise intended to promote it.”

The case raises one other major question, Cox told SCOTUS:

Generally, a defendant cannot be held liable as a willful violator of the law—and subject to increased penalties—without proof that it knew or recklessly disregarded a high risk that its own conduct was illegal. In conflict with the Eighth Circuit, the Fourth Circuit upheld an instruction allowing the jury to find willfulness if Cox knew its subscribers’ conduct was illegal—without proof Cox knew its own conduct in not terminating them was illegal.

Justices should rule on whether the 4th Circuit “err[ed] in holding that mere knowledge of another’s direct infringement suffices to find willfulness,” Cox said.

ISP to Supreme Court: We shouldn’t have to disconnect users accused of piracy Read More »