Author name: Kelly Newman

AI and ML enter motorsports: How GM is using them to win more races

not LLM or generative AI —

From modeling tire wear and fuel use to predicting cautions based on radio traffic.

SAO PAULO, BRAZIL - JULY 13: The #02 Cadillac Racing Cadillac V-Series.R of Earl Bamber, and Alex Lynn in action ahead of the Six Hours of Sao Paulo at the Autodromo de Interlagos on July 13, 2024 in Sao Paulo, Brazil.

The Cadillac V-Series.R is one of General Motors’ factory-backed racing programs.

James Moy Photography/Getty Images

It is hard to escape the feeling that a few too many businesses are jumping on the AI hype train because it’s hype-y, rather than because AI offers an underlying benefit to their operation. So I will admit to a little inherent skepticism, and perhaps a touch of morbid curiosity, when General Motors got in touch wanting to show off some of the new AI/machine learning tools it has been using to win more races in NASCAR, sportscar racing, and IndyCar. As it turns out, that skepticism was misplaced.

GM has fingers in a lot of motorsport pies, but there are four top-level programs it really, really cares about. Number one for an American automaker is NASCAR—still the king of motorsport here—where Chevrolet supplies engines to six Cup teams. IndyCar, which could once boast of being America’s favorite form of racing, is home to another six Chevy-powered teams. And then there’s sportscar racing: right now, Cadillac is competing in IMSA’s GTP class and the World Endurance Championship’s Hypercar class, and Corvette Racing has a factory effort in IMSA.

“In all the series we race we either have key partners or specific teams that run our cars. And part of the technical support that they get from us are the capabilities of my team,” said Jonathan Bolenbaugh, motorsports analytics leader at GM, based at GM’s Charlotte Technical Center in North Carolina.

Unlike generative AI that’s being developed to displace humans from creative activities, GM sees the role of AI and ML as supporting human subject-matter experts so they can make the cars go faster. And it’s using these tools in a variety of applications.

One of GM's command centers at its Charlotte Technical Center in North Carolina.

General Motors

Each team in each of those various series (obviously) has people on the ground at each race, and invariably more engineers and strategists helping them from Indianapolis, Charlotte, or wherever it is that the particular race team has its home base. But they’ll also be tied in with a team from GM Motorsport, working from one of a number of command centers at its Charlotte Technical Center.

What did they say?

Connecting all three are streams and streams of data: from the cars themselves (in series that allow car-to-pit telemetry), but also voice comms, text-based messaging, timing and scoring data from officials, trackside photographs, and more. And one thing Bolenbaugh’s team and their suite of tools can do is help make sense of that data quickly enough for it to be actionable.

“In a series like F1, a lot of teams will have students who are potentially newer members of the team literally listening to the radio and typing out what is happening, then saying, ‘hey, this is about pitting. This is about track conditions,'” Bolenbaugh said.

Instead of giving that to the internship kids, GM built a real-time audio transcription tool to do that job. After trying out a commercial off-the-shelf solution, it decided to build its own, “a combination of open source and some of our proprietary code,” Bolenbaugh said. As anyone who has ever been to a race track can attest, it’s a loud environment, so GM had to train models with all the background noise present.

“We’ve been able to really improve our accuracy and usability of the tool to the point where some of the manual support for that capability is now dwindling,” he said, with the benefit that it frees up the humans, who would otherwise be transcribing, to apply their brains in more useful ways.
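
GM describes its transcription stack only as a mix of open source and proprietary code, so the sketch below is simply an illustration of the general shape of such a tool: an open-source speech-to-text model (Whisper here) followed by naive keyword tagging. The model size, audio file name, and topic keywords are assumptions made up for this example, not GM's implementation.

```python
# Illustrative sketch: transcribe a radio capture and tag each message by topic.
# GM's actual pipeline is proprietary; this uses the open-source Whisper model
# and a toy keyword tagger purely for illustration.
# pip install openai-whisper
import whisper

# Illustrative categories; a real system would use a trained classifier
# and a far richer vocabulary.
TOPICS = {
    "pitting": {"pit", "box", "fuel only", "four tires"},
    "track condition": {"slick", "loose", "tight", "marbles", "debris", "rain"},
    "damage": {"damage", "vibration", "flat", "wall"},
}

def tag_message(text: str) -> list[str]:
    """Return the topic labels whose keywords appear in a transcribed message."""
    lowered = text.lower()
    return [topic for topic, words in TOPICS.items()
            if any(word in lowered for word in words)]

def transcribe_radio(audio_path: str) -> list[dict]:
    """Transcribe a radio capture and tag each segment by topic."""
    model = whisper.load_model("small.en")  # a production model would be fine-tuned on noisy radio audio
    result = model.transcribe(audio_path)
    return [
        {"start": seg["start"], "text": seg["text"], "topics": tag_message(seg["text"])}
        for seg in result["segments"]
    ]

if __name__ == "__main__":
    for msg in transcribe_radio("car24_radio.wav"):  # hypothetical file name
        print(f'{msg["start"]:7.1f}s  {msg["topics"]}  {msg["text"]}')
```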

Take a look at this

Another tool developed by Bolenbaugh and his team was built to quickly analyze images taken by trackside photographers working for the teams and OEMs. While some of the footage they shoot might be for marketing or PR, a lot of it is for the engineers.

Two years ago, getting those photos from the photographer’s camera to the team was the work of two to three minutes. Now, “from shutter click at the racetrack in a NASCAR event to AI-tagged into an application for us to get information out of those photos is seven seconds,” Bolenbaugh said.
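
GM hasn't published the plumbing behind that seven-second figure, but the basic ingest loop is easy to picture: watch an upload folder, tag each new photo, and write the result somewhere engineering tools can read it. In the sketch below, the folder names and the stubbed tag_image() function are hypothetical placeholders; the real tagger is GM's proprietary vision model.

```python
# Sketch of a trackside photo ingest loop: watch an upload folder, tag each
# new image, and hand the result to downstream tools within seconds.
import json
import time
from pathlib import Path

UPLOAD_DIR = Path("incoming_photos")   # hypothetical camera-upload target
TAGGED_DIR = Path("tagged_photos")

def tag_image(path: Path) -> dict:
    """Placeholder for the vision model: car number, view angle, visible damage, etc."""
    return {"file": path.name, "tags": ["untagged"], "tagged_at": time.time()}

def ingest_forever(poll_seconds: float = 1.0) -> None:
    UPLOAD_DIR.mkdir(exist_ok=True)
    TAGGED_DIR.mkdir(exist_ok=True)
    seen: set[str] = set()
    while True:
        for photo in UPLOAD_DIR.glob("*.jpg"):
            if photo.name in seen:
                continue
            record = tag_image(photo)
            (TAGGED_DIR / f"{photo.stem}.json").write_text(json.dumps(record))
            seen.add(photo.name)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    ingest_forever()
```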

Sometimes you don't need an ML tool to analyze a photo to tell you the car is damaged.

Jeffrey Vest/Icon Sportswire via Getty Images

“Time is everything, and the shortest lap time that we run—the Coliseum would be an outlier, but maybe like 18 seconds is probably a short lap time. So we need to be faster than from when they pass that pit lane entry to when they come back again,” he said.

At the rollout of this particular tool at a NASCAR race last year, one of GM’s partner teams avoided a precautionary pitstop after its driver scraped the wall: the young engineer who developed the tool was able to show them a seconds-old photo of the right side of the car, which had escaped any damage.

“They didn’t have to wait for a spotter to look, they didn’t have to wait for the driver’s opinion. They knew that didn’t have damage. That team made the playoffs in that series by four points, so in the event that they would have pitted, there’s a likelihood where they didn’t make it,” he said. In cases where a car is damaged, the image analysis tool can automatically flag that and make that known quickly through an alert.

Not all of the images are used for snap decisions like that—engineers can glean a lot about their rivals from photos, too.

“We would be very interested in things related to the geometry of the car for the setup settings—wicker settings, wing angles… ride heights of the car, how close the car is to the ground—those are all things that would be great to know from an engineering standpoint, and those would be objectives that we would have in doing image analysis,” said Patrick Canupp, director of motorsports competition engineering at GM.

Many of the photographers you see working trackside will be shooting on behalf of teams or manufacturers.

Steve Russell/Toronto Star via Getty Images

“It’s not straightforward to take a set of still images and determine a lot of engineering information from those. And so we’re working on that actively to help with all the photos that come in to us on a race weekend—there’s thousands of them. And so it’s a lot of information that we have at our access, that we want to try to maximize the engineering information that we glean from all of that data. It’s kind of a big data problem that AI is really geared for,” Canupp said.

The computer says we should pit now

Remember that transcribed audio feed from earlier? “If a bunch of drivers are starting to talk about something similar in the race like the track condition, we can start inferring, based on… the occurrence of certain words, that the track is changing,” said Bolenbaugh. “It might not just be your car… if drivers are talking about something on track, the likelihood of a caution, which is a part of our strategy model, might be going up.”
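
Bolenbaugh didn't detail the model, but the idea he describes can be sketched simply: count how many distinct drivers use track-condition language within a rolling time window, and nudge the caution probability fed to the strategy model accordingly. The keywords, window length, and probability mapping below are invented for illustration.

```python
# Illustrative sketch: if several drivers start using similar track-condition
# language at once, bump the caution probability fed to the strategy model.
import time
from collections import deque

CONDITION_WORDS = {"slick", "oil", "debris", "moisture", "rain", "dust"}

class CautionSignal:
    def __init__(self, window_seconds: float = 120.0):
        self.window = window_seconds
        self.mentions = deque()  # (timestamp, car_number) pairs inside the window

    def observe(self, car: int, transcript: str, now: float | None = None) -> None:
        now = time.time() if now is None else now
        if any(word in transcript.lower() for word in CONDITION_WORDS):
            self.mentions.append((now, car))
        # Drop mentions that have aged out of the rolling window.
        while self.mentions and now - self.mentions[0][0] > self.window:
            self.mentions.popleft()

    def caution_probability_bump(self) -> float:
        """Crude mapping: more distinct cars talking about conditions -> bigger bump."""
        distinct_cars = len({car for _, car in self.mentions})
        return min(0.05 * distinct_cars, 0.5)

signal = CautionSignal()
signal.observe(3, "Track is really slick in turn two", now=0.0)
signal.observe(24, "Feels like oil or something out there", now=30.0)
print(signal.caution_probability_bump())  # 0.1 with two cars reporting
```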

That feeds into a strategy tool that also takes in lap times from timing and scoring; fuel efficiency data in racing series that provide it for all cars (or a predictive model that estimates it in series like NASCAR and IndyCar, where teams don’t get to see that kind of data from their competitors); and models of tire wear.

“One of the biggest things that we need to manage is tires, fuel, and lap time. Everything is a trade-off between trying to execute the race the fastest,” Bolenbaugh said.

Obviously races are dynamic situations, and so “multiple times a lap as the scenario changes, we’re updating our recommendation. So, with tire fall off [as the tire wears and loses grip], you’re following up in real time, predicting where it’s going to be. We are constantly evolving during the race and doing transfer learning so we go into the weekend, as the race unfolds, continuing to train models in real time,” Bolenbaugh said.
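
GM's actual models are proprietary and retrained from live data, but the core trade-off Bolenbaugh describes can be shown with a toy single-stop calculation: a lap-time model with linear tire fall-off, a fixed pit-stop penalty, and a search for the pit lap that minimizes total race time. All of the coefficients here are made up for illustration.

```python
# Toy version of the tire/fuel/lap-time trade-off: given a lap-time model with
# tire fall-off and a fixed pit-stop cost, find the pit lap that minimizes
# total race time. Coefficients are invented; real models are fit from live timing data.

BASE_LAP = 31.0    # seconds on fresh tires (illustrative)
FALLOFF = 0.08     # seconds lost per lap of tire age (illustrative)
PIT_LOSS = 23.0    # total time lost to a pit stop (illustrative)
RACE_LAPS = 80

def lap_time(tire_age: int) -> float:
    return BASE_LAP + FALLOFF * tire_age

def race_time(pit_lap: int) -> float:
    total, tire_age = 0.0, 0
    for lap in range(1, RACE_LAPS + 1):
        total += lap_time(tire_age)
        tire_age += 1
        if lap == pit_lap:
            total += PIT_LOSS
            tire_age = 0  # fresh tires after the stop
    return total

best = min(range(1, RACE_LAPS), key=race_time)
print(f"Best single-stop pit lap: {best}, total {race_time(best):.1f}s")
```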

Lego’s newest retro art piece is a 1,215-piece Super Mario World homage

let’s-a-go —

$130 set is available for preorder now, ships on October 1.

  • The Lego Mario & Yoshi set is an homage to 1990’s Super Mario World.

    The Lego Group

  • From the front, it looks like a fairly straightforward re-creation of the game’s 16-bit sprites.

    The Lego Group

  • Behind the facade are complex mechanics that move Yoshi’s feet and arms and bob his body up and down, to make him look like he’s walking. A separate dial opens his mouth and extends his tongue.

    The Lego Group

Nintendo and Lego are at it again—they’ve announced another collaboration today as a follow-up to the interactive Mario sets, the replica Nintendo Entertainment System, the unfolding question mark block with the Mario 64 worlds inside, and other sets besides.

The latest addition is an homage to 1990’s Super Mario World, Mario’s debut outing on the then-new 16-bit Super Nintendo Entertainment System. At first, the 1,215-piece set just looks like a caped Mario sitting on top of Yoshi. But a look at the back reveals more complex mechanics, including a hand crank that makes Yoshi’s feet and arms move and a dial that opens his mouth and extends his tongue.

Most of the Mario sets have included some kind of interactive moving part, even if it’s as simple as the movable mouth on the Lego Piranha Plant. Yoshi’s mechanical crank most strongly resembles the NES set, though, which included a CRT-style TV set with a crank that made the contents of the screen scroll so that Mario could “walk.”

The Mario & Yoshi set is available to preorder from Lego’s online store for $129.99. It begins shipping on October 1.

Lego has also branched out into other video game-themed sets. In 2022, the company began selling a replica Atari 2600, complete with faux-wood paneling. More recently, Lego has collaborated with Epic Games on several Fortnite-themed sets, including the Battle Bus.

Listing image by The Lego Group

New Zealand “deeply shocked” after Canada drone-spied on its Olympic practices—twice

Droned —

Two Canadians have already been sent home over the incident.

Aurich Lawson | Getty Images

On July 22, the New Zealand women’s football (soccer) team was training in Saint-Étienne, France, for its upcoming Olympics matchup against Canada when team officials noticed a drone hovering near the practice pitch. Suspecting skullduggery, the New Zealand squad called the local police, and gendarmes located and then detained the nearby drone operator. He turned out to be one Joseph Lombardi, an “unaccredited analyst with Canada Soccer”—and he was apparently spying on the New Zealand practice and relaying information to a Canadian assistant coach.

On July 23, the New Zealand Olympic Committee put out a statement saying it was “deeply shocked and disappointed by this incident, which occurred just three days before the sides are due to face each other in their opening game of Paris 2024.” It also complained to the official International Olympic Committee integrity unit.

Early today, July 24, the Canadian side issued its own statement saying that it “stands for fair-play and we are shocked and disappointed. We offer our heartfelt apologies to New Zealand Football, to all the players affected, and to the New Zealand Olympic Committee.”

Later in the day, a follow-up Canadian statement revealed that this was actually the second drone-spying incident; the New Zealand side had also been watched by drone at its July 19 practice.

Team Canada announced four responses to these incidents:

  • “Joseph Lombardi, an unaccredited analyst with Canada Soccer, is being removed from the Canadian Olympic Team and will be sent home immediately.
  • Jasmine Mander, an assistant coach to whom Mr. Lombardi report sent [sic], is being removed from the Canadian Olympic Team and will be sent home immediately.
  • [The Canadian Olympic Committee] has accepted the decision of Head Coach Bev Priestman to remove herself from coaching the match against New Zealand on July 25th.
  • Canada Soccer staff will undergo mandatory ethics training.”

Drones are now everywhere—swarming the skies over Ukraine’s battlefields, flying from Houthi-controlled Yemen to Tel Aviv, scouting political assassination attempt options. Disney is running an 800-drone light show in Florida. The roofer who recently showed up to look at my shingles brought a drone with him. My kid owns one.

So, from a technical perspective, stories like this little spying scandal are no surprise at all. But for the Olympics, already awash in high-tech cheating scandals such as years-long state-sponsored doping campaigns, drone spying is just one more depressing example of how humans excel at using our tools to ruin good things in creative new ways.

And it’s a good reminder that every crazy example in those terrible HR training videos your boss makes you watch every year is included for a reason. So if you see “drone ethics” creeping into your compliance program right after sections on “how to avoid being phished” and “don’t let anyone else follow you through the door after you swipe your keycard”… well, now you know why.

CrowdStrike blames testing bugs for security update that took down 8.5M Windows PCs

oops —

Company says it’s improving testing processes to avoid a repeat.

CrowdStrike's Falcon security software brought down as many as 8.5 million Windows PCs over the weekend.

CrowdStrike

Security firm CrowdStrike has posted a preliminary post-incident report about the botched update to its Falcon security software that caused as many as 8.5 million Windows PCs to crash over the weekend, delaying flights, disrupting emergency response systems, and generally wreaking havoc.

The detailed post explains exactly what happened: At just after midnight Eastern time, CrowdStrike deployed “a content configuration update” to allow its software to “gather telemetry on possible novel threat techniques.” CrowdStrike says that these Rapid Response Content updates are tested before being deployed, and one of the steps involves checking updates using something called the Content Validator. In this case, “a bug in the Content Validator” failed to detect “problematic content data” in the update responsible for the crashing systems.

CrowdStrike says it is making changes to its testing and deployment processes to prevent something like this from happening again. The company says it is specifically adding “additional validation checks to the Content Validator” and more layers of testing to its process.

The biggest change will probably be “a staggered deployment strategy for Rapid Response Content” going forward. In a staggered deployment system, updates are initially released to a small group of PCs, and then availability is slowly expanded once it becomes clear that the update isn’t causing major problems. Microsoft uses a phased rollout for Windows security and feature updates after a couple of major hiccups during the Windows 10 era. To this end, CrowdStrike will “improve monitoring for both sensor and system performance” to help “guide a phased rollout.”
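
CrowdStrike hasn't described the mechanics of its planned system, but a staggered deployment generally looks like the loop sketched below: push the update to a small cohort, let telemetry accumulate, and only widen the release while health metrics stay under a threshold. The cohort sizes, threshold, and monitoring stub are illustrative, not CrowdStrike's.

```python
# Generic sketch of a staggered (canary) rollout: deploy to a small cohort,
# watch health metrics, and only widen availability if the cohort stays healthy.
import random
import time

COHORT_FRACTIONS = [0.001, 0.01, 0.1, 0.5, 1.0]   # share of the fleet per phase
CRASH_RATE_THRESHOLD = 0.001                        # abort the rollout if exceeded

def deploy_to_fraction(update_id: str, fraction: float) -> None:
    print(f"Deploying {update_id} to {fraction:.1%} of hosts")

def observed_crash_rate(fraction: float) -> float:
    # Stand-in for real telemetry; a healthy update reports a tiny rate.
    return random.uniform(0.0, 0.0005)

def staggered_rollout(update_id: str) -> bool:
    for fraction in COHORT_FRACTIONS:
        deploy_to_fraction(update_id, fraction)
        time.sleep(1)  # in reality, a soak period of minutes or hours
        rate = observed_crash_rate(fraction)
        if rate > CRASH_RATE_THRESHOLD:
            print(f"Halting rollout of {update_id}: crash rate {rate:.4%}")
            return False
    print(f"{update_id} fully deployed")
    return True

staggered_rollout("rapid-response-content-example")   # hypothetical update ID
```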

CrowdStrike says it will also give its customers more control over when Rapid Response Content updates are deployed so that updates that take down millions of systems aren’t deployed at (say) midnight when fewer people are around to notice or fix things. Customers will also be able to subscribe to release notes about these updates.

Recovery of affected systems is ongoing. Rebooting systems multiple times (as many as 15, according to Microsoft) can give them enough time to grab a new, non-broken update file before they crash, resolving the issue. Microsoft has also created tools that can boot systems via USB or a network so that the bad update file can be deleted, allowing systems to restart normally.

In addition to this preliminary incident report, CrowdStrike says it will release “the full Root Cause Analysis” once it has finished investigating the issue.

Appeals Court denies stay to states trying to block EPA’s carbon limits

You can’t stay here —

The EPA’s plan to cut carbon emissions from power plants can go ahead.

Cooling towers emitting steam, viewed from above.

On Friday, the US Court of Appeals for the DC Circuit denied a request to put a hold on recently formulated rules that would limit carbon emissions made by fossil fuel power plants. The request, made as part of a case that sees 25 states squaring off against the EPA, would have put the federal government’s plan on hold while the case continued. Instead, the EPA will be allowed to continue the process of putting its rules into effect, and the larger case will be heard under an accelerated schedule.

Here we go again

The EPA’s efforts to regulate carbon emissions from power plants go back all the way to the second Bush administration, when a group of states successfully sued the EPA to force it to regulate greenhouse gas emissions. This led to a formal endangerment finding regarding greenhouse gases during the Obama administration, something that remained unchallenged even during Donald Trump’s term in office.

Obama tried to regulate emissions through the Clean Power Plan, but his second term came to an end before this plan had cleared court hurdles, allowing the Trump administration to formulate a replacement that did far less than the Clean Power Plan. This took place against a backdrop of accelerated displacement of coal by natural gas and renewables that had already surpassed the changes envisioned under the Clean Power Plan.

In any case, the Trump plan was thrown out by the courts the day before Biden took office, allowing his EPA to start with a clean slate. Biden’s original plan, which would have had states regulate emissions from their electric grids by treating them as a single system, was thrown out by the Supreme Court, which ruled that emissions would need to be regulated on a per-plant basis in a decision known as West Virginia v. EPA.

So, that’s what the agency is now trying to do. Its plan, issued last year, would allow fossil-fuel-burning plants that are being shut down in the early 2030s to continue operating without restrictions. Others will need to either install carbon capture equipment or, in the case of natural gas plants, switch to green hydrogen as their primary fuel.

And again

In response, 25 states have sued to block the rule (you can check out this filing to see if yours is among them). The states also sought a stay that would prevent the rule from being implemented while the case went forward. In it, they argue that carbon capture technology isn’t mature enough to form the basis of these regulations (something we predicted was likely to be a point of contention). The suit also suggests that the rules would effectively put coal out of business, something that’s beyond the EPA’s remit.

The DC Court of Appeals, however, was not impressed, ruling that the states’ arguments regarding carbon capture are insufficient: “Petitioners have not shown they are likely to succeed on those claims given the record in this case.” That’s the key hurdle for determining whether a stay is justified. Nor, the court found, do the regulations pose a likelihood of irreparable harm, since states aren’t even expected to submit a plan for at least two years and the regulations won’t kick in until 2030 at the earliest.

Meanwhile, the states cited the Supreme Court’s West Virginia v. EPA decision to argue against these rules, suggesting they represent a “major question” that requires input from Congress. The Court was also not impressed, writing that “EPA has claimed only the power to ‘set emissions limits under Section 111 based on the application of measures that would reduce pollution by causing the regulated source to operate more cleanly,’ a type of conduct that falls well within EPA’s bailiwick.”

To respond to the states’ concerns about the potential for irreparable harm, the court plans to consider them during the 2024 term and has given the parties just two weeks to submit proposed schedules for briefings on the case.

Intel has finally tracked down the problem making 13th- and 14th-gen CPUs crash

crash no more? —

But microcode update can’t fix CPUs that are already crashing or unstable.

Intel's Core i9-13900K.

Andrew Cunningham

For several months, Intel has been investigating reports that high-end 13th- and 14th-generation desktop CPUs (mainly, but not exclusively, the Core i9-13900K and 14900K) were crashing during gameplay. Intel partially addressed the issue by insisting that third-party motherboard makers adhere to Intel’s recommended default power settings in their motherboards, but the company said it was still working to identify the root cause of the problem.

The company announced yesterday that it has wrapped up its investigation and that a microcode update to fix the problem should be shipping out to motherboard makers in mid-August “following full validation.” Microcode updates like this generally require a BIOS update, so exactly when the patch hits your specific motherboard will be up to the company that made it.

Intel says that an analysis of defective processors “confirms that the elevated operating voltage is stemming from a microcode algorithm resulting in incorrect voltage requests to the processor.” In other words, the CPU is receiving too much power, which is degrading stability over time.

If you’re using a 13th- or 14th-generation CPU and you’re not noticing any problems, the microcode update should prevent your processor from degrading. But if you’re already noticing stability problems, Tom’s Hardware reports that “the bug causes irreversible degradation of the impacted processors” and that the fix will not be able to reverse the damage that has already happened.

There has been no mention of 12th-generation processors, including the Core i9-12900K, suffering from the same issues. The 12th-gen processors use Intel’s Alder Lake architecture, whereas the high-end 13th- and 14th-gen chips use a modified architecture called Raptor Lake that comes with higher clock speeds, a bit more cache memory, and additional E-cores.

Tom’s Hardware also says that Intel will continue to replace CPUs that are exhibiting problems and that the microcode update shouldn’t noticeably affect CPU performance.

Intel also separately confirmed speculation that there was an oxidation-related manufacturing issue with some early 13th-generation Core processors but that the problems were fixed in 2023 and weren’t related to the crashes and instability that the microcode update is fixing.

SpaceX just stomped the competition for a new contract—that’s not great

A rocket sits on a launch pad during a purple- and gold-streaked dawn.

With Dragon and Falcon, SpaceX has become an essential contractor for NASA.

SpaceX

There is an emerging truth about NASA’s push toward commercial contracts that is increasingly difficult to escape: Companies not named SpaceX are struggling with NASA’s approach of awarding firm, fixed-price contracts for space services.

This belief is underscored by the recent award of an $843 million contract to SpaceX for a heavily modified Dragon spacecraft that will be used to deorbit the International Space Station by 2030.

The recently released source selection statement for the “US Deorbit Vehicle” contract, a process led by NASA head of space operations Ken Bowersox, reveals that the competition was a total stomp. SpaceX faced just a single serious competitor in this process, Northrop Grumman. And in all three categories—price, mission suitability, and past performance—SpaceX significantly outclassed Northrop.

Although it’s wonderful that NASA has an excellent contractor in SpaceX, it’s not healthy in the long term that there are so few credible competitors. Moreover, a careful reading of the source selection statement reveals that NASA had to really work to get a competition at all.

“I was really happy that we got proposals from the companies that we did,” Bowersox said during a media teleconference last week. “The companies that sent us proposals are both great companies, and it was awesome to see that interest. I would have expected a few more [proposals], honestly, but I was very happy to get the ones that we got.”

Commercial initiatives struggling

NASA’s push into “commercial” space began nearly two decades ago with a program to deliver cargo to the International Space Station. The space agency initially selected SpaceX and Rocketplane Kistler to develop rockets and spacecraft to accomplish this, but after Kistler missed milestones, the company was subsequently replaced by Orbital Sciences Corporation. The cargo delivery program was largely successful, resulting in the Cargo Dragon (SpaceX) and Cygnus (Orbital Sciences) spacecraft. It continues to this day.

A commercial approach generally means that NASA pays a “fixed” price for a service rather than paying a contractor’s costs plus a fee. It also means that NASA hopes to become one of many customers. The idea is that, as the first mover, NASA is helping to stimulate a market by which its fixed-priced contractors can also sell their services to other entities—both private companies and other space agencies.

NASA has since extended this commercial approach to crew, with SpaceX and Boeing winning large contracts in 2014. However, only SpaceX has flown operational astronaut missions, while Boeing remains in the development and test phase, with its ongoing Crew Flight Test. Whereas SpaceX has sold half a dozen private crewed missions on Dragon, Boeing has yet to announce any.

Such a commercial approach has also been tried with lunar cargo delivery through the “Commercial Lunar Payload Services” program, as well as larger lunar landers (Human Landing System), next-generation spacesuits, and commercial space stations. Each of these programs has a mixed record at best. For example, NASA’s inspector general was highly critical of the lunar cargo program in a recent report, and one of the two spacesuit contractors, Collins Aerospace, recently dropped out because it could not execute on its fixed-price contract.

Some of NASA’s most important traditional space contractors, including Lockheed Martin, Boeing, and Northrop Grumman, have all said they are reconsidering whether to participate in fixed-price contract competitions in the future. For example, Northrop CEO Kathy Warden said last August, “We are being even more disciplined moving forward in ensuring that we work with the government to have the appropriate use of fixed-price contracts.”

So the large traditional space contractors don’t like fixed-price contracts, and many new space companies are struggling to survive in this environment.

Apple “clearly underreporting” child sex abuse, watchdogs say

After years of controversies over plans to scan iCloud to find more child sexual abuse materials (CSAM), Apple abandoned those plans last year. Now, child safety experts have accused the tech giant of not only failing to flag CSAM exchanged and stored on its services—including iCloud, iMessage, and FaceTime—but also allegedly failing to report all the CSAM that is flagged.

The United Kingdom’s National Society for the Prevention of Cruelty to Children (NSPCC) shared UK police data with The Guardian showing that Apple is “vastly undercounting how often” CSAM is found globally on its services.

According to the NSPCC, police investigated more CSAM cases in just the UK alone in 2023 than Apple reported globally for the entire year. Between April 2022 and March 2023 in England and Wales, the NSPCC found, “Apple was implicated in 337 recorded offenses of child abuse images.” But in 2023, Apple only reported 267 instances of CSAM to the National Center for Missing & Exploited Children (NCMEC), supposedly representing all the CSAM on its platforms worldwide, The Guardian reported.

Large tech companies in the US must report CSAM to NCMEC when it’s found, but while Apple reports a couple hundred CSAM cases annually, its big tech peers like Meta and Google report millions, NCMEC’s report showed. Experts told The Guardian that there’s ongoing concern that Apple “clearly” undercounts CSAM on its platforms.

Richard Collard, the NSPCC’s head of child safety online policy, told The Guardian that he believes Apple’s child safety efforts need major improvements.

“There is a concerning discrepancy between the number of UK child abuse image crimes taking place on Apple’s services and the almost negligible number of global reports of abuse content they make to authorities,” Collard told The Guardian. “Apple is clearly behind many of their peers in tackling child sexual abuse when all tech firms should be investing in safety and preparing for the rollout of the Online Safety Act in the UK.”

Outside the UK, other child safety experts shared Collard’s concerns. Sarah Gardner, the CEO of a Los Angeles-based child protection organization called the Heat Initiative, told The Guardian that she considers Apple’s platforms a “black hole” obscuring CSAM. And she expects that Apple’s efforts to bring AI to its platforms will intensify the problem, potentially making it easier to spread AI-generated CSAM in an environment where sexual predators may expect less enforcement.

“Apple does not detect CSAM in the majority of its environments at scale, at all,” Gardner told The Guardian.

Gardner agreed with Collard that Apple is “clearly underreporting” and has “not invested in trust and safety teams to be able to handle this” as it rushes to bring sophisticated AI features to its platforms. Last month, Apple integrated ChatGPT into Siri, iOS, and macOS, perhaps setting expectations for continually enhanced generative AI features to be touted in future Apple gear.

“The company is moving ahead to a territory that we know could be incredibly detrimental and dangerous to children without the track record of being able to handle it,” Gardner told The Guardian.

So far, Apple has not commented on the NSPCC’s report. Last September, Apple did respond to the Heat Initiative’s demands to detect more CSAM, saying that rather than focusing on scanning for illegal content, its focus is on connecting vulnerable or victimized users directly with local resources and law enforcement that can assist them in their communities.

Astronomers discover technique to spot AI fakes using galaxy-measurement tools

stars in their eyes —

Researchers use technique to quantify eyeball reflections that often reveal deepfake images.

Researchers write, “In this image, the person on the left (Scarlett Johansson) is real, while the person on the right is AI-generated. Their eyeballs are depicted underneath their faces. The reflections in the eyeballs are consistent for the real person, but incorrect (from a physics point of view) for the fake person.”

In 2024, it’s almost trivial to create realistic AI-generated images of people, which has led to fears about how these deceptive images might be detected. Researchers at the University of Hull recently unveiled a novel method for detecting AI-generated deepfake images by analyzing reflections in human eyes. The technique, presented at the Royal Astronomical Society’s National Astronomy Meeting last week, adapts tools used by astronomers to study galaxies for scrutinizing the consistency of light reflections in eyeballs.

Adejumoke Owolabi, an MSc student at the University of Hull, headed the research under the guidance of Dr. Kevin Pimbblet, professor of astrophysics.

Their detection technique is based on a simple principle: A pair of eyes being illuminated by the same set of light sources will typically have a similarly shaped set of light reflections in each eyeball. Many AI-generated images created to date don’t take eyeball reflections into account, so the simulated light reflections are often inconsistent between each eye.

A series of real eyes showing largely consistent reflections in both eyes.

In some ways, the astronomy angle isn’t always necessary for this kind of deepfake detection because a quick glance at a pair of eyes in a photo can reveal reflection inconsistencies, which is something artists who paint portraits have to keep in mind. But the application of astronomy tools to automatically measure and quantify eye reflections in deepfakes is a novel development.

Automated detection

In a Royal Astronomical Society blog post, Pimbblet explained that Owolabi developed a technique to detect eyeball reflections automatically and ran the reflections’ morphological features through indices to compare similarity between left and right eyeballs. Their findings revealed that deepfakes often exhibit differences between the pair of eyes.

The team applied methods from astronomy to quantify and compare eyeball reflections. They used the Gini coefficient, typically employed to measure light distribution in galaxy images, to assess the uniformity of reflections across eye pixels. A Gini value closer to 0 indicates evenly distributed light, while a value approaching 1 suggests concentrated light in a single pixel.
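
The paper's full pipeline isn't reproduced here, but the Gini measurement itself is simple to sketch. The example below computes it over synthetic "eye crops" (a real detector would first segment the eyeballs and their highlights out of a face photo); similar left/right values suggest consistent reflections, while a large gap flags a likely fake.

```python
# Minimal sketch of the Gini-index idea: treat the pixels of each eye's
# reflection region as "light," measure how unevenly it is distributed,
# and compare left vs. right. The pixel values below are synthetic.
import numpy as np

def gini(values: np.ndarray) -> float:
    """Gini coefficient of non-negative pixel intensities (0 = even, 1 = concentrated)."""
    x = np.sort(values.ravel().astype(float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    index = np.arange(1, n + 1)
    return float(np.sum((2 * index - n - 1) * x) / (n * np.sum(x)))

rng = np.random.default_rng(0)
left_eye = rng.uniform(0.0, 1.0, size=(32, 32))
right_eye_real = left_eye + rng.normal(0.0, 0.02, size=(32, 32)).clip(0)  # consistent with the left eye
right_eye_fake = rng.uniform(0.0, 1.0, size=(32, 32)) ** 4                # very different light distribution

print(f"left={gini(left_eye):.3f}  "
      f"right(real)={gini(right_eye_real):.3f}  "
      f"right(fake)={gini(right_eye_fake):.3f}")
# Similar left/right Gini values suggest consistent reflections; a large gap flags a likely fake.
```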

A series of deepfake eyes showing inconsistent reflections in each eye.

In the Royal Astronomical Society post, Pimbblet drew comparisons between how they measured eyeball reflection shape and how they typically measure galaxy shape in telescope imagery: “To measure the shapes of galaxies, we analyze whether they’re centrally compact, whether they’re symmetric, and how smooth they are. We analyze the light distribution.”

The researchers also explored the use of CAS parameters (concentration, asymmetry, smoothness), another tool from astronomy for measuring galactic light distribution. However, this method proved less effective in identifying fake eyes.

A detection arms race

While the eye-reflection technique offers a potential path for detecting AI-generated images, the method might not work if AI models evolve to incorporate physically accurate eye reflections, perhaps applied as a subsequent step after image generation. The technique also requires a clear, up-close view of eyeballs to work.

The approach also risks producing false positives, as even authentic photos can sometimes exhibit inconsistent eye reflections due to varied lighting conditions or post-processing techniques. But analyzing eye reflections may still be a useful tool in a larger deepfake detection toolset that also considers other factors such as hair texture, anatomy, skin details, and background consistency.

While the technique shows promise in the short term, Dr. Pimbblet cautioned that it’s not perfect. “There are false positives and false negatives; it’s not going to get everything,” he told the Royal Astronomical Society. “But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes.”

We’re building nuclear spaceships again—this time for real 

Artist concept of the Demonstration for Rocket to Agile Cislunar Operations (DRACO) spacecraft.

DARPA

Phoebus 2A, the most powerful space nuclear reactor ever made, was fired up at the Nevada Test Site on June 26, 1968. The test lasted 750 seconds and confirmed it could carry the first humans to Mars. But Phoebus 2A did not take anyone to Mars. It was too large, it cost too much, and it didn’t mesh with Nixon’s idea that we had no business going anywhere further than low-Earth orbit.

But it wasn’t NASA that first called for rockets with nuclear engines. It was the military that wanted to use them for intercontinental ballistic missiles. And now, the military wants them again.

Nuclear-powered ICBMs

The work on nuclear thermal rockets (NTRs) started with the Rover program initiated by the US Air Force in the mid-1950s. The concept was simple on paper. Take tanks of liquid hydrogen and use turbopumps to feed this hydrogen through a nuclear reactor core to heat it up to very high temperatures and expel it through the nozzle to generate thrust. Instead of causing the gas to heat and expand by burning it in a combustion chamber, the gas was heated by coming into contact with a nuclear reactor.

The key advantage was fuel efficiency. “Specific impulse,” a measurement that’s something like the gas mileage of a rocket, scales with the square root of the ratio between the exhaust gas temperature and the molecular weight of the propellant. This meant the most efficient propellant for rockets was hydrogen because it had the lowest molecular weight.

In chemical rockets, hydrogen had to be mixed with an oxidizer, which increased the total molecular weight of the propellant but was necessary for combustion to happen. Nuclear rockets didn’t need combustion and could work with pure hydrogen, which made them at least twice as efficient. The Air Force wanted to efficiently deliver nuclear warheads to targets around the world.
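
As a rough, back-of-the-envelope illustration of that square-root relationship, using illustrative temperatures and molecular weights and ignoring nozzle and pressure effects:

```python
# Specific impulse scales roughly with sqrt(T / M), where T is exhaust
# temperature and M is the propellant's molecular weight. The figures
# below are rough, illustrative values, not measured engine data.
from math import sqrt

def relative_isp(temp_kelvin: float, molar_mass_g_mol: float) -> float:
    return sqrt(temp_kelvin / molar_mass_g_mol)

ntr = relative_isp(3000.0, 2.0)        # nuclear thermal: pure H2 exhaust at ~3,000 K
chemical = relative_isp(3500.0, 13.5)  # LH2/LOX engine: hot but heavier (water-rich) exhaust

print(f"NTR / chemical specific-impulse ratio ~ {ntr / chemical:.1f}")  # ~2.4 with these inputs
```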

The problem was that running stationary reactors on Earth was one thing; making them fly was quite another.

Space reactor challenge

Fuel rods made with uranium 235 oxide distributed in a metal or ceramic matrix comprise the core of a standard fission reactor. Fission happens when a slow-moving neutron is absorbed by a uranium 235 nucleus and splits it into two lighter nuclei, releasing huge amounts of energy and excess, very fast neutrons. These excess neutrons normally don’t trigger further fissions, as they move too fast to get absorbed by other uranium nuclei.

Starting a chain reaction that keeps the reactor going depends on slowing them down with a moderator, like water, that “moderates” their speed. The reaction is kept in check using control rods made of neutron-absorbing materials, usually boron or cadmium, which limit the number of neutrons that can trigger fission. Reactors are dialed up or down by moving the control rods in and out of the core.

Translating any of this to a flying reactor is a challenge. The first problem is the fuel. The hotter you make the exhaust gas, the more you increase specific impulse, so NTRs needed the core to operate at temperatures reaching 3,000 K—nearly 1,800 K higher than ground-based reactors. Manufacturing fuel rods that could survive such temperatures proved extremely difficult.

Then there was the hydrogen itself, which is extremely corrosive at these temperatures, especially when interacting with those few materials that are stable at 3,000 K. Finally, standard control rods had to go, too, because on the ground, they were gravitationally dropped into the core, and that wouldn’t work in flight.

Los Alamos Scientific Laboratory proposed a few promising NTR designs that addressed all these issues in 1955 and 1956, but the program really picked up pace after it was transferred to NASA and the Atomic Energy Commission (AEC) in 1958. There, the idea was rebranded as NERVA, the Nuclear Engine for Rocket Vehicle Applications. NASA and the AEC, blessed with a nearly unlimited budget, got busy building space reactors—lots of them.

Will burying biomass underground curb climate change?

stacking bricks —

Though carbon removal startups may limit global warming, significant questions remain.

On April 11, a small company called Graphyte began pumping out beige bricks, somewhat the consistency of particle board, from its new plant in Pine Bluff, Arkansas. The bricks don’t look like much, but they come with a lofty goal: to help stop climate change.

Graphyte, a startup backed by billionaire Bill Gates’ Breakthrough Energy Ventures, will bury its bricks deep underground, trapping carbon there. The company bills it as the largest carbon dioxide removal project in the world.

Scientists have long warned of the dire threat posed by global warming. It’s gotten so bad, though, that the long-sought mitigation, cutting carbon dioxide emissions from every sector of the economy, might not be enough of a fix. To stave off the worst—including large swaths of the Earth exposed to severe heat waves, water scarcity, and crop failures—some experts say there is a deep need to remove previously emitted carbon, too. And that can be done anywhere on Earth—even in places not known for climate-friendly policies, like Arkansas.

Graphyte aims to store carbon that would otherwise be released from plant material as it burns or decomposes at a competitive sub-$100 per metric ton, and it wants to open new operations as soon as possible, single-handedly removing tens of thousands of tons of carbon annually, said Barclay Rogers, the company’s founder and CEO. Nevertheless, that’s nowhere near the amount of carbon that will have to be removed to register as a blip in global carbon emissions. “I’m worried about our scale of deployment,” he said. “I think we need to get serious fast.”

Hundreds of carbon removal startups have popped up over the past few years, but the fledgling industry has made little progress so far. That leads to the inevitable question: Could Graphyte and companies like it actually play a major role in combating climate change? And will a popular business model among these companies, inviting other companies to voluntarily buy “carbon credits” for those buried bricks, actually work?

Whether carbon emissions are cut to begin with, or pulled out of the atmosphere after they’ve already been let loose, climate scientists stress that there is no time to waste. The clock began ticking years ago, with the arrival of unprecedented fires and floods, superstorms, and intense droughts around the world. But carbon removal, as it’s currently envisioned, also poses additional sociological, economic, and ethical questions. Skeptics, for instance, say it could discourage more pressing efforts on cutting carbon emissions, leaving some experts wondering whether it will even work at all.

Still, the Intergovernmental Panel on Climate Change, the world’s forefront group of climate experts, is counting on carbon removal technology to dramatically scale up. If the industry is to make a difference, experimentation and research and development should be done quickly, within the next few years, said Gregory Nemet, a professor of public affairs who studies low-carbon innovation at the University of Wisconsin-Madison. “Then after that is the time to really start going big and scaling up so that it becomes climate-relevant,” he added. “Scale-up is a big challenge.”

Armada to Apophis—scientists recycle old ideas for rare asteroid encounter

Tick-tock —

“It will miss the Earth. It will miss the Earth. It will miss the Earth.”

This artist's concept shows the possible appearance of ESA's RAMSES spacecraft, which will release two small CubeSats for additional observations at Apophis.

For nearly 20 years, scientists have known an asteroid named Apophis will pass unusually close to Earth on Friday, April 13, 2029. But most officials at the world’s space agencies stopped paying much attention when updated measurements ruled out the chance Apophis will impact Earth anytime soon.

Now, Apophis is again on the agenda, but this time as a science opportunity, not as a threat. The problem is there’s not much time to design, build and launch a spacecraft to get into position near Apophis in less than five years. The good news is there are designs, and in some cases, existing spacecraft, that governments can repurpose for missions to Apophis, a rocky asteroid about the size of three football fields.

Scientists discovered Apophis in 2004, and the first measurements of its orbit indicated there was a small chance it could strike Earth in 2029 or in 2036. Using more detailed radar observations of Apophis, scientists in 2021 ruled out any danger to Earth for at least the next 100 years.

“The three most important things about Apophis are: It will miss the Earth. It will miss the Earth. It will miss the Earth,” said Richard Binzel, a professor of planetary science at MIT. Binzel has co-chaired several conferences since 2020 aimed at drumming up support for space missions to take advantage of the Apophis opportunity in 2029.

“An asteroid this large comes this close only once per 1,000 years, or less frequently,” Binzel told Ars. “This is an experiment that nature is doing for us, bringing a large asteroid this close, such that Earth’s gravitational forces and tidal forces are going to tug and possibly shake this asteroid. The asteroid’s response is insightful to its interior.”

It’s important, Binzel argues, to get a glimpse of Apophis before and after its closest approach in 2029, when it will pass less than 20,000 miles (32,000 kilometers) from Earth’s surface, closer than the orbits of geostationary satellites.

“This is a natural experiment that will reveal how hazardous asteroids are put together, and there is no other way to get this information without vastly complicated spacecraft experiments,” Binzel said. “So this is a once-per-many-thousands-of-years experiment that nature is doing for us. We have to figure out how to watch.”

This week, the European Space Agency announced preliminary approval for a mission named RAMSES, which would launch in April 2028, a year ahead of the Apophis flyby, to rendezvous with the asteroid in early 2029. If ESA member states grant full approval for development next year, the RAMSES spacecraft will accompany Apophis throughout its flyby with Earth, collecting imagery and other scientific measurements before, during, and after closest approach.

The challenge of building and launching RAMSES in less than four years will serve as good practice for a potential future real-world scenario. If astronomers find an asteroid that’s really on a collision course with Earth, it might be necessary to respond quickly. Given enough time, space agencies could mount a reconnaissance mission, and if necessary, a mission to deflect or redirect the asteroid, likely using a technique similar to the one demonstrated by NASA’s DART mission in 2022.

“RAMSES will demonstrate that humankind can deploy a reconnaissance mission to rendezvous with an incoming asteroid in just a few years,” said Richard Moissl, head of ESA’s planetary defense office. “This type of mission is a cornerstone of humankind’s response to a hazardous asteroid. A reconnaissance mission would be launched first to analyze the incoming asteroid’s orbit and structure. The results would be used to determine how best to redirect the asteroid or to rule out non-impacts before an expensive deflector mission is developed.”

Shaking off the cobwebs

In order to make a 2028 launch feasible for RAMSES, ESA will reuse the design of a roughly half-ton spacecraft named Hera, which is scheduled for launch in October on a mission to survey the binary asteroid system targeted by the DART impact experiment in 2022. Copying the design of Hera will reduce the time needed to get RAMSES to the launch pad, ESA officials said.

“Hera demonstrated how ESA and European industry can meet strict deadlines and RAMSES will follow its example,” said Paolo Martino, who leads ESA’s development of RAMSES, which stands for the Rapid Apophis Mission for Space Safety.

ESA’s space safety board recently authorized preparatory work on the RAMSES mission using funds already in the agency’s budget. OHB, the German spacecraft manufacturer that is building Hera, will also lead the industrial team working on RAMSES. The cost of RAMSES will be “significantly lower” than the 300-million-euro ($380 million) cost of the Hera mission, Martino wrote in an email to Ars.

“There is still so much we have yet to learn about asteroids but, until now, we have had to travel deep into the Solar System to study them and perform experiments ourselves to interact with their surface,” said Patrick Michel, a planetary scientist at the French National Center for Scientific Research, and principal investigator on the Hera mission.

“For the first time ever, nature is bringing one to us and conducting the experiment itself,” Michel said in a press release. “All we need to do is watch as Apophis is stretched and squeezed by strong tidal forces that may trigger landslides and other disturbances and reveal new material from beneath the surface.”

Assuming it gets the final go-ahead next year, RAMSES will join NASA’s OSIRIS-APEX mission in exploring Apophis. NASA is steering the spacecraft, already in space after its use on the OSIRIS-REx asteroid sample return mission, toward a rendezvous with Apophis in 2029, but it won’t arrive at its new target until a few weeks after its close flyby of Earth. The intricacies of orbital mechanics prevent a rendezvous with Apophis any earlier.

Observations from OSIRIS-APEX, a larger spacecraft than RAMSES with a sophisticated suite of instruments, “will deliver a detailed look of what Apophis is like after the Earth encounter,” Binzel said. “But until we establish the state of Apophis before the Earth encounter, we have only one side of the picture.”

At its closest approach, asteroid Apophis will pass closer to Earth than the ring of geostationary satellites over the equator.

Scientists are also urging NASA to consider launching a pair of mothballed science probes on a trajectory to fly by Apophis some time before its April 2029 encounter with Earth. These two spacecraft were built for NASA’s Janus mission, which the agency canceled last year after the mission fell victim to launch delays with NASA’s larger Psyche asteroid explorer. The Janus probes were supposed to launch on the same rocket as Psyche, but problems with the Psyche mission forced a delay in the launch of more than one year.

Despite the delay, Psyche could still reach its destination in the asteroid belt, but the new launch trajectory meant Janus would be unable to visit the two binary asteroids scientists originally wanted to explore with the probes. After spending nearly $50 million on the mission, NASA put the twin Janus spacecraft, each about the size of a suitcase, into long-term storage.

At the most recent workshop on Apophis missions in April, scientists heard presentations on more than 20 concepts for spacecraft and instrument measurements at Apophis.

They included an idea from Blue Origin, Jeff Bezos’s space company, to use its Blue Ring space tug as a host platform for multiple instruments and landers that could descend to the surface of Apophis, assuming research institutions have enough time and money to develop their payloads. A startup named Exploration Laboratories has proposed partnering with NASA’s Jet Propulsion Laboratory on a small spacecraft mission to Apophis.

“At the conclusion of the workshop, it was my job to try to bring forward some consensus, because if we don’t have some consensus on our top priority, we may end up with nothing,” Binzel said. “The consensus recommendation for ESA was to move forward with RAMSES.”

Workshop participants also gently nudged NASA to use the Janus probes for a mission to Apophis. “Apophis is a mission in search of a spacecraft, and Janus is a spacecraft in search of a mission,” Binzel said. “As a matter of efficiency and basic logic, Janus to Apophis is the highest priority.”

A matter of money

But NASA’s science budget, and especially funding for its planetary science division, is under stress. Earlier this week, NASA canceled an already-built lunar rover named VIPER after spending $450 million on the mission. The mission had exceeded its original development cost by more than 30 percent, prompting an automatic cancellation review.

The funding level for NASA’s science mission directorate this year is nearly $500 million less than last year’s budget, and $900 million below the White House’s budget request for fiscal year 2024. Because of the tight budget, NASA officials have said, for now, they are not starting development of any new planetary science missions as they focus on finishing projects already in the pipeline, like the Europa Clipper mission, the Dragonfly rotorcraft to visit Saturn’s moon Titan, and the Near-Earth Object (NEO) Surveyor telescope to search for potentially hazardous asteroids.

These grainy radar views of asteroid Apophis were captured using radars at NASA's Goldstone Deep Space Communications Complex in California and Green Bank Telescope in West Virginia.

NASA has asked the Janus team to look at the feasibility of launching on the same rocket as NEO Surveyor in 2027, according to Dan Scheeres, the Janus principal investigator at the University of Colorado. With such a launch in 2027, Janus could capture the first up-close images of Apophis before RAMSES and OSIRIS-APEX get there.

“This is something that we’re currently presenting in some discussions with NASA, just to make sure that they understand what the possibilities are there,” Scheeres said in a meeting last week of the Small Bodies Advisory Group, which represents the asteroid science community.

“These spacecraft are capable of performing future scientific flyby missions to near-Earth asteroids,” Scheeres said. “Each spacecraft has a high-quality Malin visible imager and a thermal infrared imager. Each spacecraft has the ability to track and image an asteroid system through a close, fast flyby.”

“The scientific return from an Apophis flyby by Janus could be one of the best opportunities out there,” said Daniella DellaGiustina, lead scientist on the OSIRIS-APEX mission from the University of Arizona.

Binzel, who has led the charge for Apophis missions, said there is also some symbolic value to having a spacecraft escort the asteroid by Earth. Apophis will be visible in the skies over Europe and Africa when it is closest to our planet.

“When 2 billion people are watching this, they are going to ask, ‘What are our space agencies doing?’ And if the answer is, ‘Oh, we’ll be there. We’re getting there,’ which is OSIRIS-APEX, I don’t think that’s a very satisfying answer,” Binzel said.

“As the international space community, we want to demonstrate on April 13, 2029, that we are there and we are watching, and we are watching because we want to gain the most knowledge and the most understanding about these objects that is possible, because someday it could matter,” Binzel said. “Someday, our detailed knowledge of hazardous asteroids would be among the most important knowledge bases for the future of humanity.”
