Author name: Mike M.

4chan fined $26K for refusing to assess risks under UK Online Safety Act

The risk assessments also seem to unconstitutionally compel speech, they argued, forcing them to share information and “potentially incriminate themselves on demand.” That conflicts with 4chan and Kiwi Farms’ Fourth Amendment rights, as well as “the right against self-incrimination and the due process clause of the Fifth Amendment of the US Constitution,” the suit says.

Additionally, “the First Amendment protects Plaintiffs’ right to permit anonymous use of their platforms,” 4chan and Kiwi Farms argued, opposing Ofcom’s requirements to verify ages of users. (This may be their weakest argument as the US increasingly moves to embrace age gates.)

4chan is hoping a US district court will intervene and ban enforcement of the OSA, arguing that the US must act now to protect all US companies. Failing to act now could be a slippery slope, as the UK is supposedly targeting “the most well-known, but small and, financially speaking, defenseless platforms” in the US before mounting attacks to censor “larger American companies,” 4chan and Kiwi Farms argued.

Ofcom has until November 25 to respond to the lawsuit and has maintained that the OSA is not a censorship law.

On Monday, Britain’s technology secretary, Liz Kendall, called the OSA a “lifeline” meant to protect people across the UK “from the darkest corners of the Internet,” the Record reported.

“Services can no longer ignore illegal content, like encouraging self-harm or suicide, circulating online which can devastate young lives and leaves families shattered,” Kendall said. “This fine is a clear warning to those who fail to remove illegal content or protect children from harmful material.”

Whether 4chan and Kiwi Farms can win their fight to create a carveout in the OSA for American companies remains unclear, but the Federal Trade Commission agrees that the UK law is an overreach. In August, FTC Chair Andrew Ferguson warned US tech companies against complying with the OSA, claiming that censoring Americans to comply with UK law is a violation of the FTC Act, the Record reported.

“American consumers do not reasonably expect to be censored to appease a foreign power and may be deceived by such actions,” Ferguson told tech executives in a letter.

Another lawyer backing 4chan, Preston Byrne, seemed to echo Ferguson, telling the BBC, “American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail.”

Software update bricks some Jeep 4xe hybrids over the weekend

Owners of some Jeep Wrangler 4xe hybrids have been left stranded after installing an over-the-air software update this weekend. The automaker pushed out a telematics update for the Uconnect infotainment system that evidently wasn’t ready, resulting in cars losing power while driving and leaving their owners stranded.

Stranded Jeep owners have been detailing their experiences in forum and Reddit posts, as well as on YouTube. The buggy update doesn’t appear to brick the car immediately. Instead, the failure appears to occur while driving—a far more serious problem. For some, this happened close to home and at low speed, but others claim to have experienced a powertrain failure at highway speeds.

Jeep pulled the update after reports of problems, but the software had already downloaded to many owners’ cars by then. A member of Stellantis’ social engagement team told 4xe owners at a Jeep forum to ignore the update pop-up if they haven’t installed it yet.

Owners were also advised to avoid using either hybrid or electric modes if they had updated their 4xe and not already suffered a powertrain failure. Yesterday, Jeep pushed out a fix.

As CrowdStrike showed last year, Friday afternoons are a bad time to push out a software update. Now Stellantis has learned that lesson, too. Ars has reached out to Stellantis, and we’ll update this post if we get a reply.

Marvel gets meta with Wonder Man teaser

Marvel Studios has dropped the first teaser for Wonder Man, an eight-episode miniseries slated for a January release, ahead of its panel at New York Comic Con this weekend.

Part of the MCU’s Phase Six, the miniseries was created by Destin Daniel Cretton (Shang-Chi and the Legend of the Ten Rings) and Andrew Guest (Hawkeye), with Guest serving as showrunner. It has been in development since 2022.

The comic book version of the character is the son of a rich industrialist who inherits the family munitions factory but is being crushed by the competition: Stark Industries. Baron Zemo (Falcon and the Winter Soldier) then recruits him to infiltrate and betray the Avengers, giving him super powers (“ionic energy”) via a special serum. He eventually becomes a superhero and Avengers ally, helping them take on Doctor Doom, among other exploits. Since we know Doctor Doom is the Big Bad of the upcoming two new Avengers movies, a Wonder Man miniseries makes sense.

In the new miniseries, Yahya Abdul-Mateen II stars as Simon Williams, aka Wonder Man, an actor and stunt person with actual superpowers who decides to audition for the lead role in a superhero TV series—a reboot of an earlier Wonder Man incarnation. Demetrius Grosse plays Simon’s brother, Eric, aka Grim Reaper; Ed Harris plays Simon’s agent, Neal Saroyan; and Arian Moayed plays P. Cleary, an agent with the Department of Damage Control. Lauren Glazier, Josh Gad, Byron Bowers, Bechir Sylvain, and Manny McCord will also appear in as-yet-undisclosed roles.

“Extremely angry” Trump threatens “massive” tariff on all Chinese exports

The chairman of the House of Representatives’ Select Committee on the Chinese Communist Party (CCP), John Moolenaar (R-Mich.), issued a statement, suggesting that, unlike Trump, he’d seen China’s rare earths move coming. He pushed Trump to interpret China’s export controls as “an economic declaration of war against the United States and a slap in the face to President Trump.”

“China has fired a loaded gun at the American economy, seeking to cut off critical minerals used to make the semiconductors that power the American military, economy, and devices we use every day including cars, phones, computers, and TVs,” Moolenaar said. “Every American will be negatively affected by China’s action, and that’s why we must address America’s vulnerabilities and build our own leverage against China.”

To strike back forcefully, Moolenaar suggested passing a law he sponsored that he said would “end preferential trade treatment for China, build a resilient resource reserve of critical minerals, secure American research and campuses from Chinese influence, and strangle China’s technology sector with export controls instead of selling it advanced chips.”

Moolenaar also emphasized steps he recommended back in September that he claimed Trump could take to “create real leverage with China” in the face of its stranglehold on rare earths.

Those included “restricting or suspending Chinese airline landing rights in the US,” “reviewing export control policies governing the sale of commercial aircraft, parts, and maintenance services to China,” and “restricting outbound investment in China’s aviation sector in coordination with key allies.”

“These steps would send a clear message to Beijing that it cannot choke off critical supplies to our defense industries without consequences to its own strategic sectors,” Moolenaar wrote in his September letter to Trump. “By acting together, the US and its allies can strengthen our resilience, reinforce solidarity, and create real leverage with China.”

AMD and Sony’s PS6 chipset aims to rethink the current graphics pipeline

It feels like it was just yesterday that Sony hardware architect Mark Cerny was first teasing Sony’s “PS4 successor” and its “enhanced ray-tracing capabilities” powered by new AMD chips. Now that we’re nearly five full years into the PS5 era, it’s time for Sony and AMD to start teasing the new chips that will power what Cerny calls “a future console in a few years’ time.”

In a quick nine-minute video posted Thursday, Cerny sat down with Jack Huynh, the senior VP and general manager of AMD’s Computing and Graphics Group, to talk about “Project Amethyst,” a co-engineering effort between both companies that was also teased back in July. And while that Project Amethyst hardware currently only exists in the form of a simulation, Cerny said that the “results are quite promising” for a project that’s still in the “early days.”

Mo’ ML, fewer problems?

Project Amethyst is focused on going beyond traditional rasterization techniques that don’t scale well when you try to “brute force that with raw power alone,” Huynh said in the video. Instead, the new architecture is focused on more efficient running of the kinds of machine-learning-based neural networks behind AMD’s FSR upscaling technology and Sony’s similar PSSR system.

From the same source. Two branches. One vision.

My good friend and fellow gamer @cerny and I recently reflected on our shared journey — symbolized by these two pieces of amethyst, split from the same stone.

Project Amethyst is a co-engineering effort between @PlayStation and… pic.twitter.com/De9HWV3Ub2

— Jack Huynh (@JackMHuynh) July 1, 2025

While that kind of upscaling currently helps let GPUs pump out 4K graphics in real time, Cerny said that the “nature of the GPU fights us here,” requiring calculations to be broken up into subproblems to be handled in a somewhat inefficient parallel process by the GPU’s individual compute units.

To get around this issue, Project Amethyst uses “neural arrays” that let compute units share data and process problems like a “single focused AI engine,” Cerny said. While the entire GPU won’t be connected in this manner, connecting small sets of compute units like this allows for more scalable shader engines that can “process a large chunk of the screen in one go,” Cerny said. That means Project Amethyst will let “more and more of what you see on screen… be touched or enhanced by ML,” Huynh added.

“Like putting on glasses for the first time”—how AI improves earthquake detection


AI is “comically good” at detecting small earthquakes—here’s why that matters.

Credit: Aurich Lawson | Getty Images

On January 1, 2008, at 1:59 am in Calipatria, California, an earthquake happened. You haven’t heard of this earthquake; even if you had been living in Calipatria, you wouldn’t have felt anything. It was magnitude -0.53, about the same amount of shaking as a truck passing by. Still, this earthquake is notable, not because it was large but because it was small—and yet we know about it.

Over the past seven years, AI tools based on computer imaging have almost completely automated one of the fundamental tasks of seismology: detecting earthquakes. What used to be the task of human analysts—and later, simpler computer programs—can now be done automatically and quickly by machine-learning tools.

These machine-learning tools can detect smaller earthquakes than human analysts, especially in noisy environments like cities. Earthquakes give valuable information about the composition of the Earth and what hazards might occur in the future.

“In the best-case scenario, when you adopt these new techniques, even on the same old data, it’s kind of like putting on glasses for the first time, and you can see the leaves on the trees,” said Kyle Bradley, co-author of the Earthquake Insights newsletter.

I talked with several earthquake scientists, and they all agreed that machine-learning methods have replaced humans for the better in these specific tasks.

“It’s really remarkable,” Judith Hubbard, a Cornell University professor and Bradley’s co-author, told me.

Less certain is what comes next. Earthquake detection is a fundamental part of seismology, but there are many other data processing tasks that have yet to be disrupted. The biggest potential impacts, all the way to earthquake forecasting, haven’t materialized yet.

“It really was a revolution,” said Joe Byrnes, a professor at the University of Texas at Dallas. “But the revolution is ongoing.”

When an earthquake happens in one place, the shaking passes through the ground, similar to how sound waves pass through the air. In both cases, it’s possible to draw inferences about the materials the waves pass through.

Imagine tapping a wall to figure out if it’s hollow. Because a solid wall vibrates differently than a hollow wall, you can figure out the structure by sound.

With earthquakes, this same principle holds. Seismic waves pass through different materials (rock, oil, magma, etc.) differently, and scientists use these vibrations to image the Earth’s interior.

The main tool that scientists traditionally use is a seismometer. These record the movement of the Earth in three directions: up–down, north–south, and east–west. If an earthquake happens, seismometers can measure the shaking in that particular location.

An old-fashioned physical seismometer. Today, seismometers record data digitally. Credit: Yamaguchi先生 on Wikimedia CC BY-SA 3.0

Scientists then process raw seismometer information to identify earthquakes.

Earthquakes produce multiple types of shaking, which travel at different speeds. Two types, primary (P) waves and secondary (S) waves, are particularly important, and scientists like to identify the start of each of these phases.

Before good algorithms, earthquake cataloging had to happen by hand. Byrnes said that “traditionally, something like the lab at the United States Geological Survey would have an army of mostly undergraduate students or interns looking at seismograms.”

However, there are only so many earthquakes you can find and classify manually. Creating algorithms to effectively find and process earthquakes has long been a priority in the field—especially since the arrival of computers in the early 1950s.

“The field of seismology historically has always advanced as computing has advanced,” Bradley told me.

There’s a big challenge with traditional algorithms, though: They can’t easily find smaller quakes, especially in noisy environments.

Composite seismogram of common events. Note how each event has a slightly different shape. Credit: EarthScope Consortium CC BY 4.0

As we see in the seismogram above, many different events can cause seismic signals. If a method is too sensitive, it risks falsely detecting events as earthquakes. The problem is especially bad in cities, where the constant hum of traffic and buildings can drown out small earthquakes.

However, earthquakes have a characteristic “shape.” The magnitude 7.7 earthquake above looks quite different from the helicopter landing, for instance.

So one idea scientists had was to make templates from human-labeled datasets. If a new waveform correlates closely with an existing template, it’s almost certainly an earthquake.

Template matching works very well if you have enough human-labeled examples. In 2019, Zach Ross’ lab at Caltech used template matching to find 10 times as many earthquakes in Southern California as had previously been known, including the earthquake at the start of this story. Almost all of the 1.6 million new quakes they found were very small, magnitude 1 and below.

If you don’t have an extensive pre-existing dataset of templates, however, you can’t easily apply template matching. That isn’t a problem in Southern California—which already had a basically complete record of earthquakes down to magnitude 1.7—but it’s a challenge elsewhere.

Also, template matching is computationally expensive. Creating a Southern California quake dataset using template matching took 200 Nvidia P100 GPUs running for days on end.

There had to be a better way.

AI detection models solve all of these problems:

  • They are faster than template matching.

  • Because AI detection models are very small (around 350,000 parameters, compared to billions in LLMs like GPT-4), they can be run on consumer CPUs.

  • AI models generalize well to regions not represented in the original dataset.

As an added bonus, AI models can give better information about when the different types of earthquake shaking arrive. Timing the arrivals of the two most important waves—P and S waves—is called phase picking. It allows scientists to draw inferences about the structure of the quake. AI models can do this alongside earthquake detection.

The basic task of earthquake detection (and phase picking) looks like this:

Cropped figure from Earthquake Transformer—an attentive deep-learning model for simultaneous earthquake detection and phase picking. Credit: Nature Communications

The first three rows represent different directions of vibration (east–west, north–south, and up–down respectively). Given these three dimensions of vibration, can we determine if an earthquake occurred, and if so, when it started?

We want to detect the initial P wave, which arrives directly from the site of the earthquake. But this can be tricky because echoes of the P wave may get reflected off other rock layers and arrive later, making the waveform more complicated.

Ideally, then, our model outputs three things at every time step in the sample:

  1. The probability that an earthquake is occurring at that moment.

  2. The probability that the first P wave arrives at that moment.

  3. The probability that the first S wave arrives at that moment.

We see all three outputs in the fourth row: the detection in green, the P wave arrival in blue, and the S wave arrival in red. (There are two earthquakes in this sample.)
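Each of those per-time-step probability traces is then reduced to a discrete “pick”: the sample where the phase most likely arrived. A minimal sketch of that last step, with an invented threshold and made-up probabilities purely for illustration:

```python
def pick_arrival(probs, threshold=0.5):
    """Return the index (time step) of the highest probability above
    the threshold, or None if the phase was never detected."""
    best_i, best_p = None, threshold
    for i, p in enumerate(probs):
        if p > best_p:
            best_i, best_p = i, p
    return best_i

# Hypothetical P-wave probabilities at six consecutive time steps.
p_wave_probs = [0.01, 0.02, 0.10, 0.85, 0.30, 0.05]
print(pick_arrival(p_wave_probs))  # → 3
print(pick_arrival([0.1, 0.2, 0.3]))  # → None (no confident pick)
```

Real pickers add refinements (separate thresholds per phase, handling multiple events in one window, as in the two-earthquake sample above), but the principle is the same: probabilities in, arrival times out.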

To train an AI model, scientists take large amounts of labeled data, like what’s above, and do supervised training. I’ll describe one of the most used models: Earthquake Transformer, which was developed around 2020 by a Stanford University team led by S. Mostafa Mousavi, who later became a Harvard professor.

Like many earthquake detection models, Earthquake Transformer adapts ideas from image classification. Readers may be familiar with AlexNet, a famous image-recognition model that kicked off the deep-learning boom in 2012.

AlexNet used convolutions, a neural network architecture that’s based on the idea that pixels that are physically close together are more likely to be related. The first convolutional layer of AlexNet broke an image down into small chunks—11 pixels on a side—and classified each chunk based on the presence of simple features like edges or gradients.

The next layer took the first layer’s classifications as input and checked for higher-level concepts such as textures or simple shapes.

Each convolutional layer analyzed a larger portion of the image and operated at a higher level of abstraction. By the final layers, the network was looking at the entire image and identifying objects like “mushroom” and “container ship.”

Images are two-dimensional, so AlexNet is based on two-dimensional convolutions. By contrast, seismograph data is one-dimensional, so Earthquake Transformer uses one-dimensional convolutions over the time dimension. The first layer analyzes vibration data in 0.1-second chunks, while later layers identify patterns over progressively longer time periods.
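That first layer can be sketched in a few lines of NumPy: each filter spans all three channels and a short window of time, and sliding it along the trace produces a new feature channel. The filter count, window length, and random weights below are invented for illustration; a real model learns its kernels from labeled data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three-channel waveform, shape (channels, time samples):
# east-west, north-south, and up-down traces.
waveform = rng.standard_normal((3, 1000))

def conv1d(x, kernels, stride=1):
    """Minimal 1-D convolution. kernels has shape
    (out_channels, in_channels, width)."""
    out_ch, in_ch, width = kernels.shape
    n_steps = (x.shape[1] - width) // stride + 1
    out = np.zeros((out_ch, n_steps))
    for o in range(out_ch):
        for t in range(n_steps):
            # Each output value summarizes one short window
            # across all input channels.
            window = x[:, t * stride:t * stride + width]
            out[o, t] = np.sum(window * kernels[o])
    return out

# First layer: 8 filters, each spanning 10 samples (~0.1 s at 100 Hz)
# across all three channels of motion.
kernels = rng.standard_normal((8, 3, 10))
features = conv1d(waveform, kernels)
print(features.shape)  # → (8, 991)
```

Stacking such layers (with downsampling between them) is what lets later layers see progressively longer stretches of time, just as AlexNet’s deeper layers see larger patches of the image.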

It’s difficult to say what exact patterns the earthquake model is picking out, but we can analogize this to a hypothetical audio transcription model using one-dimensional convolutions. That model might first identify consonants, then syllables, then words, then sentences over increasing time scales.

Earthquake Transformer converts raw waveform data into a collection of high-level representations that indicate the likelihood of earthquakes and other seismologically significant events. This is followed by a series of deconvolution layers that pinpoint exactly when an earthquake—and its all-important P and S waves—occurred.

The model also uses an attention layer in the middle of the model to mix information between different parts of the time series. The attention mechanism is most famous in large language models, where it helps pass information between words. It plays a similar role in seismographic detection. Earthquake seismograms have a general structure: P waves followed by S waves followed by other types of shaking. So if a segment looks like the start of a P wave, the attention mechanism helps it check that it fits into a broader earthquake pattern.

All of the Earthquake Transformer’s components are standard designs from the neural network literature. Other successful detection models, like PhaseNet, are even simpler. PhaseNet uses only one-dimensional convolutions to pick the arrival times of earthquake waves. There are no attention layers.

Generally, there hasn’t been “much need to invent new architectures for seismology,” according to Byrnes. The techniques derived from image processing have been sufficient.

What made these generic architectures work so well then? Data. Lots of it.

Ars has previously reported on how the introduction of ImageNet, an image recognition benchmark, helped spark the deep learning boom. Large, publicly available earthquake datasets have played a similar role in seismology.

Earthquake Transformer was trained using the Stanford Earthquake Dataset (STEAD), which contains 1.2 million human-labeled segments of seismogram data from around the world. (The paper for STEAD explicitly mentions ImageNet as an inspiration.) Other models, like PhaseNet, were also trained on hundreds of thousands or millions of labeled segments.

All recorded earthquakes in the Stanford Earthquake Dataset. Credit: IEEE (CC BY 4.0)

The combination of the data and the architecture just works. The current models are “comically good” at identifying and classifying earthquakes, according to Byrnes. Typically, machine-learning methods find 10 or more times the quakes that were previously identified in an area. You can see this directly in an Italian earthquake catalog:

From Machine learning and earthquake forecasting—next steps by Beroza et al. Credit: Nature Communications (CC-BY 4.0)

AI tools won’t necessarily detect more earthquakes than template matching. But AI-based techniques are much less compute- and labor-intensive, making them more accessible to the average research project and easier to apply in regions around the world.

All in all, these machine-learning models are so good that they’ve almost completely supplanted traditional methods for detecting and phase-picking earthquakes, especially for smaller magnitudes.

The holy grail of earthquake science is earthquake prediction. For instance, scientists know that a large quake will happen near Seattle but have little ability to know whether it will happen tomorrow or in a hundred years. It would be helpful if we could predict earthquakes precisely enough to allow people in affected areas to evacuate.

You might think AI tools would help predict earthquakes, but that doesn’t seem to have happened yet.

The applications are more technical and less flashy, said Cornell’s Judith Hubbard.

Better AI models have given seismologists much more comprehensive earthquake catalogs, which have unlocked “a lot of different techniques,” Bradley said.

One of the coolest applications is in understanding and imaging volcanoes. Volcanic activity produces a large number of small earthquakes, whose locations help scientists understand the structure of the magma system. In a 2022 paper, John Wilding and co-authors used a large AI-generated earthquake catalog to create this incredible image of the structure of the Hawaiian volcanic system.

Each dot represents an individual earthquake. Credit: Wilding et al., The magmatic web beneath Hawai‘i.

They provided direct evidence of a previously hypothesized magma connection between the deep Pāhala sill complex and Mauna Loa’s shallow volcanic structure. You can see this in the image with the arrow labeled as Pāhala-Mauna Loa seismicity band. The authors were also able to clarify the structure of the Pāhala sill complex into discrete sheets of magma. This level of detail could potentially facilitate better real-time monitoring of earthquakes and more accurate eruption forecasting.

Another promising area is lowering the cost of dealing with huge datasets. Distributed Acoustic Sensing (DAS) is a powerful technique that uses fiber-optic cables to measure seismic activity across the entire length of the cable. A single DAS array can produce “hundreds of gigabytes of data” a day, according to Jiaxuan Li, a professor at the University of Houston. That much data can produce extremely high-resolution datasets—enough to pick out individual footsteps.

AI tools make it possible to very accurately time earthquakes in DAS data. Before the introduction of AI techniques for phase picking in DAS data, Li and some of his collaborators attempted to use traditional techniques. While these “work roughly,” they weren’t accurate enough for their downstream analysis. Without AI, much of his work would have been “much harder,” he told me.

Li is also optimistic that AI tools will be able to help him isolate “new types of signals” in the rich DAS data in the future.

Not all AI techniques have paid off

As in many other scientific fields, seismologists face some pressure to adopt AI methods, whether or not they are relevant to their research.

“The schools want you to put the word AI in front of everything,” Byrnes said. “It’s a little out of control.”

This can lead to papers that are technically sound but practically useless. Hubbard and Bradley told me that they’ve seen a lot of papers based on AI techniques that “reveal a fundamental misunderstanding of how earthquakes work.”

They pointed out that graduate students can feel pressure to specialize in AI methods at the cost of learning less about the fundamentals of the scientific field. They fear that if this type of AI-driven research becomes entrenched, older methods will get “out-competed by a kind of meaninglessness.”

While these are real issues, and ones Understanding AI has reported on before, I don’t think they detract from the success of AI earthquake detection. In the last five years, an AI-based workflow has almost completely replaced one of the fundamental tasks in seismology for the better.

That’s pretty cool.

Kai Williams is a reporter for Understanding AI, a Substack newsletter founded by Ars Technica alum Timothy B. Lee. His work is supported by a Tarbell Fellowship. Subscribe to Understanding AI to get more from Tim and Kai.

It’s Prime Day 2025 part two, and here are more of the best deals we could find

Updated deals on keyboards, laptops, chargers, cameras, and lots of other stuff!

Optimus Prime, the patron saint of Prime Day, observed in midtown Manhattan in June 2023. Credit: Raymond Hall / Getty Images

Portable power stations

Streaming gear

Cameras

MacBooks

Other laptops

Keyboards and mice

Monitors

Android phones

Indoor security cameras

Ars Technica may earn compensation for sales from links on this post through affiliate programs.

Deloitte will refund Australian government for AI hallucination-filled report

The Australian Financial Review reports that Deloitte Australia will offer the Australian government a partial refund for a report that was littered with AI-hallucinated quotes and references to nonexistent research.

Deloitte’s “Targeted Compliance Framework Assurance Review” was finalized in July and published by Australia’s Department of Employment and Workplace Relations (DEWR) in August (Internet Archive version of the original). The report, which cost Australian taxpayers nearly $440,000 AUD (about $290,000 USD), focuses on the technical framework the government uses to automate penalties under the country’s welfare system.

Shortly after the report was published, though, Sydney University Deputy Director of Health Law Chris Rudge noticed citations to multiple papers and publications that did not exist. That included multiple references to nonexistent reports by Lisa Burton Crawford, a real professor at the University of Sydney law school.

“It is concerning to see research attributed to me in this way,” Crawford told the AFR in August. “I would like to see an explanation from Deloitte as to how the citations were generated.”

“A small number of corrections”

Deloitte and the DEWR buried that explanation in an updated version of the original report published Friday “to address a small number of corrections to references and footnotes,” according to the DEWR website. On page 58 of that 273-page updated report, Deloitte added a reference to “a generative AI large language model (Azure OpenAI GPT-4o) based tool chain” that was used as part of the technical workstream to help “[assess] whether system code state can be mapped to business requirements and compliance needs.”

OpenAI, Jony Ive struggle with technical details on secretive new AI gadget

OpenAI overtook Elon Musk’s SpaceX to become the world’s most valuable private company this week, after a deal that valued it at $500 billion. One of the ways the ChatGPT maker is seeking to justify the price tag is a push into hardware.

The goal is to improve the “smart speakers” of the past decade, such as Amazon’s Echo speaker and its Alexa digital assistant, which are generally used for a limited set of functions such as listening to music and setting kitchen timers.

OpenAI and Ive are seeking to build a more powerful and useful machine. But two people familiar with the project said that settling on the device’s “voice” and its mannerisms were a challenge.

One issue is ensuring the device only chimes in when useful, preventing it from talking too much or not knowing when to finish the conversation—an ongoing issue with ChatGPT.



“The concept is that you should have a friend who’s a computer who isn’t your weird AI girlfriend… like [Apple’s digital voice assistant] Siri but better,” said one person who was briefed on the plans. OpenAI was looking for “ways for it to be accessible but not intrusive.”

“Model personality is a hard thing to balance,” said another person close to the project. “It can’t be too sycophantic, not too direct, helpful, but doesn’t keep talking in a feedback loop.”

OpenAI’s device will be entering a difficult market. Friend, an AI companion worn as a pendant around your neck, has been criticized for being “creepy” and having a “snarky” personality. An AI pin made by Humane, a company that OpenAI chief executive Sam Altman personally invested in, has been scrapped.

Still, OpenAI has been on a hiring spree to build its hardware business. Its acquisition of io brought in more than 20 former Apple hardware employees poached by Ive from his alma mater. It has also recruited at least a dozen other Apple device experts this year, according to LinkedIn accounts.

It has similarly poached members of Meta’s staff working on the Big Tech group’s Quest headset and smart glasses.

OpenAI is also working with Chinese contract manufacturers, including Luxshare, to create its first device, according to two people familiar with the development, which was first reported by The Information. The people added that the device might be assembled outside of China.

OpenAI and LoveFrom, Ive’s design group, declined to comment.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

ICE wants to build a 24/7 social media surveillance team

Together, these teams would operate as intelligence arms of ICE’s Enforcement and Removal Operations division, receiving tips and incoming cases, researching individuals online, and packaging the results into dossiers that field offices could use to plan arrests.

The scope of information contractors are expected to collect is broad. Draft instructions specify open-source intelligence: public posts, photos, and messages on platforms from Facebook to Reddit to TikTok. Analysts may also be tasked with checking more obscure or foreign-based sites, such as Russia’s VKontakte.

They would also be armed with powerful commercial databases such as LexisNexis Accurint and Thomson Reuters CLEAR, which knit together property records, phone bills, utilities, vehicle registrations, and other personal details into searchable files.

The plan calls for strict turnaround times. Urgent cases, such as suspected national security threats or people on ICE’s Top Ten Most Wanted list, must be researched within 30 minutes. High-priority cases get one hour; lower-priority leads must be completed within the workday. ICE expects at least three-quarters of all cases to meet those deadlines, with top contractors hitting closer to 95 percent.

The plan goes beyond staffing. ICE also wants algorithms, asking contractors to spell out how they might weave artificial intelligence into the hunt—a solicitation that mirrors other recent proposals. The agency has also set aside more than a million dollars a year to arm analysts with the latest surveillance tools.

ICE did not immediately respond to a request for comment.

Earlier this year, The Intercept revealed that ICE had floated plans for a system that could automatically scan social media for “negative sentiment” toward the agency and flag users thought to show a “proclivity for violence.” Procurement records previously reviewed by 404 Media identified software used by the agency to build dossiers on flagged individuals, compiling personal details, family links, and even using facial recognition to connect images across the web. Observers warned it was unclear how such technology could distinguish genuine threats from political speech.

Nearly 80% of Americans want Congress to extend ACA tax credits, poll finds

According to new polling data, nearly 80 percent of Americans support extending Affordable Care Act (ACA) enhanced premium tax credits, which are set to expire at the end of this year—and are at the center of a funding dispute that led to a shutdown of the federal government this week.

The poll, conducted by KFF and released Friday, found that 78 percent of Americans want the tax credits extended, including 92 percent of Democrats, 59 percent of Republicans—and even a majority (57 percent) of Republicans who identify as Donald Trump-aligned MAGA (Make America Great Again) supporters.

A separate analysis published by KFF earlier this week found that if the credits are not extended, monthly premiums for ACA Marketplace plans would more than double on average. Specifically, the current average premium of $888 would jump to $1,904 in 2026, a 114 percent increase.

Consequences

The polling released today found that, in addition to broad support for the credits, many Americans are unaware that they are in peril. About six in ten adults say they have heard “a little” (30 percent) or “nothing at all” (31 percent) about the credits expiring.

“There is a hot debate in Washington about the looming ACA premium hikes, but our poll shows that most people in the marketplaces don’t know about them yet and are in for a shock when they learn about them in November,” KFF President and CEO Drew Altman said in a statement.

Yet more concerning, the poll found that among people who buy their own insurance plans, 70 percent said they would face a significant disruption to their household finances if their premiums were to double. Furthermore, 42 percent said they would ultimately go without health insurance in such a case. Currently, over 24 million Americans get their insurance through the ACA Marketplace.

Rally Arcade Classics is a fun ’90s-throwback racing game

Over the years, racing sims have come a long way. Gaming PCs and consoles have become more powerful, physics and tire models have become more accurate, and after COVID, it seems like nearly everyone has a sim rig setup at home. Sim racing has even become an accepted route into the world of real-life motorsport (not to be confused with the Indy Racing League).

But what if you aren’t looking to become the next Max Verstappen? What if you miss the more carefree days of old, when neither the fidelity nor the stakes were quite so high? Then Rally Arcade Classics is worth a look.

Developed by NET2KGAMES, RAC can be thought of as a spiritual successor to legendary titles like Sega Rally and Colin McRae Rally. Forget about the Nürburgring or even street circuits laid out in famous cities you might have visited; instead, this game is about point-to-point racing against the clock—mostly—across landscapes that long-time World Rally Championship fans will remember.

Not a Focus but a Sufoc WRC, getting air in Finland. Credit: NET2KGAMES

There’s Finland, with plenty of fast dirt roads, complete with crests that will launch your car into the air. Or the dusty, sinewy mountain roads of Greece. Catalunya (in Spain) provides technical tarmac stages. And Monte Carlo combines tarmac, ice, snow, and challenging corners. But since this is rallying, each location is broken into a series of short stages. Oh, and some of them will be at night.

Then there are the cars. This is an indie game, not a AAA title, so there are no official OEM licenses here. But there are plenty of cars you’ll recognize from the 1970s, ’80s, and ’90s. These comprise a mix of front-, rear-, and all-wheel drive machinery, some of them road cars and others heavily modified for rallying. You start off in the slowest of these, the Kopper, an off-brand Mini Cooper; the real Cooper won a famous victory at the 1964 Monte Carlo Rally despite being many, many horsepower down on the mostly RWD cars it beat.

The models of the cars, while not Gran Turismo 7-level, are close enough that you don’t really notice that the Peugeot 205 is called the Paigot 5 or that the Golf GTI is now the Wolf. The Betta is a Lancia Delta Integrale, the Fourtro is an Audi Quattro, and the Selicka is the Toyota Celica, but I must admit I’m not quite sure why the Subaru Imprezas are called the Imperial R and the MR Bang STI—answers in the comments if you know, please.
