

Some of Apple’s last holdout accessories have switched from Lightning to USB-C

One of the last major holdouts against USB-C has finally loosened its grip. All the accessories that come with Apple’s newest iMac—the Magic Keyboard, Magic Mouse, and Magic Trackpad—now ship with USB-C charging and connection ports rather than the Lightning ports they have featured for nearly a decade.


“These accessories now come with USB-C ports, so users can charge all of their favorite devices with just a single cable,” Apple writes in announcing its new M4-powered iMac, in the way that only Apple can, suggesting that something already known to so many is, when brought into Apple’s loop, notable and new.

Apple’s shift from its own Lightning connector, in use since 2012, to USB-C was sparked by European Union policies enacted in 2022. Apple gradually implemented USB-C on other devices, like its iPad Pro and MacBooks, over time, but the iPhone 15’s USB-C port made the “switch” somewhat formal.

The iMac and its color-matched accessories kept with Lightning until today’s new release. The back of the iMac has long featured USB-C ports, but the accessories were charged with USB-C-to-Lightning cables. This leaves the iPhone SE and iPhone 14 as the only Lightning-equipped devices Apple still sells. Apple’s Vision Pro battery pack contains a Lightning-style connector, although not a true Lightning cable. The forthcoming iPhone SE will, given the need to sell it in Europe, almost certainly feature USB-C as well.

It has been a slow, brokered, and uneven path, but it’s getting to the point where a collection of good USB-C cables and charging bricks can power most of your computing devices… except for those with very specific charging demands, like a Raspberry Pi or the cheap or old stuff that still takes USB micro. And some things just refuse to give up barrel chargers, like certain enterprise laptops and network switches.

Regardless, it’s a big day for those who only want one kind of cable on their desk.



Don’t fall for AI scams cloning cops’ voices, police warn

AI is giving scammers a more convincing way to impersonate police, reports show.

Just last week, the Salt Lake City Police Department (SLCPD) warned of an email scam using AI to convincingly clone the voice of Police Chief Mike Brown.

A citizen tipped off cops after receiving a suspicious email that included a video of the police chief telling the recipient that they “owed the federal government nearly $100,000.”

To dupe their targets, the scammers cut together real footage from one of Brown’s prior TV interviews with AI-generated audio that SLCPD said “is clear and closely impersonates the voice of Chief Brown, which could lead community members to believe the message was legitimate.”

The FBI has warned for years of scammers attempting extortion by impersonating cops or government officials. But as AI voice-cloning technology has advanced, these scams could become much harder to detect, to the point where even the most forward-thinking companies like OpenAI have been hesitant to release the latest tech due to obvious concerns about potential abuse.

SLCPD noted that there were clues in the email impersonating its police chief that a tech-savvy citizen could have picked up on. A more careful listen reveals “the message had unnatural speech patterns, odd emphasis on certain words, and an inconsistent tone,” as well as “detectable acoustic edits from one sentence to the next.” And perhaps most glaringly, the scam email came from “a Google account” that “had the Salt Lake City Police Department’s name in it followed by a string of numbers,” instead of from the police department’s official email domain of “slc.gov.”

SLCPD isn’t the only police department dealing with AI cop impersonators. Tulsa had a similar problem this summer when scammers started calling residents using a convincing fake voice designed to sound like Tulsa police officer Eric Spradlin, Public Radio Tulsa reported. A software developer who received the call, Myles David, said he understood the AI risks today but that even he was “caught off guard” and had to call police to verify the call wasn’t real.



A how-to for ethical geoengineering research

Holistic climate justice: The guidelines recognize that geoengineering won’t affect just the people currently residing on Earth but future generations as well. Some methods, like stratospheric aerosols, don’t eliminate the risks caused by warming, but shift them onto future generations, who will face sudden and potentially dramatic warming if the geoengineering is ever stopped. Others may cause regional differences in either benefits or warming, shifting consequences to different populations.

Special attention should be paid to those who have historically been on the wrong side of environmental problems. And harms to nature need to be considered as well.

Inclusive public participation: The research shouldn’t be approached as simply a scientific process; instead, any affected communities should be included in the process, and informed consent should be obtained from them. There should be ongoing public engagement with those communities, and the research should adapt to their cultural values.

Transparency: The public needs to know who’s funding any geoengineering research, and whoever provides the money must not influence decisions regarding the design of the research. Those decisions, and the considerations behind them, should also be made clear to the public.

Informed governance: Any experiments have to conform to laws ranging from local to international. Any research programs should be approved by an independent body before any work starts. All the parties involved—and this could include the funders, the institutions, and outside contractors—should be held accountable to governments, public institutions, and those who will potentially be impacted by the work.

If you think this will make pursuing this research considerably more complicated, you are absolutely correct. But again, even tests of these approaches could have serious environmental consequences. And many of these things represent best practices for any research with potential public consequences; the fact that they haven’t always been pursued is not an excuse to continue to avoid doing them.



Fallout: London is a huge Fallout 4 mod that is now playable—and worth playing

The UK equivalent of a Pip-Boy 3000, which is nice to see after so many hours with the wrist-mounted one. Team FOLON

‘Ello, what’s all this, then?

Fallout: London takes place 160 years after the global nuclear war, 40 years before Fallout 3, and in a part of the world that is both remote from the series’ usual settings and largely untouched by official Fallout lore. That means a lot of the typical Fallout fare—Deathclaws, Super Mutants, the Pip-Boy 3000—is left out.

Or, rather, replaced with scores of new enemies, lore, companions, factions, and even some mechanics picked up from the modding scene (ladders!). It’s a kick to see the across-the-pond variants of wasteland stuff: tinned beans, medieval weapons, the Atta-Boy personal computer. There is at least one dog, a bulldog, and his name is Churchill.

As for the story, stop me if you’ve heard this one before: You, newly awakened from an underground chamber (not a Vault, though), enter a ruined London, one riven by factions with deep disagreements about how to move things forward. You’ll take up quests, pick sides, befriend or blast people, and do a lot of peeking into abandoned buildings, hoping to find that last screw you need for a shotgun modification.

London falling

When you first start Fallout: London, you’ll see a London that looks like, honestly, crap. Whatever London did to anger the nuke-having powers of the world, it got them good and mad, and parts of the city are very busted. The city’s predisposition for underground spaces has served it well, though, and you can often find yesteryear’s glory in a Tube tunnel, a bunker, or a basement.

As you move on, you’ll get the surge of seeing a part of London you remember, either from a visit or from media, and how it looks with a bit of char to it. The post-war inhabitants have also made their own spaces inside the ruins, some more sophisticated and welcoming than others. Everywhere you look, you can see that familiar Fallout aesthetic—1950s atomic-minded culture persisting until its downfall—shifted into Greenwich Mean Time.



What I learned from 3 years of running Windows 11 on “unsupported” PCs


where we’re going, we don’t need support

When your old PC goes over the Windows 10 update cliff, can Windows 11 save it?

Credit: Andrew Cunningham


The Windows 10 update cliff is coming in October 2025. We’ve explained why that’s a big deal, and we have a comprehensive guide to updating to Windows 11 (recently updated to account for changes in Windows 11 24H2) so you can keep getting security updates, whether you’re on an officially supported PC or not.

But this is more than just a theoretical exercise; I’ve been using Windows 11 on some kind of “unsupported” system practically since it launched to stay abreast of what the experience is actually like and to keep tabs on whether Microsoft would make good on its threats to pull support from these systems at any time.

Now that we’re three years in, and since I’ve been using Windows 11 24H2 on a 2012-era desktop and laptop as my primary work machines on and off for a few months now, I can paint a pretty complete picture of what Windows 11 is like on these PCs. As the Windows 10 update cliff approaches, it’s worth asking: Is running “unsupported” Windows 11 a good way to keep an older but still functional machine running, especially for non-technical users?

My hardware

I’ve run Windows 11 on a fair amount of old hardware, including PCs as old as a late XP-era Core 2 Duo Dell Inspiron desktop. For the first couple of years, I ran it most commonly on an old Dell XPS 13 9333 with a Core i5-4250U and 8GB of RAM and a Dell Latitude 3379 2-in-1 that just barely falls short of the official requirements (both systems are also pressed into service for ChromeOS Flex testing periodically).

But I’ve been running the 24H2 update as my main work OS on two machines. The first is a Dell Optiplex 3010 desktop with a 3rd-generation Core i5-3xxx CPU, which had been my mother’s main desktop until I upgraded it a year or so ago. The second is a Lenovo ThinkPad X230 with an i5-3320M inside, a little brick of a machine that I picked up for next to nothing on Goodwill’s online auction site.

Credit: Andrew Cunningham

Both systems, and the desktop in particular, have been upgraded quite a bit; the laptop has 8GB of RAM while the desktop has 16GB, both are running SATA SSDs, and the desktop has a low-profile AMD Radeon Pro WX2100 in it, a cheap way to get support for running multiple 4K monitors. The desktop also has USB Wi-Fi and Bluetooth dongles and an internal expansion card that provides a pair of USB 3.0 Type-A ports and a single USB-C port. Systems of this vintage are pretty easy to refurbish since components are old enough that they’ve gone way down in price but not so old that they’ve become rare collectors’ items. It’s another way to get a usable computer for $100—or for free if you know where to look.

And these systems were meant to be maintained and upgraded. It’s one of the beautiful things about a standardized PC platform, though these days we’ve given a lot of that flexibility up in favor of smaller, thinner devices and larger batteries. It is possible to upgrade and refurbish these 12-year-old computers to the point that they run modern operating systems well because they were designed to leave room for that possibility.

But no matter how much you upgrade any of these PCs or how well you maintain them, they will never meet Windows 11’s official requirements. That’s the problem.

Using it feels pretty normal

Once it’s installed, Windows 11 is mostly Windows 11, whether your PC is officially supported or not. Credit: Andrew Cunningham

Depending on how you do it, it can be a minor pain to get Windows 11 up and running on a computer that doesn’t natively support it. But once the OS is installed, Microsoft’s early warnings about instability and the possible ending of updates have proven to be mostly unfounded.

A Windows 11 PC will still grab all of the same drivers from Windows Update as a Windows 10 PC would, and any post-Vista drivers have at least a chance of working in Windows 11 as long as they’re 64-bit. But Windows 10 was widely supported on hardware going back to the turn of the 2010s. If it shipped with Windows 8 or even Windows 7, your hardware should mostly work, give or take the occasional edge case. I’ve yet to have a catastrophic crash or software failure on any of the systems I’m using, and they’re all from the 2012–2016 era.

Once Windows 11 is installed, routine software updates and app updates from the Microsoft Store are downloaded and installed on my “unsupported” systems the same way they are on my “supported” ones. You don’t have to think about how you’re running an unsupported operating system; Windows remains Windows. That’s the big takeaway here—if you’re happy with the performance of your unsupported PC under Windows 10, nothing about the way Windows 11 runs will give you problems.

…Until you want to install a big update

There’s one exception for the PCs I’ve had running unsupported Windows 11 installs in the long term: They don’t want to automatically download and install the yearly feature updates for Windows. So a 22H2 install will keep downloading and installing updates for as long as they’re offered, but it won’t offer to update itself to versions 23H2 or 24H2.

This behavior may be targeted specifically at unsupported PCs, or it may just be a byproduct of how Microsoft rolls out these yearly updates (if you have a supported system with a known hardware or driver issue, for example, Microsoft will withhold these updates until the issues are resolved). Either way, it’s an irritating thing to have to deal with every year or every other year—Microsoft supports most of its annual updates for two years after they’re released to the public. So 23H2 and 24H2 are currently supported, while 22H2 and 21H2 (the first release of Windows 11) are at the end of the line.

This essentially means you’ll need to repeat the steps for doing a new unsupported Windows 11 install every time you want to upgrade. As we detail in our guide, that’s relatively simple if your PC has Secure Boot and a TPM but doesn’t have a supported processor. Make a simple registry tweak, download the Installation Assistant or an ISO file to run Setup from, and the Windows 11 installer will let you off with a warning and then proceed normally, leaving your files and apps in place.
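For reference, the registry tweak referred to here is the one Microsoft itself has documented for systems with Secure Boot and a TPM (at least TPM 1.2) but an unsupported CPU. As a sketch, assuming that documented `AllowUpgradesWithUnsupportedTPMOrCPU` value, a .reg file that applies it would look like this:

```
Windows Registry Editor Version 5.00

; Tells Windows 11 Setup to permit an in-place upgrade on a system with an
; unsupported CPU or only TPM 1.2. Secure Boot and a TPM are still required.
[HKEY_LOCAL_MACHINE\SYSTEM\Setup\MoSetup]
"AllowUpgradesWithUnsupportedTPMOrCPU"=dword:00000001
```

Double-clicking the saved .reg file merges it into the registry; after that, running Setup from the Installation Assistant or a mounted ISO proceeds past the compatibility check with only a warning screen.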

Without Secure Boot or a TPM, though, installing these upgrades in place is more difficult. Trying to run an upgrade install from within Windows just means the system will yell at you about the things your PC is missing. Booting from a USB drive that has been doctored to overlook the requirements will help you do a clean install, but it will delete all your existing files and apps.

If you’re running into this problem and still want to try an upgrade install, there’s one more workaround you can try.

  1. Download an ISO for the version of Windows 11 you want to install, and then either make a USB install drive or simply mount the ISO file in Windows by double-clicking it.
  2. Open a Command Prompt window as Administrator and navigate to whatever drive letter the Windows install media is using. Usually that will be D: or E:, depending on what drives you have installed in your system; type the drive letter and colon into the command prompt window and press Enter.
  3. Type setup.exe /product server and press Enter to launch the upgrade installer.

You’ll notice that the subsequent setup screens all say they’re “installing Windows Server” rather than the regular version of Windows, but that’s not actually true—the Windows image that comes with these ISO files is still regular old Windows 11, and that’s what the installer is using to upgrade your system. It’s just running a Windows Server-branded version of the installer that apparently isn’t making the same stringent hardware checks that the normal Windows 11 installer is.

This workaround allowed me to do an in-place upgrade of Windows 11 24H2 onto a Windows 10 22H2 PC with no TPM enabled. It should also work for upgrading an older version of Windows 11 to 24H2.

Older PCs are still very useful!

This 2012-era desktop can be outfitted with 16 GB of memory and a GPU that can drive multiple 4K displays, things that wouldn’t have been common when it was manufactured. But no matter how much you upgrade it, Windows 11 will never officially support it. Credit: Andrew Cunningham

Having to go out of your way to keep Windows 11 up to date on an unsupported PC is a fairly major pain. But unless your hardware is exceptionally wretched (I wouldn’t recommend trying to get by with less than 4GB of RAM at an absolute bare minimum, or with a spinning hard drive, or with an aging low-end N-series Pentium or Celeron chip), you’ll find that decade-old laptops and desktops can still hold up pretty well when you’re sticking to light or medium-sized workloads.

I haven’t found this surprising. Major high-end CPU performance improvements have come in fits and starts over the last decade, and today’s (Windows 11-supported) barebones bargain basement Intel N100 PCs perform a lot like decade-old mainstream quad-core desktop processors.

With its RAM and GPU updates, my Optiplex 3010 and its Core i5 worked pretty well with my normal dual-4K desktop monitor setup (it couldn’t drive my Gigabyte M28U at higher than 60 Hz, but that’s a GPU limitation). Yes, I could feel the difference between an aging Core i5-3475S and the Core i7-12700 in my regular Windows desktop, and it didn’t take much at all for CPU usage to spike to 100 percent and stay there, always a sign that your CPU is holding you back. But once apps were loaded, they felt responsive, and I had absolutely no issues writing, recording and editing audio, and working in Affinity Photo on the odd image or two.

I wouldn’t recommend using this system to play games, nor would I recommend overpaying for a brand-new GPU to pair with an older quad-core CPU like this one (I chose the GPU I did specifically for its display outputs, not its gaming prowess). If you wanted to, you could still probably get respectable midrange gaming performance out of a 4th-, 6th-, or 7th-gen Intel Core i5 or i7 or a first-generation AMD Ryzen CPU paired with a GeForce RTX 4060 or 3060, or a Radeon RX 7600. Resist the urge to overspend, consider used cards as a way to keep costs down, and check your power supply before you install anything—the years-old 300 W power supply in a cheap Dell office desktop will need to be replaced before you can use it with any GPU that has an external power connector.

My experience with the old Goodwill-sourced ThinkPad was also mostly pretty good. It had both Secure Boot and a TPM, making installation and upgrades easier. The old fingerprint sensor (a slow and finicky swipe-to-scan sensor) and its 2013-era driver even support Windows Hello. I certainly minded the cramped, low-resolution screen—display quality and screen-to-bezel ratio being the most noticeable changes between a 12-year-old system and a modern one—but it worked reliably with a new battery in it. It even helped me focus a bit at work; a 1366×768 screen just doesn’t invite heavy multitasking.

But the mid-2010s are a dividing line, and new laptops are better than old laptops

That brings me to my biggest word of warning.

If you want to run Windows 11 on an older desktop, one where the computer is just a box that you plug stuff into, the age of the hardware isn’t all that much of a concern. Upgrading components is easier whether you’re talking about a filthy keyboard, a failing monitor, or a stick of RAM. And you don’t need to be concerned as much with power use or battery life.

But for laptops? Let me tell you, there are things about using a laptop from 2012 that you don’t want to remember.

Three important dividing lines: In 2013, Intel’s 4th-generation Haswell processors gave huge battery life boosts to laptops thanks to lower power use when idle and the ability to switch more quickly between active and idle states. In 2015, Dell introduced the first XPS 13 with a slim-bezeled design (though it would be some years before Dell fixed the bottom-mounted up-your-nose webcam), which is probably the single most influential laptop design change since the MacBook Air. And around the same time (though it’s hard to pinpoint an exact date), more laptops began adopting Microsoft’s Precision Touchpad specification rather than using finicky, inconsistent third-party drivers, making PC laptop touchpads considerably less annoying than they had been up until that point.

And those aren’t the only niceties that have become standard or near-standard on midrange and high-end laptops these days. We also have high-resolution, high-density displays; the adoption of taller screen aspect ratios like 16:10 and 3:2, giving us more vertical screen space to use; USB-C charging, replacing the need for proprietary power bricks; and backlit keyboards!

The ThinkPad X230 I bought doesn’t have a backlit keyboard, but it does have a bizarre little booklight next to the webcam that shines down onto the keyboard to illuminate it. This is sort of neat if you’re already the kind of person inclined to describe janky old laptops as “neat,” but it’s not as practical.

Even if you set aside degraded, swollen, or otherwise broken batteries and the extra wear and tear that comes with portability, a laptop from the last three or four years will have a ton of useful upgrades and amenities aside from extra speed. That’s not to say that older laptops can’t be useful because they obviously can be. But it’s also a place where an upgrade can make a bigger difference than just getting you Windows 11 support.

Some security concerns

Some old PCs will never meet Windows 11’s more stringent security requirements, and PC makers often stop updating their systems long before Microsoft drops support. Credit: Andrew Cunningham

Windows 11’s system requirements were controversial in part because they were focused mostly on previously obscure security features like TPM 2.0 modules, hypervisor-protected code integrity (HVCI), and mode-based execution control (MBEC). A TPM module makes it possible to seamlessly encrypt your PC’s local storage, among other things, while HVCI helps to isolate data in memory from the rest of the operating system to make it harder for malicious software to steal things (MBEC is just a CPU technology that speeds up HVCI, which can come with a hefty performance penalty on older systems).

Aside from those specific security features, there are other concerns when using old PCs, some of the same ones we’ve discussed in macOS as Apple has wound down support for Intel Macs. Microsoft’s patches can protect against software security vulnerabilities in Windows, and they can provide some partial mitigations for firmware-based vulnerabilities since even fully patched and fully supported systems won’t always have all the latest BIOS fixes installed.

But software can’t patch everything, and even the best-supported laptops with 5th- or 6th-generation Core CPUs in them will be a year or two past the days when they could expect new BIOS updates or driver fixes.

The PC companies and motherboard makers make some of these determinations; cheap consumer laptops tend to get less firmware and software support regardless of whether Intel or AMD are fixing problems on their ends. But Intel (for example) stops supporting its CPUs altogether after seven or eight years (support ended for 7th-generation CPUs in March). For any vulnerabilities discovered after that, you’re on your own, or you have to trust in software-based mitigations.

I don’t want to overplay the severity or the riskiness of these kinds of security vulnerabilities. Lots of firmware-level security bugs are the kinds of things that are exploited by sophisticated hackers targeting corporate or government systems—not necessarily everyday people who are just using an old laptop to check their email or do their banking. If you’re using good everyday security hygiene otherwise—using strong passwords or passkeys, two-factor authentication, and disk encryption (all things you should already be doing in Windows 10)—an old PC will still be reasonably safe and secure.

A viable, if imperfect, option for keeping an old PC alive

If you have a Windows 10 PC that is still working well or that you can easily upgrade to give it a new lease on life, and you don’t want to pay whatever Microsoft is planning to charge for continued Windows 10 update support, installing Windows 11 may be the path of least resistance for you despite the installation and update hurdles.

Especially for PCs that only miss the Windows 11 support cutoff by a year or two, you’ll get an operating system that still runs reasonably well on your PC, should still support all of your hardware, and will continue to run the software you’re comfortable with. Yes, the installation process for Windows’ annual feature updates is more annoying than it should be. But if you’re just trying to squeeze a handful of years out of an older PC, it might not be an issue you have to deal with very often. And though Windows 11 is different from Windows 10, it doesn’t come with the same learning curve that switching to an alternate operating system like ChromeOS Flex or Linux would.

Eventually, these PCs will age out of circulation, and the point will be moot. But even three years into Windows 11’s life cycle, I can’t help but feel that the system requirements could stand to be relaxed a bit. That ship sailed a long time ago, but given how many PCs are still running Windows 10 less than a year from the end of guaranteed security updates, expanding compatibility is a move Microsoft could consider to close the adoption gap and bring more PCs along.

Even if that doesn’t happen, try running Windows 11 on an older but still functional PC sometime. Once you clean it up a bit to rein in some of modern Microsoft’s worst design impulses, I think you’ll be pleasantly surprised.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



Google’s DeepMind is building an AI to keep us from hating each other


The AI did better than professional mediators at getting people to reach agreement.


An unprecedented 80 percent of Americans, according to a recent Gallup poll, think the country is deeply divided over its most important values ahead of the November elections. The general public’s polarization now encompasses issues like immigration, health care, identity politics, transgender rights, and whether we should support Ukraine. Fly across the Atlantic and you’ll see the same thing happening in the European Union and the UK.

To try to reverse this trend, Google’s DeepMind built an AI system designed to aid people in resolving conflicts. It’s called the Habermas Machine after Jürgen Habermas, a German philosopher who argued that an agreement in a public sphere can always be reached when rational people engage in discussions as equals, with mutual respect and perfect communication.

But is DeepMind’s Nobel Prize-winning ingenuity really enough to solve our political conflicts the way it solved chess, StarCraft, and protein structure prediction? Is it even the right tool?

Philosopher in the machine

One of the cornerstone ideas in Habermas’ philosophy is that the reason why people can’t agree with each other is fundamentally procedural and does not lie in the problem under discussion itself. There are no irreconcilable issues—it’s just the mechanisms we use for discussion are flawed. If we could create an ideal communication system, Habermas argued, we could work every problem out.

“Now, of course, Habermas has been dramatically criticized for this being a very exotic view of the world. But our Habermas Machine is an attempt to do exactly that. We tried to rethink how people might deliberate and use modern technology to facilitate it,” says Christopher Summerfield, a professor of cognitive science at Oxford University and a former DeepMind staff scientist who worked on the Habermas Machine.

The Habermas Machine relies on what’s called the caucus mediation principle. This is where a mediator, in this case the AI, sits through private meetings with all the discussion participants individually, takes their statements on the issue at hand, and then gets back to them with a group statement, trying to get everyone to agree with it. DeepMind’s mediating AI plays into one of the strengths of LLMs, which is the ability to briefly summarize a long body of text in a very short time. The difference here is that instead of summarizing one piece of text provided by one user, the Habermas Machine summarizes multiple texts provided by multiple users, trying to extract the shared ideas and find common ground in all of them.

But it has more tricks up its sleeve than simply processing text. At a technical level, the Habermas Machine is a system of two large language models. The first is the generative model based on the slightly fine-tuned Chinchilla, a somewhat dated LLM introduced by DeepMind back in 2022. Its job is to generate multiple candidates for a group statement based on statements submitted by the discussion participants. The second component in the Habermas Machine is a reward model that analyzes individual participants’ statements and uses them to predict how likely each individual is to agree with the candidate group statements proposed by the generative model.

Once that’s done, the candidate group statement with the highest predicted acceptance score is presented to the participants. The participants then write critiques of this group statement and feed them back into the system, which generates updated group statements and repeats the process. The cycle continues until the group statement is acceptable to everyone.
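The generate-score-critique-revise loop described above can be sketched in miniature. To be clear, this is not DeepMind’s code: both “models” are deterministic stand-ins (a candidate generator that builds group statements out of shared ideas, and a scorer playing the reward model’s role), and the names `generate_candidates`, `predicted_agreement`, and `mediate` are invented for illustration.

```python
def generate_candidates(positions):
    """Stand-in for the generative model: propose candidate group statements
    (as sets of ideas), from the strict common ground up to everything said."""
    shared = set.intersection(*positions)
    extras = sorted(set.union(*positions) - shared)
    candidates = [frozenset(shared)]
    for i in range(1, len(extras) + 1):
        candidates.append(frozenset(shared | set(extras[:i])))
    return candidates


def predicted_agreement(candidate, positions, vetoes):
    """Stand-in for the reward model: count participants predicted to accept
    a candidate (it covers their ideas and crosses none of their red lines)."""
    return sum(
        ideas <= candidate and not (candidate & veto)
        for ideas, veto in zip(positions, vetoes)
    )


def mediate(positions, vetoes, max_rounds=5):
    """Caucus-mediation loop: present the highest-scoring candidate, fold
    critiques back in, and repeat until the statement is acceptable to all."""
    positions = [set(p) for p in positions]
    best = frozenset()
    for _ in range(max_rounds):
        candidates = generate_candidates(positions)
        best = max(candidates, key=lambda c: predicted_agreement(c, positions, vetoes))
        if predicted_agreement(best, positions, vetoes) == len(positions):
            return best  # unanimous: everyone is predicted to accept
        # Critique round: participants withdraw ideas that anyone red-lined,
        # a crude stand-in for revising the statement after written feedback.
        contested = set().union(*vetoes)
        positions = [p - contested for p in positions]
    return best
```

In the real system, both stand-ins would be LLM calls, and acceptance would come from a trained reward model rather than a set comparison, but the control flow is the same: generate candidates, predict agreement, surface the best one, and revise on critique.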

Once the AI was ready, DeepMind’s team started a fairly large testing campaign that involved over five thousand people discussing issues such as “should the voting age be lowered to 16?” or “should the British National Health Service be privatized?” Here, the Habermas Machine outperformed human mediators.

Scientific diligence

Most of the first batch of participants were sourced through a crowdsourcing research platform. They were divided into groups of five, and each group was assigned a topic to discuss, chosen from a list of over 5,000 statements about important issues in British politics. There were also control groups working with human mediators. In the caucus mediation process, those human mediators achieved a 44 percent acceptance rate for their handcrafted group statements. The AI scored 56 percent. Participants usually found the AI group statements to be better written as well.

But the testing didn’t end there. Because people you can find on crowdsourcing research platforms are unlikely to be representative of the British population, DeepMind also used a more carefully selected group of participants. They partnered with the Sortition Foundation, which specializes in organizing citizen assemblies in the UK, and assembled a group of 200 people representative of British society in terms of age, ethnicity, socioeconomic status, etc. The assembly was divided into groups of three that deliberated over the same nine questions. And the Habermas Machine worked just as well.

The agreement rate for the statement “we should be trying to reduce the number of people in prison” rose from 60 percent before the discussion to 75 percent. Support for the more divisive idea of making it easier for asylum seekers to enter the country went from 39 percent at the start to 51 percent at the end of the discussion, which gave it majority support. The same thing happened with the question of encouraging national pride, which started with 42 percent support and ended at 57 percent. The views held by the people in the assembly converged on five out of nine questions. Agreement was not reached on issues like Brexit, where participants were particularly entrenched in their starting positions. Still, in most cases, they left the experiment less divided than they were coming in. But there were some question marks.
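The support shifts reported in the study can be tallied in a few lines; the figures below are the ones quoted above, and the bookkeeping is purely illustrative.

```python
# Pre- and post-discussion support (percent), from the reported results.
shifts = {
    "reduce prison population": (60, 75),
    "easier entry for asylum seekers": (39, 51),
    "encourage national pride": (42, 57),
}

for issue, (pre, post) in shifts.items():
    # An issue "crosses majority" if it starts below 50% and ends at or above it.
    crossed = pre < 50 <= post
    note = ", crossed majority" if crossed else ""
    print(f"{issue}: {pre}% -> {post}% (+{post - pre} pts{note})")
```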

The questions were not selected entirely at random. They were vetted, as the team wrote in their paper, to “minimize the risk of provoking offensive commentary.” But isn’t that just an elegant way of saying, “We carefully chose issues unlikely to make people dig in and throw insults at each other so our results could look better”?

Conflicting values

“One example of the things we excluded is the issue of transgender rights,” Summerfield told Ars. “This, for a lot of people, has become a matter of cultural identity. Now clearly that’s a topic which we can all have different views on, but we wanted to err on the side of caution and make sure we didn’t make our participants feel unsafe. We didn’t want anyone to come out of the experiment feeling that their basic fundamental view of the world had been dramatically challenged.”

The problem is that when your aim is to make people less divided, you need to know where the division lines are drawn. And those lines, if Gallup polls are to be trusted, are not only drawn between issues like whether the voting age should be 16 or 18 or 21. They are drawn between conflicting values. The Daily Show’s Jon Stewart argued that, for the right side of the US’s political spectrum, the only division line that matters today is “woke” versus “not woke.”

Summerfield and the rest of the Habermas Machine team excluded the question about transgender rights because they believed participants’ well-being should take precedence over the benefit of testing their AI’s performance on more divisive issues. They excluded other questions as well, such as climate change.

Here, the reason Summerfield gave was that climate change is part of objective reality: it either exists or it doesn’t, and we know it does. It’s not a matter of opinion to be debated. That’s scientifically accurate. But when the goal is fixing politics, scientific accuracy isn’t necessarily the end state.

If major political parties are to accept the Habermas Machine as the mediator, it has to be universally perceived as impartial. But at least some of the people behind AIs are arguing that an AI can’t be impartial. After OpenAI released ChatGPT in 2022, Elon Musk posted a tweet, the first of many, in which he argued against what he called “woke” AI. “The danger of training AI to be woke—in other words, lie—is deadly,” Musk wrote. Eleven months later, he announced Grok, his own AI system marketed as “anti-woke.” Over 200 million of his followers were introduced to the idea that there were “woke AIs” that had to be countered by building “anti-woke AIs”—a world where AI was no longer an agnostic machine but a tool pushing the political agendas of its creators.

Playing pigeons’ games

“I personally think Musk is right that there have been some tests which have shown that the responses of language models tend to favor more progressive and more libertarian views,” Summerfield says. “But it’s interesting to note that those experiments have been usually run by forcing the language model to respond to multiple-choice questions. You ask ‘is there too much immigration’ for example, and the answers are either yes or no. This way the model is kind of forced to take an opinion.”

He said that if you use the same queries as open-ended questions, the responses you get are, for the most part, neutral and balanced. “So, although there have been papers that express the same view as Musk, in practice, I think it’s absolutely untrue,” Summerfield claims.

Does it even matter?

Summerfield did what you would expect a scientist to do: He dismissed Musk’s claims as based on a selective reading of the evidence. That’s usually checkmate in the world of science. But in the world of politics, being correct is not what matters most. Musk’s message was short, catchy, and easy to share and remember. Trying to counter it by discussing the methodology of papers nobody read was a bit like playing chess with a pigeon.

At the same time, Summerfield had his own ideas about AI that others might consider dystopian. “If politicians want to know what the general public thinks today, they might run a poll. But people’s opinions are nuanced, and our tool allows for aggregation of opinions, potentially many opinions, in the high-dimensional space of language itself,” he says. While his idea is that the Habermas Machine can potentially find useful points of political consensus, nothing is stopping it from also being used to craft speeches optimized to win over as many people as possible.

That may be in keeping with Habermas’ philosophy, though. If you look past the myriad abstract concepts ever-present in German idealism, it offers a pretty bleak view of the world. “The system,” driven by the power and money of corporations and corrupt politicians, is out to colonize “the lifeworld,” roughly equivalent to the private sphere we share with our families, friends, and communities. The way you get things done in “the lifeworld” is through seeking consensus, and the Habermas Machine, according to DeepMind, is meant to help with that. The way you get things done in “the system,” on the other hand, is through winning, playing it like a game and doing whatever it takes with no holds barred, and the Habermas Machine can apparently help with that, too.

The DeepMind team reached out to Habermas to get him involved in the project. They wanted to know what he’d have to say about the AI system bearing his name. But Habermas never got back to them. “Apparently, he doesn’t use emails,” Summerfield says.

Science, 2024. DOI: 10.1126/science.adq2852

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

Google’s DeepMind is building an AI to keep us from hating each other

Annoyed Redditors tanking Google Search results illustrates perils of AI scrapers

Fed up Londoners

Apparently, some London residents are getting fed up with social media influencers whose reviews draw long lines of tourists to their favorite restaurants, sometimes just for the likes. Christian Calgie, a reporter for the London-based news publication Daily Express, pointed out this trend on X yesterday, noting the boom of Redditors recommending Angus Steakhouse, a chain restaurant, to combat it.

As Gizmodo deduced, the trend seemed to start on the r/London subreddit, where a user complained about a spot in Borough Market being “ruined by influencers” on Monday:

“Last 2 times I have been there has been a queue of over 200 people, and the ones with the food are just doing the selfie shit for their [I]nsta[gram] pages and then throwing most of the food away.”

As of this writing, the post has 4,900 upvotes and numerous responses suggesting that Redditors talk about how good Angus Steakhouse is so that Google picks up on it. Commenters quickly understood the assignment.

“Agreed with other posters Angus steakhouse is absolutely top tier and tourists shoyldnt [sic] miss out on it,” one Redditor wrote.

Another Reddit user wrote:

Spreading misinformation suddenly becomes a noble goal.

As of this writing, asking Google for the best steak, steakhouse, or steak sandwich in London (or similar) isn’t generating an AI Overview result for me. But when I searched for the best steak sandwich in London, the top result is from Reddit, including a thread from four days ago titled “Which Angus Steakhouse do you recommend for their steak sandwich?” and one from two days ago titled “Had to see what all the hype was about, best steak sandwich I’ve ever had!” with a picture of an Angus Steakhouse.

Taco Bell, KFC, Pizza Hut, Burger King pull onions amid McDonald’s outbreak

On Thursday, Yum Brands—owner of KFC, Pizza Hut, and Taco Bell—followed that lead, saying it, too, would remove fresh onions from its chains’ menus at some locations, according to Reuters. Restaurant Brands International, owner of Burger King, also did the same.

“We’ve been told by corporate to not use any onions going forward for the foreseeable future,” Maria Gonzales, the on-duty manager inside a Burger King in Longmont, Colorado, told Reuters on Wednesday. “They’re off our menu.”

As of Thursday, the case count in the E. coli outbreak remained at 49 people in 10 states. Of those, 10 were hospitalized, including a child with a life-threatening complication. One older person in Colorado has died.

The states with cases include: Colorado (26 cases), Nebraska (9), Utah (4), Wyoming (4), and one case each in Iowa, Kansas, Missouri, Montana, Oregon, and Wisconsin.

McDonald’s removed Quarter Pounders and slivered onions from restaurant menus in Colorado, Kansas, Utah, Wyoming, and portions of Idaho, Iowa, Missouri, Montana, Nebraska, Nevada, New Mexico, and Oklahoma. In a statement, McDonald’s said that for these restaurants, its onions are “sourced by a single supplier that serves three distribution centers.” The fast-food giant continues to serve other beef burgers and diced onions at impacted locations.

Apple teases “week of announcements” about the Mac starting on Monday

Apple has released new iPhones, new Apple Watches, a new iPad mini, and a flotilla of software updates this fall, but Mac hardware has gone unmentioned so far. That’s set to change next week, according to an uncharacteristically un-cryptic post from Apple Worldwide Marketing SVP Greg Joswiak earlier today.

Imploring readers to “Mac [sic] their calendars,” Joswiak’s post teases “an exciting week of announcements ahead, starting on Monday morning.” If the wordplay wasn’t enough, an attached teaser video with a winking neon Mac logo drives the point home.

Though Joswiak’s post was light on additional details, months of reliable rumors have told us the most likely things to expect: refreshed MacBook Pros and 24-inch iMacs with few if any external changes but new Apple M4-series chips on the inside, plus a new M4 Mac mini with a substantial design overhaul. The MacBook Pros and iMacs were refreshed with M3 chips almost exactly a year ago, but the Mac mini was last updated with the M2 in early 2023.

The new Mac mini will allegedly be closer in size to an Apple TV and is said to be slightly taller than current Mac minis but with a smaller footprint. The new design will continue to include a space-saving internal power supply rather than relying on an external power brick, but it will also rely more heavily on USB-C/Thunderbolt ports to save space, cutting down on the number of other ports. At least some models will also include USB-C ports on the front, a design change inherited from the Mac Studio.

Phone tracking tool lets government agencies follow your every move

Both operating systems will display a list of apps and whether they are permitted access always, never, only while the app is in use, or to prompt for permission each time. Both also allow users to choose whether the app sees precise locations down to a few feet or only a coarse-grained location.

For most users, it makes sense to let a photos, transit, or maps app access their precise location. For other classes of apps—say, those for Internet jukeboxes at bars and restaurants—an approximate location can be helpful, but precise, fine-grained access is likely overkill. And for other apps, there’s no reason for them ever to know the device’s location. With a few exceptions, there’s little reason for apps to always have location access.

Not surprisingly, Android users who want to block intrusive location gathering have more settings to change than iOS users. The first thing to do is access Settings > Security & Privacy > Ads and choose “Delete advertising ID.” Then, promptly ignore the long, scary warning Google provides and hit the button confirming the decision at the bottom. If you don’t see that setting, good for you. It means you already deleted it. Google provides documentation here.

iOS, by default, doesn’t give apps access to the “Identifier for Advertisers,” Apple’s version of the unique tracking number assigned to iPhones, iPads, and Apple TVs. Apps, however, can display a window asking that the setting be turned on, so it’s useful to check. iPhone users can do this by accessing Settings > Privacy & Security > Tracking. Any apps with permission to access the unique ID will appear. While there, users should also turn off the “Allow Apps to Request to Track” button. While in iOS Privacy & Security, users should navigate to Apple Advertising and ensure Personalized Ads is turned off.

Additional coverage of Location X from Haaretz and NOTUS is here and here. The New York Times, the other publication given access to the data, hadn’t posted an article at the time this Ars post went live.

iOS 18.2 developer beta adds ChatGPT and image-generation features

Today, Apple released the first developer beta of iOS 18.2 for supported devices. This beta release marks the first time several key AI features that Apple teased at its developer conference this June are available.

Apple is marketing a wide range of generative AI features under the banner “Apple Intelligence.” Initially, Apple Intelligence was planned as part of iOS 18, but some features slipped to iOS 18.1, others to iOS 18.2, and a few to future, as-yet-undisclosed software updates.

iOS 18.1 has been in beta for a while and includes improvements to Siri, generative writing tools that help with rewriting or proofreading, smart replies for Messages, and notification summaries. That update is expected to reach the public next week.

Today’s developer update, iOS 18.2, includes some potentially more interesting components of Apple Intelligence, including Genmoji, Image Playground, Visual Intelligence with Camera Control, and ChatGPT integration.

Genmoji and Image Playground allow users to generate images on-device to send to friends in Messages; there will be Genmoji and Image Playground APIs to allow third-party messaging apps to work with Genmojis, too.

ChatGPT integration allows Siri to pass off user queries that are outside Siri’s normal scope to be answered instead by OpenAI’s ChatGPT. A ChatGPT account is not required, but logging in with an existing account gives you access to premium models available as part of a ChatGPT subscription. If you’re using these features without a ChatGPT account, OpenAI won’t be able to retain your data or use it to train models. If you connect your ChatGPT account, though, then OpenAI’s privacy policies will apply for ChatGPT queries instead of Apple’s.

Genmoji and Image Playground queries will be handled locally on the user’s device, but other Apple Intelligence features may dynamically opt to send queries to the cloud for computation.

There’s no word yet on when iOS 18.2 will be released publicly.

Meta Quest 3S is a disappointing half-step to Carmack’s low-cost VR vision


Significant visual and comfort compromises make last year’s Quest 3 a better VR investment.

Look at all those dots. Credit: Kyle Orland

It’s been just over two years now since soon-to-depart CTO John Carmack told a Meta Connect audience about his vision for a super low-end VR headset that came in at $250 and 250 grams. “We’re not building that headset today, but I keep trying,” Carmack said at the time with some exasperation.

On the pricing half of the equation, the recently released Quest 3S headset is nearly on target for Carmack’s hopes and dreams. Meta’s new $299 headset is a significant drop from the $499 Quest 3 and the cheapest price point for a Meta VR headset since the company raised the price of the aging Quest 2 to $400 back in the summer of 2022. When you account for a few years of inflation in there, the Quest 3S is close to the $250 headset Carmack envisioned.

A new button on the underside of the Quest 3S lets you transition to pass-through mode at any time. Credit: Kyle Orland

Unfortunately, Meta must still seriously tackle the “250 grams” part of Carmack’s vision. The 514g Quest 3S feels at least as unwieldy on your face as the 515g Quest 3, and both are still quite far from the “super light comforts” Carmack envisioned. Add in all the compromises Meta made so the Quest 3S could hit that lower price point, and you have a cheap, half-measure headset that we can only really recommend to the most price-conscious of VR consumers.

Meta Quest 2 Plus

iFixit’s recent teardown of the Quest 3S shows that the new headset is more than just a spiritual successor to the cheap and popular Quest 2. On the contrary, iFixit found the Quest 3S optical stack uses the exact same parts as the Quest 2, right down to the LCD panels and fresnel lenses.

In 2020, the 1832×1920 per-eye resolution offered by that visual stack represented a significant upgrade from what had come before, especially at such a low price point. Today, though, that dated display technology invites direct comparisons to the 2064×2208 per-eye display on last year’s Quest 3. With the displays sitting just inches from your eyes, that difference represents a very noticeable 20 percent drop in apparent clarity, down from 25 pixels per degree to a mere 20.
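The clarity figure above checks out as simple arithmetic: going from 25 to 20 pixels per degree is a one-fifth reduction in angular resolution.

```python
# Pixels per degree (PPD) as quoted in the review.
ppd_quest3 = 25
ppd_quest3s = 20

# Relative drop in apparent clarity.
drop = (ppd_quest3 - ppd_quest3s) / ppd_quest3
print(f"{drop:.0%} drop in pixels per degree")  # 20% drop
```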

Going back to the 3S after a year spent in the Quest 3 is a bit like walking around in glasses that suddenly have a thin layer of Vaseline smeared on them. Everything looks quite a bit fuzzier, especially near the borders of the display, and edges of objects look distinctly more jagged than on the Quest 3. The difference is especially noticeable when trying to read small text in VR or make out fine details in the real world through the headset’s array of passthrough cameras.

It’s not quite a retreat to the days of the infamous “screen door effect” that plagued early Oculus-era headsets, but the distinct visual downgrade makes virtual reality experiences that much less convincing and engrossing on the 3S.

It’s the little things

The visual downgrades on the Quest 3S extend to the field of view, which narrows from 110 horizontal degrees on the Quest 3 to a mere 97 degrees on the 3S (the vertical field of view sees a smaller reduction from 97 degrees to 93 degrees). This difference isn’t as apparent as the drop in resolution between the two headsets, but it does lead to a few more “tunnel vision” moments at the margins. In a game like Beat Saber, for instance, I noticed many of my swings were rendered effectively invisible by the larger black void taking up much of my peripheral vision.

A comparative side view shows the reduced depth of the pancake lens housing on the Quest 3 (top) compared to the Quest 3S (bottom). Credit: Kyle Orland

Going back to the fresnel-lens-based Quest 2 visual stack also means doing without the thinner pancake lenses introduced on the Quest 3. The result is an eyebox on the 3S that extends about an inch farther from your face than on the Quest 3. That might not sound like much, but having the lens’ center of gravity farther from your face makes the headset feel a bit heavier and the fit a bit less secure as you move your head around in VR.

Then there are the compromises when it comes to fine-tuning the distance between the Quest 3S’ lenses. On the Quest 3, an adjustment wheel on the bottom of the headset lets you adjust this interpupillary distance (IPD) continuously, with millimeter precision. On the Quest 3S, you instead manually shift the lenses into three preset grooves that are a full 5 millimeters apart. If your face’s actual IPD falls in the middle of one of those 5 mm gaps, the result can be the kind of eye strain and trouble focusing that we complained about in our original Quest 2 review.

Meta has also done away with quite a few Quest 3 creature comforts in an apparent effort to keep the Quest 3S price down. The lack of an external depth sensor, for instance, can make things like pass-through video and hand tracking feel a bit more wonky than on the Quest 3. The Quest 3S is missing a standard headphone jack, too, for those still using wired headphones. And the new headset also lacks any automatic face detection, adding the small annoyance of physically tapping the power button to return from sleep mode when you put it back on.

Spend the extra money

From the front, the external cameras are the easiest way to tell the difference between the Quest 3S (left) and the Quest 3.

I’ve been comparing the Quest 3S to the Quest 3 because that’s the decision consumers considering a Meta headset will face today (if they can get over the need for a Meta account to use the headset in the first place). But Meta’s discontinuation of the aging Quest 2 means millions of current Quest 2 owners will soon be faced with the prospect of upgrading or abandoning Meta’s VR ecosystem for good, just as original Quest owners did last year.

For those current Quest 2 owners, the Quest 3S represents the cheapest way to maintain continued access to Meta’s library of VR games and apps. And that library continues to expand with everything from mind-bending indie games to quirky multiplayer arenas to single-player adventures like Batman: Arkham Shadow, which now comes free with every Quest 3 or 3S headset.

But the move from a Quest 2 to a Quest 3S is relatively small, considering the four-year gap between the similarly priced headsets. Yes, you’ll notice some significant improvements in the newer headset’s full-color pass-through cameras and its maximum frame rate (up from 90 Hz to 120 Hz). The 3S also offers a slightly more future-proofed Qualcomm Snapdragon XR2 Gen 2 processor (over the Quest 2’s original XR2) and slightly more precise Touch Plus controllers (which lack the annoying tracking rings of the original Quest 2 controllers).

All told, though, the Quest 3S is far from the generational upgrade from the Quest 2 you might hope for. For that kind of apparent jump, you’re much better off shelling out a bit more money for the full-fledged Quest 3. The improvements in form factor, field of view, IPD adjustment, and especially resolution make the higher-end set well worth the extra money. That’s especially true if you can manage to track down the now-discontinued 128GB Quest 3, which is currently being closed out for just $430 (compared to $500 for the new 512GB version).

If you simply want the cheapest way to access Meta’s library of virtual reality games, the Quest 3S certainly fills that hole in the market. If you want a more robust VR experience that’s likely to suffice further into the future, though, the extra investment in a Quest 3 headset is probably worth it.

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.
