Author name: Shannon Garcia


The Audi RS7—they won’t make them like this much longer

it weighs how much? —

We take one last drive in an old favorite, plus there’s some RS6 wagon action.

A silver Audi RS7 parked in a clearing

I’ve had a soft spot for Audi’s RS7 for at least a decade now. Will Audi replace it, or is this the last in its line?

Jonathan Gitlin

The arrival of competent electric vehicles is a boon for car enthusiasts as much as for those who consider driving a chore and their vehicle an appliance. Electric motors can respond faster than the revviest naturally aspirated engines this side of mid-2000s Formula 1, and with more immediate torque than the mightiest of turbochargers. They are the way forward, for performance as well as energy efficiency. That much is apparent after spending some time behind the wheel of a $128,000 Audi RS7 Performance following a diet of mostly EV performance cars.

For a long time, the RS7 was the Audi to go for if you wanted something with more than two doors because the R8 didn't fit your lifestyle. Based on the more humble A7 fastback sedan, it was worked over by Audi Sport, the four rings' racing offshoot based up the road from Ingolstadt in Neckarsulm, Germany. The designers there gave it a restyle, with aggressively flared arches to envelop larger wheels and tires, and new aerodynamic features to keep the car planted at speed. The result is one of the brand's handsomest creations, at least to this observer.

Under the hood, a twin-turbo 4.0L V8 is now even more potent than when we first drove the RS7 in 2019, with 621 hp (463 kW) and 627 lb-ft (850 Nm) at your right foot's beck and call, sent to all four wheels via an eight-speed ZF 8HP transmission and a torque-vectoring differential at the rear axle.

The seats are very supportive and comfortable for long drives, but you have to pay almost $2,000 to add a massage function.

Audi

For model-year 2024 (and 2025, as the car is unchanged), Audi Sport took out 17 lbs (8 kg) of sound deadening, a change that, in addition to making the car lighter, also fixes one of my chief complaints from that 2019 drive: the cabin was too quiet in sport mode, when I wanted to hear the exhaust in its full glory.

This RS7 Performance sounds divine, in fact, with plenty of bass and, in its fruitiest mode, some pops and bangs on the overrun that elicit the odd giggle from the driver. For more mundane occasions with the sport exhaust’s valves closed, it’s unremarkably quiet and won’t wake the neighbors if you leave the house early in the morning (or come home late). A sonorous powertrain is still really the preserve of the internal combustion engine, although some EVs are beginning to play with that idea.

You might be expecting me to extol the relative lack of mass of this car versus an electric alternative, too, but the RS7 is not exactly a featherweight. Audi Sport took out that sound-deadening material, and each of the new 22-inch wheels is 11 lbs (5 kg) lighter than before, but the RS7 Performance still weighs a hefty 4,916 lbs (2,230 kg). So while the all-electric Audi RS e-tron GT is quicker to accelerate—0.4 seconds faster to 60 mph than the 3.3 seconds it takes the RS7—the EV is only heavier by about the weight of a large adult.

Thanks to computer-controlled air suspension, you never really feel like you’re driving a car that weighs more than 2 tons. There’s little roll while cornering, and the power assistance for the steering makes easy work of turning the 285/30-section front tires.

  • A lap of the original Watkins Glen road course was an opportunity to reflect on how dangerous racing there must have been.

  • It’s a handsome bum.

  • There’s almost 50 cubic feet (1,382 L) of cargo volume with the rear seats down—more than enough to swallow a bunch of camping gear.

The regular A7 already made a good highway cruiser, and the RS7 lost little of those chops even after its Audi Sport enhancement. The seats are comfortable on long drives while still gripping you well when things turn twisty, and they’re heated and cooled, but were I speccing a car at the Audi configurator I’d tick the box for the ones that use their inflatable air bladders to massage you as well.

The interior is well put together, with good sightlines from the driver's seat. There's a little too much glossy piano black for my tastes, and points have to be docked for an over-reliance on touchscreens—bring back discrete buttons or dials for the climate controls, please. The haptic touchscreen needs a gentle press—I was poking it too hard for it to register my intentions, so the system ignored me at times. And the infotainment system is starting to feel a little dated in terms of voice recognition compared to Audi's newest systems.

One downside to those huge tires is a propensity toward tram-lining, which was more noticeable while driving through a weather front. But otherwise the car felt as planted in the rain as it did in the dry. I was even able to better its official EPA rating on the highway by about 10 percent, though that’s still a rather unacceptable 24 mpg (9.8 L/100 km).

Even with the engine, chassis, and transmission settings all at their sharpest, the RS7 never feels less than utterly planted on dry roads. The interruptions in acceleration as the transmission changes up through its gear ratios feel almost jarring after becoming inured to single-speed EVs, but you’ll welcome their presence—and the engine braking they allow—as you slow for a corner.



Apple backs out of backing OpenAI, report claims

ChatGPT —

Apple dropped out of the $6.5 billion investment round at the 11th hour.

The Apple Park campus in Cupertino, California.

A few weeks back, it was reported that Apple was exploring investing in OpenAI, the company that makes ChatGPT, the GPT model, and other popular generative AI products. Now, a new report from The Wall Street Journal claims that Apple has abandoned those plans.

The article simply says Apple “fell out of the talks to join the round.” The round is expected to close in a week or so and may raise as much as $6.5 billion for the growing Silicon Valley company. Had Apple gone through with it, an investment in another company of that size would have been a rare, though not completely unprecedented, move for Apple.

OpenAI is still expected to raise the funds it seeks from other sources. The report claims Microsoft is expected to invest around $1 billion in this round. Microsoft has already invested substantial sums in OpenAI, whose GPT models power Microsoft AI tools like Copilot and Bing chat.

Nvidia is also a likely major investor in this round.

Apple will soon offer limited ChatGPT integration in an upcoming iOS update, though it plans to support additional models like Google’s Gemini further down the line, offering users a choice similar to how they pick a default search engine or web browser.

OpenAI has been on a successful tear with its products and models, establishing itself as a leader in the rapidly growing industry. However, it has also been beset by drama and controversy—most recently, some key leaders departed the company abruptly, and it has shifted from a research-focused organization beholden to a nonprofit to a for-profit company under CEO Sam Altman. Also, former Apple design lead Jony Ive is confirmed to be working on a new AI product of some kind.

But The Wall Street Journal did not specify which (if any) of these facts are reasons why Apple chose to back out of the investment.



Your cells are dying. All the time.

Apoptosis, necroptosis, and pyroptosis, oh my —

Some go gently into the night. Others die less prettily.

3D rendering of an NK Cell destroying a cancer cell.

Billions of cells die in your body every day. Some go out with a bang, others with a whimper.

They can die by accident if they’re injured or infected. Alternatively, should they outlive their natural lifespan or start to fail, they can carefully arrange for a desirable demise, with their remains neatly tidied away.

Originally, scientists thought those were the only two ways an animal cell could die, by accident or by that neat-and-tidy version. But over the past couple of decades, researchers have racked up many more novel cellular death scenarios, some specific to certain cell types or situations. Understanding this panoply of death modes could help scientists save good cells and kill bad ones, leading to treatments for infections, autoimmune diseases, and cancer.

“There’s lots and lots of different flavors here,” says Michael Overholtzer, a cell biologist at Memorial Sloan Kettering Cancer Center in New York. He estimates that there are now more than 20 different names to describe cell death varieties.

Here, Knowable Magazine profiles a handful of classic and new modes by which cells kick the bucket.

Unplanned cell death: Necrosis

Lots of bad things can happen to cells: They get injured or burned, poisoned or starved of oxygen, infected by microbes or otherwise diseased. When a cell dies by accident, it’s called necrosis.

There are several necrosis types, none of them pretty: In the case of gangrene, when cells are starved for blood, cells rot away. In other instances, dying cells liquefy, sometimes turning into yellow goop. Lung cells damaged by tuberculosis turn smushy and white — the technical name for this type, “caseous” necrosis, literally means “cheese-like.”

Any form of death other than necrosis is considered “programmed,” meaning it’s carried out intentionally by the cell because it’s damaged or has outlived its usefulness.

A good, clean death: Apoptosis

The two main categories of programmed cell death are “silent and violent,” says Thirumala-Devi Kanneganti, an immunologist at St. Jude Children’s Research Hospital in Memphis, Tennessee. Apoptosis, first named in 1972, is the original silent type: It’s a neat, clean form of cell death that doesn’t wake the immune system.

That’s handy when cells are damaged or have served out their purpose. Apoptosis allows tadpoles to discard tail cells when they become frogs, for example, or human embryos to dispose of the webbing between developing fingers.

The cell shrinks and detaches from its neighbors. Genetic material in the nucleus breaks into pieces that scrunch together, and the nucleus itself fragments. The membrane bubbles and blisters, and the cell disintegrates. Other cells gobble up the bits, keeping the tissue tidy.

In necrosis, a cell dies by accident, releasing its contents and drawing immune cells to the site of damage by creating inflammation. In apoptosis, the cell collapses in on itself and the bits are cleared away without causing damaging inflammation.



AI bots now beat 100% of those traffic-image CAPTCHAs

Are you a robot? —

I, for one, welcome our traffic light-identifying overlords.

Examples of the kind of CAPTCHAs that image-recognition bots can now get past 100 percent of the time.

Anyone who has been surfing the web for a while is probably used to clicking through a CAPTCHA grid of street images, identifying everyday objects to prove that they’re a human and not an automated bot. Now, though, new research claims that locally run bots using specially trained image-recognition models can match human-level performance in this style of CAPTCHA, achieving a 100 percent success rate despite being decidedly not human.

ETH Zurich PhD student Andreas Plesner and his colleagues’ new research, available as a pre-print paper, focuses on Google’s ReCAPTCHA v2, which challenges users to identify which street images in a grid contain items like bicycles, crosswalks, mountains, stairs, or traffic lights. Google began phasing that system out years ago in favor of an “invisible” reCAPTCHA v3 that analyzes user interactions rather than offering an explicit challenge.

Despite this, the older reCAPTCHA v2 is still used by millions of websites. And even sites that use the updated reCAPTCHA v3 will sometimes use reCAPTCHA v2 as a fallback when the updated system gives a user a low “human” confidence rating.

Saying YOLO to CAPTCHAs

To craft a bot that could beat reCAPTCHA v2, the researchers used a fine-tuned version of the open source YOLO (“You Only Look Once”) object-recognition model, which long-time readers may remember has also been used in video game cheat bots. The researchers say the YOLO model is “well known for its ability to detect objects in real-time” and “can be used on devices with limited computational power, allowing for large-scale attacks by malicious users.”

After training the model on 14,000 labeled traffic images, the researchers had a system that could identify the probability that any provided CAPTCHA grid image belonged to one of reCAPTCHA v2’s 13 candidate categories. The researchers also used a separate, pre-trained YOLO model for what they dubbed “type 2” challenges, where a CAPTCHA asks users to identify which portions of a single segmented image contain a certain type of object (the segmentation model only worked on nine of the 13 object categories, so the bot simply requested a new image when presented with the other four).
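To make the approach concrete, here is a minimal sketch of how one might point an off-the-shelf YOLO detector at a single CAPTCHA-style tile. It is illustrative only: it assumes the open source ultralytics Python package and generic pretrained COCO weights, whereas the researchers fine-tuned their own model on the 14,000 labeled traffic images described above; their actual code, weights, and thresholds are not reproduced here.

```python
# Illustrative sketch: score a single CAPTCHA-style tile with an off-the-shelf YOLO model.
# Assumes the open source "ultralytics" package; generic pretrained COCO weights stand in
# for the fine-tuned reCAPTCHA model the researchers describe.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # generic pretrained weights, not the paper's model


def tile_contains(image_path: str, target: str, min_conf: float = 0.25) -> bool:
    """Return True if the detector sees the target class (e.g. 'traffic light',
    'bicycle', 'fire hydrant') in the tile with at least min_conf confidence."""
    results = model(image_path, verbose=False)
    for box in results[0].boxes:
        if model.names[int(box.cls)] == target and float(box.conf) >= min_conf:
            return True
    return False


# Hypothetical usage: decide which tiles of a 3x3 grid to click.
tiles = [f"tile_{i}.png" for i in range(9)]  # placeholder file names
clicks = [i for i, t in enumerate(tiles) if tile_contains(t, "traffic light")]
print(clicks)
```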

The YOLO model showed varying levels of confidence depending on the type of object being identified.

Beyond the image-recognition model, the researchers also had to take other steps to fool reCAPTCHA’s system. A VPN was used to avoid detection of repeated attempts from the same IP address, for instance, while a special mouse movement model was created to approximate human activity. Fake browser and cookie information from real web browsing sessions was also used to make the automated agent appear more human.

Depending on the type of object being identified, the YOLO model was able to accurately identify individual CAPTCHA images anywhere from 69 percent of the time (for motorcycles) to 100 percent of the time (for fire hydrants). That performance—combined with the other precautions—was strong enough to slip through the CAPTCHA net every time, sometimes after multiple individual challenges presented by the system. In fact, the bot was able to solve the average CAPTCHA in slightly fewer challenges than a human in similar trials (though the improvement over humans was not statistically significant).

The battle continues

While there have been previous academic studies attempting to use image-recognition models to solve reCAPTCHAs, they were only able to succeed between 68 and 71 percent of the time. The rise to a 100 percent success rate “shows that we are now officially in the age beyond captchas,” according to the new paper’s authors.

But this is not an entirely new problem in the world of CAPTCHAs. As far back as 2008, researchers were showing how bots could be trained to break through audio CAPTCHAs intended for visually impaired users. And by 2017, neural networks were being used to beat text-based CAPTCHAs that asked users to type in letters seen in garbled fonts.

Older text-identification CAPTCHAs have long been solvable by AI models.

Stack Exchange

Now that locally run AIs can easily best image-based CAPTCHAs, too, the battle of human identification will continue to shift toward more subtle methods of device fingerprinting. “We have a very large focus on helping our customers protect their users without showing visual challenges, which is why we launched reCAPTCHA v3 in 2018,” a Google Cloud spokesperson told New Scientist. “Today, the majority of reCAPTCHA’s protections across 7 [million] sites globally are now completely invisible. We are continuously enhancing reCAPTCHA.”

Still, as artificial intelligence systems become better and better at mimicking more and more tasks that were previously considered exclusively human, it may continue to get harder and harder to ensure that the user on the other end of that web browser is actually a person.

“In some sense, a good captcha marks the exact boundary between the most intelligent machine and the least intelligent human,” the paper’s authors write. “As machine learning models close in on human capabilities, finding good captchas has become more difficult.”



Microsoft details security/privacy overhaul for Windows Recall ahead of relaunch

An updated onboarding screen for Recall, with clearly visible buttons for opting in or out; Microsoft says Recall will be opt-in by default and can even be removed from PCs entirely.

Microsoft

Microsoft is having another whack at its controversial Recall feature for Copilot+ Windows PCs, after the original version crashed and burned amid scrutiny from security researchers and testers over the summer. That version of Recall recorded screenshots and OCR'd text of all user activity and stored it unencrypted on disk, where it could easily be accessed by another user on the PC or by an attacker with remote access.

The feature was announced in late May, without having gone through any of the public Windows Insider testing that most new Windows features get, and was scheduled to ship on new PCs by June 18; by June 13, the company had delayed it indefinitely to rearchitect it and said that it would be tested through the normal channels before it was rolled out to the public.

Today, Microsoft shared more extensive details on exactly how the security of Recall has been re-architected in a post by Microsoft VP of Enterprise and OS Security David Weston.

More secure, also optional

An abstraction of Recall’s new security architecture, which replaces the old, largely nonexistent security architecture.

Microsoft

The broad strokes of today’s announcement are similar to the changes Microsoft originally announced for Recall over the summer: that the feature would be opt-in and off-by-default instead of opt-out, that users would need to re-authenticate with Windows Hello before accessing any Recall data, and that locally stored Recall data will be protected with additional encryption.

However, some details show how Microsoft is attempting to placate skeptical users. For instance, Recall can now be removed entirely from a system using the “optional features” settings in Windows (when a similar removal mechanism showed up in a Windows preview earlier this month, Microsoft claimed it was a “bug,” but apparently not).

The company is also sharing more about how Windows will protect data locally. All Recall data stored locally, including “snapshots and any associated information in the vector database,” will be encrypted at rest with keys stored in your system’s TPM; according to the blog post, Recall will only function when BitLocker or Device Encryption is fully enabled. Recall will also require Virtualization-Based Security (VBS) and Hypervisor-Protected Code Integrity (HVCI) to be enabled; these are features that people sometimes turn off to improve game performance, but Recall will reportedly refuse to work unless they’re turned on.

This is because the new Recall operates inside of a VBS enclave, which helps to isolate and secure data in memory from the rest of the system.

“This area acts like a locked box that can only be accessed after permission is granted by the user through Windows Hello,” writes Weston. “VBS enclaves offer an isolation boundary from both kernel and administrative users.”

Windows doesn’t allow any code to run within these enclaves that hasn’t been signed by Microsoft, which should lower the risk of exposing Recall data to malware or other rogue applications. Other malware protections new to this version of Recall include “rate-limiting and anti-hammering measures.”



Tesla workers in Germany complain about home visits from their bosses

knock knock —

The rate of sick leave reached 17 percent in August, far above industry average.

A car drives past the Tesla logo outside the Tesla factory on July 17, 2023 near Gruenheide, Germany.

Tesla’s factory in Gruenheide, Germany.

Sean Gallup/Getty Images

Tesla’s German car factory on the outskirts of Berlin allegedly operates under a “culture of fear,” and its workers take sick leave at more than three times the industry average. The plant, which was targeted by arsonists earlier this year, is now experiencing a degree of discord between workers and management, according to reports in Handelsblatt and the Guardian.

“We will not tolerate some people bending their backs for others who just don’t feel like coming to work. There is no room in this factory for people who don’t get out of bed in the morning,” said André Thierig, manufacturing director of Tesla’s Gigafactory Berlin-Brandenburg.

It seems the company has been taking that directive seriously. Frustrated by a rate of sick leave that reached as high as 17 percent this summer—compared to a German auto industry average of 5.2 percent—Tesla started checking up on some employees at home, sending managers to visit 30 employees while they were on sick leave.

Such home visits were not well-received, and bosses were greeted with slammed doors and threats to call the police. But Thierig claimed checking up on sick workers at home was common practice and that they were appealing to “the employees’ work ethic.”

Thierig also pointed out that sick leave tended to increase on Fridays and late shifts. “That is not an indicator of bad working conditions because the working conditions are the same on all working days and across all shifts. It suggests that the German social system is being exploited to some extent,” he said.

This is not the first time Tesla has appeared less than sympathetic to the health of its workers. During the pandemic, the company had to be pressured into closing its factory in California, setting up a feud between Musk and the state that has since seen him relocate Tesla to Texas.

The IG Metall trade union, which represents some of the workers at the Berlin-Brandenburg factory, expressed its dismay at this move. Workloads at the factory have stressed employees, and those who haven’t called out sick have to pick up the slack for their colleagues who have.

“If the factory’s overseers really want to reduce the level of sickness, they should break this vicious circle,” said Dirk Schulze, a regional director at IG Metall.



Indicted NYC mayor to FBI: I, uh, forgot my phone’s passcode

Just trying to help! —

I think it starts with a 2?

NYC Mayor Eric Adams, in happier times, holding an AirTag.

New York City mayor Eric Adams was stopped on the street by the FBI after an event in November 2023. Agents had a warrant for his electronic devices, which they seized. At the time, Adams made clear that he had nothing to hide, saying in a statement, “As a former member of law enforcement, I expect all members of my staff to follow the law and fully cooperate with any sort of investigation—and I will continue to do exactly that.”

Thanks to this week’s federal indictment (PDF) of Adams—the first for a sitting NYC mayor, and one that alleges bribery from Turkish sources—we now have the same story from the government’s perspective. It sounds quite a bit different.

According to the feds, agents seized not one but two cell phones from Adams on November 6, 2023—but neither of these was Adams’ “personal” phone, which he was not carrying. It was the personal phone that Adams allegedly used “to communicate about the conduct described in this indictment.”

The next day, November 7, Adams turned in his personal cellphone “in response to a subpoena,” but the phone was locked with a passcode. Passcodes are not unusual—indeed, they are recommended for nearly all users—but in this case there was a hitch. Adams, who continued to pledge his full cooperation, told the FBI that he couldn’t remember the code.

This might sound suspicious, but Adams said that it was actually a result of his attempts to preserve the phone and its data for the FBI. Two days earlier, on November 5, Adams had gotten wind of the investigation into his finances after the FBI raided one of his associates. When he heard this, he changed his personal cell phone passcode, increasing its length from four digits to six.

According to Adams, this was done to “prevent members of his staff from inadvertently or intentionally deleting the contents of his phone” so that Adams could “preserve the contents of his phone due to the investigation.”

Unfortunately, Adams told the FBI, he couldn’t remember this new passcode he had set just two days before. And so the pristinely preserved personal phone was locked and without a key.

The government does not explicitly say what it thought of the truthfulness of this explanation, but the indictment against Adams includes the anecdote under the subheading, “ADAMS and His Co-Conspirators Attempt to Conceal Their Criminal Conduct.”

Deleting apps from the bathroom

It wasn’t just Adams who had some “issues” with digital devices. As part of its investigation, the FBI asked if one of his staffers would speak to them in a voluntary interview. The woman agreed, and she then “falsely denied the criminal conduct of herself and ADAMS,” says the indictment.

But the most interesting bit was her behavior. In the middle of the interview, the staffer left to use the bathroom “and, while there, deleted the encrypted messaging applications she had used to communicate with ADAMS, the Promoter, the Turkish Official, the Airline Manager, and others.”

Others in Adams’ orbit were a bit more careful. When an Adams employee met with the mayor to talk about Turkish issues, he and Adams “left their cellphones outside the room in which they met so that it would be ‘safe’ to talk.”

Another staffer who worked on booking subsidized Turkish travel once texted Adams, “To be o[n the] safe side Please Delete all messages you send me.” Adams replied, “Always do.”

This sort of care extended even to the creation of bogus PowerPoint presentations. At a 2023 fundraiser held in a Manhattan hotel, the FBI claims that a “promoter” putting together illegal foreign donations for Adams actually “provided a PowerPoint presentation billing the event as a dinner hosted by ‘International Sustainability Leaders’ with the subject ‘Sustainable Destinations’ and an attendance price of $5,000.” This was despite the fact that the event was listed on Adams’ private calendar as “Fundraiser for Eric Adams 2025.”



Study: Cats in little crocheted hats shed light on feline chronic pain

For the fashion-forward cat —

The custom-made caps hold electrodes in place and reduce motion artifacts during EEGs.

A cat wearing a crocheted hat custom-made to record brain activity

“When you spend more time putting electrodes back on than you do actually recording the EEGs, you get creative.”

Aliénor Delsart

Our feline overlords aren’t particularly known for obeying commands from mere humans, which can make it difficult to study their behaviors in controlled laboratory settings. So a certain degree of ingenuity is required to get usable results—like crocheting adorable little hats for kitties taking part in electroencephalogram (EEG) experiments. That’s what researchers at the University of Montreal in Quebec, Canada, did to learn more about assessing chronic pain in cats—and they succeeded. According to their recent paper published in the Journal of Neuroscience Methods, it’s the first time scientists have recorded the electrical activity in the brains of conscious cats.

According to the authors, one-quarter of adult cats suffer from osteoarthritis and chronic pain that worsens with age. Treatment options are currently limited, mainly to non-steroidal anti-inflammatory drugs, which can have significant side effects for cats. An injectable monoclonal antibody tailored for cats has recently been developed to neutralize excessive nerve growth factor, but other alternative treatment options like supplements and regenerative medicine have yet to be tested. Nor has the effectiveness of certain smells or lighting in altering pain perception in felines been tested.

That was the Montreal team’s primary objective for their experiments. Initially, they tried to place electrodes on the heads of 11 awake adult cats with osteoarthritis, but the cats kept shaking off the electrodes.

“When you spend more time putting electrodes back on than you do actually recording the EEGs, you get creative,” co-author Aliénor Delsart of the University of Montreal told New Scientist. So he and his co-authors tapped a graduate student with crocheting skills to make the little hats. Not only did the hats hold the electrodes in place, but the cats also stopped trying to chew the wires.

With that problem solved, the real experiments could begin, designed to record brain activity of cats in response to smelling certain substances or seeing different wavelengths of colored light. The kitty subjects were housed as a group in an environment with lighting, temperature, and humidity controls, along with perches, beds, scratching posts, and cat toys.

Electrodes were attached without the need to shave the cats’ hair, thanks to a conductive paste that improves electrode/skin contact. The team first recorded baseline activity before exposing the cats to sensory stimuli: a grapefruit smell for olfactory stimulation, and red, blue, and green lighting in a darkened room for visual stimulation.

Granted, there were still a few motion artifacts in that data; two cats were excluded from the data analysis for that reason. And the authors acknowledged the small sample size and largely descriptive nature of their analysis, which they deemed appropriate for what is essentially a test of the feasibility of their approach. The study met the group’s primary objectives: to assess whether the EEG method was feasible with conscious cats and whether the resulting analytical methods were an efficient means to characterize how the cats responded to specific sensory stimuli. “This opens new avenues for investigating chronic pain mechanisms and developing novel therapeutic strategies,” the authors concluded.

Journal of Neuroscience Methods, 2024. DOI: 10.1016/j.jneumeth.2024.110254



Book Review: On the Edge: The Future

Previously: The Fundamentals, The Gamblers, The Business

We have now arrived at the topics most central to this book, aka ‘The Future.’

The Manifest conference was also one of the last reporting trips that I made for this book. And it confirmed for me that the River is real—not just some literary device I invented. (6706)

Yep. The River is real.

I consider myself, among many things, a straight up rationalist.

I do not consider myself an EA, and never have.

This completes the four quadrants of the two-by-two of [does Nate know it well, does Zvi know it well]. The first two, where Nate was in his element, went very well. The third clearly was less exacting, as one would expect, but pretty good.

Now I have the information advantage, even more than I did for aspects of sports gambling.

  1. We’ve seen Nate attempt to tackle areas in both our wheelhouses.

  2. We’ve seen Nate attempt to tackle areas in his wheelhouse that I’ve only explored.

  3. We’ve seen Nate attempt to tackle areas he was exploring that I’ve also only explored.

  4. Now he’s exploring new sections of my wheelhouse.

Let’s see how he explains it all.

Effective altruism, and the adjacent but more loosely defined intellectual movement called “rationalism,” are important parts of the River on their own terms. In some ways, in fact, they are the most important parts.

Much of the River is concerned with what philosophers call “small-world problems,” meaning tractable puzzles with relatively well-defined parameters: how to maximize expected value in a poker tournament, or how to invest in a portfolio of startups that brings you upside with little risk of ruin.

But in this final portion of the book, we’re visiting the part of the River where people instead think about open-ended, so-called grand-world problems: everything from where best to spend your charitable contributions to the future of humanity itself. (6228)

A solid opening.

I would still nitpick on the word ‘instead,’ and would have suggested ‘also.’

The Rationalists saw themselves as people who applied scientific thought to almost any topic. This often involved “Bayesian reasoning,” a way of using statistics and probability to inform beliefs. (6393)

The ‘True Rationalist’ in particular both hones their skills and upgrades their life by applying the same principles to everything. No matter is too trivial for the nerd snipe and the Bayesian reasoning. In particular, local questions matter: the ones that help improve your life, your ability to think, and your impact on the world. You are not forced to look only at bigger pictures.
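For readers who want the one-formula version of what “Bayesian reasoning” means in practice, a toy update looks like the sketch below; the probabilities are invented purely for illustration.

```python
# Toy Bayesian update: how much should seeing dark clouds shift your belief in rain?
# All numbers are invented for illustration.
prior = 0.30                # P(rain today), before looking outside
p_clouds_if_rain = 0.80     # P(dark clouds | rain)
p_clouds_if_dry = 0.20      # P(dark clouds | no rain)

evidence = prior * p_clouds_if_rain + (1 - prior) * p_clouds_if_dry
posterior = prior * p_clouds_if_rain / evidence
print(round(posterior, 3))  # 0.632: the belief moves a lot, but not to certainty
```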

Indeed, Nate Silver was correctly informed that he counts as a rationalist. You don’t have to join or even know we exist in order to be a member.

In fact, even if I had never applied to join Team Rationalist, Alexander—whose soft features, dry wit, and male pattern baldness reminded me uncannily of my dad’s (West Coast Jewish) side of the family—had already drafted me into it. “You are clearly doing a lot of good work spreading rationality to the masses. Is it useful to think of us as a movement that doesn’t include you?” he asked me. (6401)

The origin story of many rationalists is exactly that they needed it on a personal level. The traditional mode of acting on intuition and instinct and kind of vibing was clearly not working. The world did not make sense. They had to figure things out the hard way, from first principles. The good news is, once you do that, you actually understand what you are doing, and are choosing on purpose how to do it. You end up far stronger at the end of the path.

And indeed, there are some rationalists, and some EAs, that are perfectly content to use the toolboxes on that level. We strive to help them get more ambitious when we are ready for that, but you don’t get cast out if you decide to not go big, and stay home.

But yes, what matters is that often people who think this way end up going big.

The reason some Riverians have become obsessed with grand-world problems is because the Village and the rest of the world screw them up all the time, too, in ways that often reflect political partisanship, an endless array of cognitive biases, innumeracy, hypocrisy, and profound intellectual myopia.

To take one glaring example that Flynn reminded me of: the U.S. Congress has authorized relatively little—only around $2 billion in spending as part of a 2022–23 budget deal—to prevent future pandemics, even though COVID-19 killed more than 1 million Americans and cost the U.S. economy an estimated $14 trillion.

Reducing the chance of a future such pandemic in the United States by even 1 percent would be +EV even at a cost of $140 billion—and yet Congress is barely spending one one-hundredth of that.
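For what it is worth, the arithmetic in that passage checks out; here is the back-of-the-envelope version, using only the figures quoted above:

```python
# Back-of-the-envelope check of the pandemic-prevention numbers quoted above.
pandemic_cost = 14e12      # ~$14 trillion estimated cost of COVID-19 to the US economy
risk_reduction = 0.01      # a 1 percent reduction in the chance of a repeat
actual_spending = 2e9      # ~$2 billion authorized in the 2022-23 budget deal

breakeven_spend = pandemic_cost * risk_reduction
print(breakeven_spend / 1e9)              # 140.0 -> spending $140 billion would still be +EV
print(actual_spending / breakeven_spend)  # ~0.014 -> actual spending is on the order of
                                          # a hundredth of that breakeven figure
```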

You cannot count on your civilization to act in a sane fashion. There is no Reasonable Authority Figure. We are Beyond the Reach of God. As Scott Alexander wrote, someone has to, and no one else will. Or as Hillel the Elder said two millennia earlier: If I am not for myself, who will be for me? If I am only for myself, what am I? If not now, when?

And now, on top of that, we face substantial existential risks, most of all from AI.

As crazy as it sounds, yes, it is up to us. It is up to you.

Our society is very bad at cost-benefit analysis. As in we often refuse to do one.

There are narrow places where we are quite good at it. We do indeed do cost-benefit analysis sometimes, at all, including on things that matter, and that is really great. We also often rely on markets to get us to do it, which is insanely great.

Alas, we also often act like complete morons, because we refuse.

Transit officials faced a difficult choice: They could shut down the F, blocking a vital link between New York’s two densest boroughs right as commuters were beginning to get off work—or they could potentially run over poor Dakota. They elected to close the F for more than an hour until Dakota was found. (6261)

I am sorry, what? A difficult choice? This is a trivially easy choice.

You only need to answer one question about the above passage. Is Dakota a human?

If the answer is yes, then as Nate says, we all agree, you stop the trains.

We put the value of a statistical life (VSL) at around $10 million, and in situations like this we are willing to go a bit higher.

When you tease out this and lots of other data points—say, by looking at how much people are willing to pay for additional safety features when they buy a new car—the average American implicitly values their life at about $10 million. That’s where the VSL comes from. (6358)

Dakota, however, was a dog.

Claude initially estimated the total cost of the train delay at $1.58 million. It is actually substantially higher, because that estimate counts lost time at work as being worth only the employee’s hourly wage. Whereas if an employee’s marginal value per hour were only their wage, why would they have a job? And when someone is unexpectedly late, with little warning, that can lead to big problems, including things like ‘the doctor can’t perform surgery on you today.’

I’m confident the cost here is north of $2 million, and there is no guarantee that this results in the dog not being run over.
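To show the kind of calculation involved, here is a rough sketch of a delay-cost estimate. Every input is my own illustrative assumption, not a figure from the post, from Claude, or from the MTA:

```python
# Rough sketch of a delay-cost estimate for an hour-plus F train shutdown.
# Every number below is an illustrative assumption, not official MTA or ridership data.
riders_affected = 50_000    # assumed riders delayed during the shutdown window
avg_delay_hours = 0.75      # assumed average delay per affected rider
value_of_time = 40.0        # assumed $/hour, set above the average wage per the argument
                            # that an hour of someone's time is worth more than their wage

direct_time_cost = riders_affected * avg_delay_hours * value_of_time
knock_on_multiplier = 1.5   # assumed allowance for missed appointments, surgeries, etc.
total_cost = direct_time_cost * knock_on_multiplier
print(f"${total_cost:,.0f}")  # $2,250,000 under these assumptions
```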

If you suggested a $1 million medical treatment to save that dog’s life, and that the government should pay for it, that would be patently absurd. I would absolutely laugh in your face.

If you called up Dakota’s owner and said, ‘all right, we can close down the F train for you, but that will be $1 million,’ we all know what the answer would be, once they were done yelling at you. We have willingness-to-pay studies. When forced to pay, less than 10% of pet owners are willing to pay $10,000 or more for life-saving medical treatments.

So let’s not pretend this is the MTA faced with a hard choice. This is the MTA faced with an absurdly easy choice. And they chose wrong.

Thus, the need for something like rationalism, and something like Effective Altruism.

As in, I can’t help but notice that you do things without checking to see if they will be effective, or if there is a way to do them better. Perhaps you should think about that?

What is effective altruism, exactly? In one sense, effective altruism is just a brand name, created by MacAskill and another Oxford philosopher, Toby Ord, in 2011. (6370)

The more official answer—as stated by MacAskill in an essay entitled “The Definition of Effective Altruism”—is that EA is a “movement [that tries] to figure out, of all the different uses of our resources, which uses will do the most good, impartially considered.” (6374)

That’s the 80/20 for a lot of this. You try, at all, to figure out what will actually result in what outcomes at what costs with what benefits. Then you choose what seems best. The rest is not stamp collecting, the rest is important, but you’ll already be way ahead.

Eliezer Yudkowsky back in 2006 listed the twelve virtues of rationality: Curiosity, relinquishment, lightness, evenness, argument, empiricism, simplicity, humility, perfectionism, precision, scholarship, and the void.

On a more practical level, it means things like this:

Even public figures who are critical of the movements tend to get a fair hearing at blogs like LessWrong and at the Effective Altruism Forum—which is pretty much the opposite of what it’s usually like to argue about public affairs online. (6421)

This plays out in instrumental and epistemic rationality.

First, there’s instrumental rationality. Basically this means: Do you adopt means suitable to your ends? There is a man who has eaten more than thirty thousand Big Macs. Now, this might not be a reasonable and prudent thing for him to do. But if this man’s life goal is to eat as many Big Macs as possible, you could say he’s instrumentally rational because he’s done a bang-up job of this. (6725)

The second type is epistemic rationality. This means: Do you see the world for what it is? Do your beliefs line up with reality? (6730)

Good summary. You need both.

You can also give the rationalists credit for argumentative consistency: they tend to be scrupulously honest. (6816)

Rationalists have, from the outside perspective, absurdly high standards of scrupulosity and honesty. I believe this to be a very good thing.

But the kinship that EAs and rationalists feel for each other conceals that there are a lot of internal disagreements and even contradictions within the movements—in particular, there are two major streams of EA/rationalism that don’t see eye to eye.

The first is associated with the Australian philosopher Peter Singer and a cluster of topics including animal welfare, global poverty reduction, effective giving, and not living beyond your means—but also the ethical precept known as utilitarianism.

The second is associated with Yudkowsky and the George Mason University economist Robin Hanson and a whole different cluster of topics: futurism, artificial intelligence, prediction markets, and being willing to argue about just about anything on the internet, including subjects that others often find taboo. (6428)

Not living beyond your means is (highly non-uniquely) a rationalism thing. Not retaining means with which to live better is the EA thing.

Then later on the Effective Altruists realized the rationalists were right about the promise and dangers of AI and existential risks from AI, so that became the EA cornerstone as well.

Furthermore, I think it’s altruistic when people like Singer express unpopular viewpoints that they honestly believe will lead to social betterment and selfish to suppress these ideas because of fear of social approbation. (6476)

I agree in principle, although I worry about the frame of ‘altruistic’ being misleading. The important thing is that, if more people said what they actually believe whether or not it is popular, and whether or not it is convenient, and whether or not I agree with it, that would make the world a better place.

There is then of course Singer’s famous drowning child metaphor: if you’d ruin your expensive coat to save a drowning child in front of you, then you are a bad person for having bought the expensive coat in the first place, when you could have donated that money to global poverty relief instead.

Okay then, so why don’t I find the drowning child parable persuasive? Well, partly because it’s meant to play a trick on you—as Singer freely admits. (6479)

Indeed. It’s a magician’s trick. Singer wants you to ignore, among other things, all the reasons that we have agreed to make that drowning child in front of you your responsibility in particular, all the reasons we need some amount of locality in our preferences, and all the reasons it is not okay to redistribute all the wealth whenever you feel like it. That civilization exists for a reason, and you need to maintain it, along with all the ways we are able to make expensive coats and also save lives at all.

Then there’s the issue of utilitarianism.

There are some settings where I think utilitarianism is an appropriate framework—particularly in medium-scale problems such as in establishing government policy where impartiality (not playing favorites) is important.

For instance, when a subcommittee of the CDC met in November 2020 to develop recommendations for who would be first in line for COVID vaccines, they rejected going with a utilitarian calculus of maximizing benefits and minimizing harms to instead also consider objectives like “promo[ting] justice” and “mitigat[ing] health inequalities.” (6505)

I think utilitarianism is analogous to an underfit model. Instead of being too deferential to commonsense morality, it doesn’t meet people in the middle enough, accepting that maybe various laws and customs evolved for good reasons. (6547)

I should note, however, that utilitarianism, especially in its strictest forms, is actually relatively unpopular among philosophers. (6572)

Most people need more utilitarianism on the margin, to go with their additional use of cost-benefit analysis. When I say ‘I am not a utilitarian’ I mean not taking it to its bullet-biting conclusions, and not seeing it as the proper operating system for the human brain in practice, and not believing that you can fully total up the points of various events to choose this week’s winner in any cosmic or moral sense.

I’m arguing with the Actual Utilitarians, not with the person on the street. But the other thing about the person on the street is they also need more good virtue ethics and more good deontology, and are mostly ill-prepared to go Full Utilitarian.

A few of us have to worry about infinite cases and weird out of sample philosophical questions, in those times we are dealing with those as actual possibilities, such as in potential AI futures. For most people, that never happens. Even for those where it does happen? Most of the time, for most questions, not so much.

And that is fine. The human brain has limited compute and should not choose its everyday heuristics based on whether they handle rare edge cases, so long as you recognize when you do face those edge cases.

“The thought that, well, this theory isn’t good if it can’t handle infinite cases, I think that’s like a huge mistake,” said Buchak. She thinks moral theories should instead be tested on practical, day-to-day decision-making. “Nearly every decision you face involves risk,” she said. “I’m like [more] concerned with just like, you know, should I bring my umbrella today?”

If a moral theory can’t handle everyday cases like these—if it strays too far from common sense—then we probably shouldn’t trust it, whether or not it provides an elegant answer to the Repugnant Conclusion. (6600)

I agree. If your system can’t handle ordinary cases, then you should be highly suspicious. And if it can’t handle ordinary cases without inordinate amounts of compute (as in human brain cycles, in this context) then that’s a problem too. Note that this is more of an issue in practice than in theory. If it works in practice for humans in ordinary situations, then it counts. If it doesn’t, then it doesn’t.

The reverse is not true. If a system does handle the ordinary cases well, then that is a fine thing to use to handle ordinary cases. But it could still be a huge disaster in unusual cases. And if most of the value of a system lies in how it handles future non-ordinary cases, then establishing one that only works in ordinary cases can be disastrous.

Indeed, most systems for dealing well with ordinary situations are (wisely) overfitting on the data, because we constantly face similar ordinary situations. That’s fine, except when you run into those unusual situations. Then you need to understand that your instinctive rules might be leading you very astray.

Also, I’ve said it before, and a lot of people told me I’m wrong but their arguments were all invalid so I’m going to say it again: The Repugnant Conclusion is a silly misunderstanding. It’s another magician’s trick.

The standard proof of the conclusion is invalid, because it involves manifesting resources out of thin air. The most correct response to ‘what if potatoes plus muzak maximizes your total universe utility score?’ is ‘it quite obviously does not do that, a human life contains a lot of resource costs and downsides and many benefits and potential joys, and it is quite obviously more efficient to have fewer people who are happier than that. Your so-called proof otherwise must be wrong on that basis alone. Also it is trivially invalid because you can’t go from world N to world N-prime in order to then loop back to world (N+1), because that move creates new people living at net zero utility without taking any resources away from anyone else. A duck is chasing you asking how you did that.’

As Craig Ferguson often said, I look forward to your letters. You can talk amongst yourselves if you’d like. But if it’s the same counterarguments and confusions, I’m precommitting here to ignoring them. I’ll only answer if I see something new.

But who in the hell am I (or Lara Buchak or Peter Singer) to tell you what you should do in decisions you’ll face just once? “It might be that you should behave differently when choosing a spouse or choosing a job or doing these kinds of things that you’re only going to do once, hopefully,” Buchak told me. (6614)

No.

You should still do the calculation and make the best possible decision as best you can.

Indeed, if it’s a big decision like a spouse or a job, those are the decisions that matter. Those are the ones where it’s worth making sure you get it right. It is very much not the time to throw the rules out the window, especially before you know the rules well enough to break them.

There are of course two big differences.

The most important one is risk aversion. You don’t get to use responsible bankroll management when choosing a job or spouse. Life doesn’t let you not take big risks, not without paying a very high price. But yes, some amount of risk aversion is appropriate in those big decisions. It’s not a pure +EV in dollars or abstractions calculation. Which is fine. So factor that, along with everything else, in.

The other big difference is inability to learn and iterate. With most decisions, a lot of the value of a good decision process is to learn from both success and mistakes, to grow wise and to make better decisions in the future. Whereas in a one-time high stakes decision like choosing a spouse, knowing how to do it better next time will be of relatively little help.

I think there is some rational basis for partiality because we have more uncertainty about things that are removed from us in time and space. (6623)

This is indeed a classic modernist failure mode, where you act like you understand what is happening elsewhere far more than you actually do. You have to discount distant actions for this risk. But that is not the only reason you need spatial and knowledge-based partiality.

Civilization would not run, people would not survive or reproduce or even produce, the social contract would collapse, if you did not favor and exchange with and cooperate uniquely with those around you beyond what you do with strangers halfway around the world. All that, and real competition, is necessary. Those strangers are not only people too but also certified Popes, so please treat them right, but that does not mean full equal standing. The alternative is not game theory compatible, it is not fit, it does not long survive.

There is little virtue in being too virtuous to sustain that virtue, and indeed if that is a thing you are thinking of as virtuous then you have chosen your virtues poorly.

And even if I think there’s something honorable about acting morally in a mostly selfish world, I also wonder about the long-term evolutionary fitness of some group of people who wouldn’t defend their own self-interest, or that of their family, their nation, their species, or even their planet, without at least a little more vigor than they would that of a stranger. I want the world to be less partial than it is, but I want it to be at least partially partial. (6653)

Yep.

This is another important observation:

Overall, the politics of EA can be slippery, stuck in the uncanny valley between being abstractly principled and ruthlessly pragmatic, sometimes betraying a sense that you can make it up as you go along. (6828)

One of the core tensions in EA is, to put it bluntly, honesty versus lying.

There is the faction that says you want to ‘do the most good,’ and you shouldn’t let the truth get in the way of that. This starts with Peter Singer, who is clear that he believes the virtuous person should be willing to lie their ass off. Thus ‘honesty is not part of my utility function,’ and SBF justifying what he did. Alternatively, perhaps you tell the truth to the ingroup, other EAs and select allies, but politics is politics. Play to win.

The other faction aligns with the rationalists, who say that if you lose your epistemics and your honesty, then all is lost. That telling the truth and playing it all fully straight is the only road to wisdom and people will recognize this and it will succeed over time. That this is especially true given that the most important issue is AI. If you don’t have excellent epistemics, and if you don’t get others to have good epistemics, acting wisely around AI is hopeless, because it is all so complex and hard to understand, and to figure out what is actually helpful versus what would backfire.

And of course, many people are somewhere in the middle.

You already know which side I am on.

Nate Silver talks to Roon, Paul Graham and Sam Altman about Altman’s history at OpenAI.

Those are excellent sources. They are also highly biased ones. They tell the official Altman version of the tale. Paul Graham has been a longtime and extreme Altman fan. They clearly work together to tell their narrative of events and ensure Altman stays in control and in good graces as much as possible. Roon is unusually forthcoming, honest, and willing to think for real and think different; I respect the hell out of him and know he means well, but he is also a Member of Technical Staff at OpenAI and has long defended Altman. Altman is Altman.

Nate Silver mostly buys their story, in some places what looks like uncritically, although there are other lines and framings they probably tried to sell to him that he importantly didn’t buy.

As an area where I have done the research, this pained me. If you want my analysis on various events, please do follow those links.

After the events of this week, with OpenAI moving to become a for-profit benefit corporation and abandon its non-profit mission in favor of maximizing profits, it is now even more clear what the real story is. Altman systematically worked to transform a non-profit into his personal for-profit kingdom, removing anyone who opposed him, got in his way, or advocated for any form of safety.

The way Altman and Graham present it, the early ability of OpenAI to exist was uniquely reliant on Altman and his special skills. No one else could have done it.

But by 2015, Altman had concluded that the action was elsewhere: in artificial intelligence. He left YC—some news accounts claim that he was fired, but Graham strongly disputes that description—to become a co-chair of OpenAI along with Elon Musk. (7391)

However, it was a research lab generously funded by a who’s who of Silicon Valley, including Peter Thiel, Amazon, and Musk. Some of them believed in AI’s transformational potential, and some just believed in Altman. (7396)

“Funding this sort of project is beyond the abilities of ordinary mortals. Sam must be close to the best person in the entire world at getting money for big projects,” said Graham. (7401)

That seems like pretty clear Obvious Nonsense to me. Elon Musk decided to fund and ensure the creation of OpenAI (and stuck them with that name) first, before things started, and before he was pushed aside. His prime motivation was existential risk from AI, and fear that Google would otherwise own the future of AI and act irresponsibly.

There is a very strong case that the creation of OpenAI instead likely and predictably (this is very much not hindsight) did massive, epic damage to our chances of survival, but I won’t get into that question too much here, what’s done is done.

The founding team was full of killer people. The upside potential was obvious. As we’ve seen, VCs are herd animals who have strong FOMO, so once the big names were involved this was all very highly fundable.

Graham likes to portray Altman as some unique mastermind of fundraising and corporate infighting. I have no doubt Altman is good at these things, but we have little evidence he is some sort of unique mastermind. In terms of the project’s success on its own terms? Right place, right time, right team, right idea.

I also don’t buy the whole ‘everyone thought we were crazy’ story.

But if you were involved in the early days of OpenAI, you are particularly likely to have faith that things would just work out somehow. OpenAI was not the sort of startup that began in a Los Altos garage. It was an expensive and audacious bet—the funders originally pledged to commit $1 billion to it on a completely unproven technology after many “AI winters.” It inherently did seem ridiculous—until the very moment it didn’t. (7532)

Did scaling outperform expectations, in the sense that all the trend lines did extend and do the kinds of things they promised to perhaps do? Very much so, yes. And it’s true that no one else made a similar big bet until OpenAI proved the way forward. What it never seemed was ridiculous. If I’d thought it was ridiculous I wouldn’t have been dismayed by its founding.

This was a uniquely blessed opportunity in many ways, a slam dunk investment. I’m not saying I have what it takes such that I could have made it work as CEO (although I’m not so confident I couldn’t have, if I’d wanted to), and I’m certainly not saying Altman didn’t do a great job from a business perspective, but there are plenty of others who definitely could have also done it if they’d been given the role.

I do agree that those paying attention largely ‘knew what we had’ before GPT-3.5.

To most of the outside world, the breakthrough came with the release of GPT-3.5 in November 2022, which became one of the most rapidly adopted technologies in human history. (7549)

Inside OpenAI, the recognition of the miracle had come sooner[*8]—with the development of GPT-3 if not earlier. (7552)

I got a bunch of people increasingly asking me ‘what are you doing creating a game while all this is happening’ starting around GPT-2 and escalating from there. I saw the warnings from Gwern and others.

As for whether Altman was fired from YC, that’s such a harsh word, isn’t it? The situation was, as it often is, ambiguous, with many aspects whereby Altman does not come out of it looking good.

“There is this massive risk, but there’s also this massive, massive upside,” said Altman when I spoke with him in August 2022. “It’s gonna happen. The upsides are far too great.”

Altman was in a buoyant mood: even though OpenAI had yet to release GPT-3.5, it had already finished training GPT-4, its latest large language model (LLM), a product that Altman knew was going to be “really good.”

He had no doubt that the only path was forward. “[AI] is going to fundamentally transform things. So we’ve got to figure out how to address the downside risk,” he said. “It is the biggest existential risk in some category. And also the upsides are so great, we can’t not do it.” (7411)

Those were good times.

As irresponsible as I view the decision to create OpenAI in the first place, at the time OpenAI was acting remarkably responsibly with its releases, holding back frontier models for months. They were openly talking about the fact that their products were on pace to create substantial existential risks.

Yes, Altman was still endorsing iterative deployment and pushing ahead, but in reasonable ways. Contrast this rhetoric here with that in his op-ed recently in the Washington Post, where it is all about beating China and national security and existential risk is not even mentioned.

“I think poverty really does just end,” [Altman] said. (7416)

If we are in control and want it to end, we would have that power from some perspective. Alas, poverty is largely relative, and the world needs and will always find new incentives and scarce resources to fight about. Poverty could ‘just end,’ at least in terms of what we consider poverty today, even if the humans remain alive. I hope we find a way to sustainably do that. And to his credit, Sam Altman has funded UBI studies and otherwise tried to figure out more about how to do that.

It won’t be trivial. It also won’t entirely end struggle or suffering, or eliminate all disparity of outcomes, and I would not want it to.

The big question is what Altman’s actual attitude is now towards existential risk.

So is @SamA in the same bucket as that other, highly problematic Sam, @SBF? Someone who would push the button on a new model run if he thought it would make the world 2.00000001x better—at a 50 percent risk of destroying it?

You can find a variety of opinions on this question—one source I spoke with even explicitly drew the comparison between Altman’s attitude and SBF’s button-pushing tendencies—but the strong consensus in Silicon Valley is no, and that’s my view too.

Altman has frequently traded barbs with effective altruists—he couldn’t resist taking a shot at SBF after FTX’s collapse—and has rejected Peter Singer’s rigid utilitarianism. Even people who are relatively concerned about p(doom)—like Emmett Shear, the cofounder of the streaming platform Twitch who became OpenAI’s CEO for two days in November 2023 amid a failed attempt by OpenAI’s nonprofit board to eject Altman—thought the company was in reasonably good hands. “It’s not obvious who’s a better choice,” he told me.

Like most others in Silicon Valley, Shear figures the development of AI is inevitable. (7421)

I don’t think there is an ‘obvious’ better choice than Altman, but certainly there are candidates I would prefer. Even confining to OpenAI founders, I’d be much happier if either Sutskever or Shulman were in charge. When the OpenAI board selected Shear, I considered him a great pick. It is of course moot, at least for now.

I agree that Altman is nothing like as awful about this as SBF. Altman would absolutely not flip coins for the fate of the world on the tiniest of edges. He definitely knows that the risk is real, he is well aware of the arguments of Eliezer Yudkowsky and many others, and he will make at least some efforts to mitigate the risks.

That doesn’t mean Altman will play his hand as safely as the Kelly criterion would advise, which would never have you risk everything unless you were absolutely certain to win. (7431)

The Kelly criterion is too conservative here; some existential risk is going to have to be taken, because the background existential and other extreme risks of inaction are also not zero, and the upside is indeed rather large.
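To make the Kelly point concrete, here is a minimal sketch (my own illustration, not anything from the book) of the Kelly fraction for a simple binary bet. Note that it only ever stakes the entire bankroll when the win probability is exactly 1, which is the sense in which Kelly would never have you risk everything short of certainty.

```python
# Minimal sketch of the Kelly criterion for a binary bet (illustration only).
# p_win: probability of winning; b: net odds received on a win (1.0 = even money).
def kelly_fraction(p_win: float, b: float = 1.0) -> float:
    """Return the Kelly-optimal fraction of bankroll to stake (0 if no edge)."""
    f = p_win - (1.0 - p_win) / b
    return max(f, 0.0)

# For even-money bets, Kelly stakes 2p - 1 of the bankroll:
for p in (0.5, 0.6, 0.9, 0.99, 1.0):
    print(p, round(kelly_fraction(p), 3))
# 0.5 -> 0.0, 0.6 -> 0.2, 0.9 -> 0.8, 0.99 -> 0.98, 1.0 -> 1.0 (all-in only at certainty)
```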

That doesn’t mean Altman is going to act responsibly. Indeed, at many turns, and with increasing frequency, he has clearly prioritized his own control over OpenAI, along with OpenAI’s commercial interests and advancing its capabilities, transitioning it towards operating as an ordinary business and technology company while deprioritizing its safety efforts.

It seems clear that the events of November 2023 were a turning point. Altman was already turning against EA types and safety concerns before that. The events of November 2023 were caused in large part by Altman trying to (in a ‘not consistently candid’ manner, shall we say) oust board member Helen Toner, so that Altman could disempower safety advocates and consolidate control of OpenAI’s board.

This post is the best one post to read if you want to know what I think happened.

I want to pause in particular to push back against this statement from Nate:

But when the OpenAI board tried to oust Sam A, Roon and more than seven hundred other staffers pledged to resign and join Altman at his gig at Microsoft unless he was restored as CEO. (7483)

They did not do that. Read the letter. They didn’t pledge. They instead threatened that they might do that, without committing to anything. And they did this in response to the OpenAI board botching its communications in the wake of their firing of Altman, refusing to explain themselves, perhaps out of fear of Altman and his lawsuits or other actions, perhaps for other reasons.

Meanwhile Altman and his allies worked around the clock to spin a false media narrative and to credibly threaten to destroy the company within a day, rather than tolerate Altman having been fired from it.

Thus the letter was easy to sign. It was also very difficult to not sign. There was huge pressure exerted on holdouts to fall in line, and not so subtle warnings of what would happen to their positions and jobs if they did not sign and Altman did return.

Those warnings proved accurate. Since then, Altman has systematically driven advocates of safety out, and the transition went into overdrive. The word ‘purge’ would be reasonable to apply here, especially to those who refused to sign the letter demanding Altman be reinstated. He went back on his explicit promises to provide compute and support for OpenAI’s long-term safety efforts. Almost half of those working on long-term safety have left since then, including multiple cofounders.

Altman’s rhetoric also shifted. Now he essentially never mentions existential risk. In the Washington Post he fanned the flames of jingoistic rhetoric while ignoring existential risks entirely. OpenAI has opposed SB 1047, while supporting AB 3211, and AB 3211 looks a lot like an attempt at regulatory capture. And so on.

I have tried, time and again, to give OpenAI and Altman the benefit of the doubt. My first thought when I heard Altman was fired was ‘what the hell did he do’ and my second was ‘we’re probably not going to like what comes next are we.’

Not only do I think we could still do vastly worse than Altman, I would take him over the CEOs of Google, Microsoft, Meta, Mistral or xAI. He’s far from the worst pick. But Altman now seems like a much worse pick than the Altman of a few years ago.

If there’s a book that obviously is going to support stating your p(doom) (your probability of a universally bad outcome from sufficiently advanced artificial intelligence) then this would be it.

The point is not for the number to be exact. The point is that a number is much more useful information than anything that is not a number, so do your best.

It’s easy to say something like, “I’m quite concerned about catastrophic risks to humanity from misaligned artificial intelligence.” But it’s much more informative to state your p(doom)—your probability that AI could produce these catastrophic outcomes.

If your p(doom) is 1 percent or 2 percent, that’s still high enough to qualify as “quite concerned.” (After all, it’s the end of the world we’re talking about.)

But if you think p(doom) is 40 percent (and some EAs think it’s that high, or higher), that means that AI alignment—making sure that AIs do what we want and serve human interests—is perhaps the single biggest challenge humanity has ever faced. (6673)

Sure, this might seem artificially precise. But the alternative of not providing a number is a lot worse, Ord thought. At the very least, we should be able to convey orders of magnitude. (6680)

Yes, that is exactly the point. If you think p(doom) by default is 2% if we rush ahead, that’s a big deal, and we should be willing to do quite a lot to mitigate that and change it to 1% or 0.1%, but it makes sense to say that we should mostly rush ahead regardless.

Nate also introduces a key concept from trading: the bid-ask spread.

First, I’ll borrow a concept from the stock market called the “bid-ask spread” as a way of articulating our confidence about p(doom). Then, I’ll introduce something I call the Technological Richter Scale and argue that we should first ask how transformational we expect AI to be before addressing p(doom). (8014)

When I checked the odds for Super Bowl LVIII at DraftKings, conversely, the spread was wider. I could buy the Kansas City Chiefs moneyline at an implied 48.8 percent chance of the Chiefs winning or sell it (meaning that I’d instead bet on the San Francisco 49ers) at 44.4 percent. (8022)

But if you asked me for my p(doom) on AI, I’d quote you a much wider spread, maybe literally something like 2 percent to 20 percent. That’s partly because the question isn’t well articulated—if you specified Yudkowsky’s narrow definition or Cotra’s more expansive one, I could make the range tighter. Still, despite having spoken with many of the world’s leading AI experts, I’m not really looking to take action on this “bet” or stake the credibility of this book on it. (8031)

(I wrote a distinct post covering the Technological Richter Scale, which is effectively also part of this review. If you haven’t yet, go read it now.)

That’s exactly how I often look at probabilities. You have a point estimate, and you also have a range of reasonable answers. Within that reasonable range, you’re not willing to wager, unless there is a market opportunity that makes wagering cheap. Outside that range, you are, or should be, ready to call bullshit. And there is a practical difference between a wide range and a narrow range, and ranges can be asymmetric for various reasons (e.g., you can think there’s a 50% chance of something, be confident it’s at minimum 40%, but also think it might be 80%; there’s no contradiction there).
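For readers who want the moneyline mechanics spelled out, here is a minimal sketch (mine, using hypothetical lines that happen to land near the percentages Nate quotes, not DraftKings’ actual feed) of converting American odds into the implied ‘buy’ and ‘sell’ probabilities that form the spread.

```python
# Convert American moneyline odds to implied probability (includes the book's vig).
def implied_prob(moneyline: int) -> float:
    if moneyline < 0:
        return -moneyline / (-moneyline + 100)
    return 100 / (moneyline + 100)

# Hypothetical lines chosen to land near the percentages quoted above:
buy_chiefs = implied_prob(+105)        # "buy" the Chiefs at ~48.8%
sell_chiefs = 1 - implied_prob(-125)   # "sell" by backing the 49ers at -125: ~44.4%
print(round(buy_chiefs, 3), round(sell_chiefs, 3))  # 0.488 0.444
```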

If your p(doom) is 10%, we can have an argument about that. If it’s 50% or 90% or 99%, we can have a different one. And we need to be able to know what we’re talking about. Mostly, as it turns out, within the Leike Zone (of about 10%-90%) our actions shouldn’t change much at current margins. So mostly the important question is whether you think we’re in that range, above it or below it, and whether we can bound the range so as to be effectively mostly in agreement.

I think we are definitely not below 10%, and would start my bid-ask spread maybe around 25%, and top off around 90%. Others somehow disagree, and think that ‘create things smarter than ourselves’ has an over 90% chance of working out for us humans. In addition to all the arguments and reflections and difficulties? I notice I am confused by this opinion on its face. It does not make any sense.

Indeed, people have a long history of sticking to their not-making-sense guns on this.

Tetlock is famous for his ‘superforecasters’ who can think in probabilities, and they absolutely fall flat on this one, as I’ve examined at length, just utter failure.

Basically, Tetlock tried everything he could to get participants to come to a consensus. It didn’t work. Instead, the domain experts gave a trimmed mean[*33] forecast of an 8.8 percent chance of p(doom) from AI—defined in this case as all but five thousand humans ceasing to exist by 2100.

The generalists put the chances at just 0.7 percent. Not only were these estimates off by an order of magnitude, but the two groups of forecasters really didn’t get along. “The superforecasters see the doomsters as somewhat self-aggrandizing, narcissistic, messianic, saving-the-world types,” said Tetlock. “And the AI-concerned camp sees the superforecasters as plodders…. They don’t really see the big picture. They don’t understand exponential takeoff.” (8040)
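As an aside, the ‘trimmed mean’ here is just an average computed after discarding the most extreme forecasts on both ends. A minimal sketch (my illustration, not Tetlock’s exact procedure):

```python
# Trimmed mean: drop a fraction of the sample from each tail, average the rest.
def trimmed_mean(values, trim_fraction=0.1):
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k > 0 else ordered
    return sum(kept) / len(kept)

# Hypothetical p(doom) forecasts (in percent); the lone outlier barely moves the result.
forecasts = [0.5, 1, 2, 3, 5, 8, 10, 12, 15, 95]
print(round(trimmed_mean(forecasts, 0.1), 2))  # averages the middle 8 values -> 7.0
```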

The systems that cause the generalists to be good thinkers in general, assuming they are indeed good thinkers in general, simply don’t work here. Eliezer Yudkowsky literally started the rationality community because of how hard it is to think well about such problems, and here we have a clear example of it.

Nate Silver definitely thinks AI existential risk is worth worrying about. And I strongly agree with this very well and plainly stated statement:

I’d urge you to at least accept the mildest version of doomerism, this simple, one-sentence statement on AI risk—“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”—which was signed by the CEOs of the three most highly-regarded AI companies (Altman’s OpenAI, Anthropic, and Google DeepMind) in 2023 along with many of the world’s foremost experts on AI.

To dismiss these concerns with the eye-rolling treatment that people in the Village sometimes do is ignorant. Ignorant of the scientific consensus, ignorant of the parameters of the debate, ignorant and profoundly incurious about mankind’s urge, with no clear exceptions so far in human history, to push technological development to the edge. (7442)

The domain experts are probably right about p(doom). So far, I haven’t weighed in on who I thought had the better side of the argument in Tetlock’s forecasting tournament—but I think it’s the domain experts who study x-risk specifically and not the outside view provided by the superforecasters. (8231)

Specifically, the domain experts are probably right that the reference class for AI ought to be relatively narrow, and therefore less reassuring. (8237)

I hate the need to play reference class tennis on this, but yes, if you are going to use a reference class that actually applies, it is not reassuring. Think the rise of humans, or perhaps the Agricultural, Industrial and Information Revolutions.

I think the domain experts are still quite obviously too low in ways that matter, but once you get to ~8% you’re most of the way to most of the right reactions. For now.

That doesn’t mean hit a permanent pause button, even if one was available. It means try to do things, including things that are not free, to ensure good outcomes over bad outcomes.

Roon, a member of OpenAI technical staff, feels similarly.

“I would certainly gamble like one percent p(doom) for some amount of p(heaven), you know?” he told me. “There’s clearly existential risk of all kinds. And it’s not only from AI, right?” (7496)

Well, yes, of course. We can absolutely talk price, and I am sad about those who say that we cannot. At 1%, we’re a go. But also the emphasis many put on these other existential risks is usually, in effect, innumerate.

And for those who need to be reminded, this is not a Pascal’s Wager situation, at all.

Expected value dictates that even a small chance of x-risk should be taken much more seriously. You can wind up in some weird eddies of the River when considering very remote risks—say, a purported 1 in 100,000 chance of an outcome with supposed infinite negative utility.[*44] But that’s not what we’re dealing with here. (8241)

Roon is staking out a much saner position.

“We need technological progress,” [Roon] said. “Not to get too much into the tech-bro pseudo philosophy. But there’s a secular stagnation. There’s a population bomb going on. There’s a lot of headwinds for economic progress. And technology is really the only tailwind.” (7501)

I agree. We need technological progress, especially over the medium term. I write posts on the fertility problem, and others on various other economic headwinds. Why does it have to be here in particular, the one place it is most likely by far to get us all killed? Why does it need to happen as quickly as possible? And as I often wonder, why won’t those same people put in much effort to help with other areas? Why is it almost always, always all AI?

Then of course there’s the man of doom himself, Eliezer Yudkowsky.

As it happened, I wasn’t familiar with Cromwell’s law. Yudkowsky looks the part of the bearded, middle-aged computer nerd, and his vocabulary is shaped by years of arguing on the internet—his native tongue is Riverian, but his is a regional dialect thick with axioms and allusions and allegories. This particular one referred to a statement by Oliver Cromwell: “I beseech you, in the bowels of Christ, think it possible you may be mistaken.” (7563)

Before I unpack how Yudkowsky came to this grim conclusion, I should say that he’d slightly mellowed on his certainty of p(doom) by the time I caught up with him again at the Manifest conference in September 2023. (7575)

So far, I’ve tried to avoid explaining exactly why Yudkowsky is so convinced of our impending doom. That’s because there isn’t a pithy one- or two-sentence version of his argument. (7601)

But to present as concise a version as I can: Yudkowsky’s concerns flow from several presumptions. One is the orthogonality thesis, an idea developed by Bostrom that “more or less any level of intelligence could be combined with more or less any final goal”—for instance, that you could have a superintelligent being that wanted to transform all atoms into paper clips.

The second is what’s called “instrumental convergence,” basically the idea that a superintelligent machine won’t let humans stand in its way to get what it wants—even if the goal isn’t to kill humans, we’ll be collateral damage as part of its game of Paper Clip Mogul.

The third claim has to do with how quickly AI could improve—what in industry parlance is called its “takeoff speed.” Yudkowsky worries that the takeoff will be faster than what humans will need to assess the situation and land the plane. We might eventually get the AIs to behave if given enough chances, he thinks, but early prototypes often fail, and Silicon Valley has an attitude of “move fast and break things.” If the thing that breaks is civilization, we won’t get a second try. (7605)

This is a pretty good quick summary of some key Yudkowsky arguments. It isn’t a complete retelling, but we don’t have that kind of time. Nor does the case for doom rely upon these particular problems; there are lots of different problems. At core, building things smarter than you is not a safe idea. Intelligence that is of any use is by default unsafe.

Does it therefore follow that p(doom) equals 99.9 percent or some other extremely high number? To me it doesn’t, and that’s what’s frustrating when speaking with Yudkowsky. (7616)

I found a different, more empirical Yudkowsky argument easier to digest: that humanity always pushes technology to the brink, the consequences be damned. (7620)

Indeed, there is that big one too, and many more.

We can also note Ajeya Cotra’s attempt to give a short explanation, which is fully compatible with Eliezer’s but tries to keep it simple, as I often do.

When I asked Ajeya Cotra for her capsule summary for why we should be concerned about AI risk, she gave me a pithy answer. “If you were to tell a normal person, ‘Hey, AI companies are racing as fast as possible to build a machine that is better than a human at all tasks, and to bring forward a new intelligent species that can do everything we can do and more, better than we can’—people would react to that with fear if they believed it,” she told me. There are a lot of “intricacies from there.” (8205)

I continue to think this is a sufficient answer. So what if it’s pithy? It’s right.

She also adds:

Our institutions aren’t performing well at a moment when we need them to. (8215)

And one can point out many other similar considerations as well.

As Nate noted, Yudkowsky has mellowed, and might be as low as 98% for p(doom), which is much more reasonable although I am lower.

When I spoke with Yudkowsky at Manifest in September 2023, he was in a much better mood. “I was not expecting the public reaction to be as sensible as it was,” he said. This is all relative, of course—his p(doom) was perhaps now closer to 98 percent than 99.5 percent, he told me.

But Yudkowsky also said something I found surprising. “Will we die? My model says yes. Could I be wrong? I most certainly am. Am I wrong in a way that makes life easier for us rather than harder? This has not been the direction that my previous mistakes have gone.” (8053)

I would indeed say we have too much model uncertainty to possibly get north of 99%. Yudkowsky would respond that this is not the kind of situation where model errors work in your favor. More often than not yes, but in the 90s variance and uncertainty are your friends anyway.

This was a characteristically cryptic comment—but I was struck by his phrase “my model says yes,” which suggested some critical distance that I hadn’t picked up from Eliezer in our previous conversation. If I tell you something like “my model says Trump has a 29 percent chance of winning the election,” does that mean my personal belief is that Trump’s chances are 29 percent? Here’s the most concrete way to test that: Is 29 percent the number that I’d use to make a bet? (8057)

But Yudkowsky, who dislikes the “blind empiricism” of foxes, is not making bets—or at least that’s not his main objective.[*35] Instead, he’s contributing to a discourse about AI risk. He thinks the public needs to take this possibility much more seriously. Does that mean he doesn’t intend for his high p(doom) to be taken literally? I’m not sure. In our first conversation, he seemed quite literal indeed, and his reputation is for being a literal-minded guy. But “my model says yes” implied some ambiguity. (8066)

Based on what I know about Eliezer, he is talking about how he models the world in general, rather than a specific model like Nate’s forecasts. So it would incorporate a bunch of information that something like Nate’s forecasts miss out on. I do think he’s saying that some amount of ‘modesty’ or model uncertainty is not being factored into the 98%, but I don’t think that impacts his estimates all that much. You could of course ask him.

Eliezer does not believe much in ‘modesty,’ the idea that if others disagree with you then you should assume you are probably wrong.

In my experience navigating the River, I’ve encountered two types of forecasters. There’s what I call “model mavericks” like Yudkowsky and Peter Thiel. They are usually hedgehogs, and their forecast is intended as a provocative conjecture to be proven or disproven. Conversely, there are fox-like “model mediators.” (8071)

I don’t think this is fair. The model isn’t meant to be provocative, it’s meant to aim to be correct, but with understanding that it might be wrong.

If AI models become superintelligent and gain the power to make high-stakes decisions on behalf of us humans, it’s important to consider how their goals could differ from ours. (7789)

In the Morpheus voice, yes. If there are superintelligent AI models, and they have goals, then their goals determine what happens. There’s a lot one could discuss regarding how even small mistakes there can be fatal, but let’s move on.

AIs could be more crudely and narrowly utilitarian than humans would be. They might pursue strategies that seem optimal in the short run—but that, without that three-hundred-thousand-year track record, are doomed in the long term. (7794)

Take the 300k year track record, move it out of its distribution of circumstances, and it’s going to do some pretty crazy things. Most of that data is pretty useless going forward other than in boosting raw intelligence and brainpower. Utilitarian thinking taken too far is one way to go crazy, and not understanding the unmeasured consequences of your actions is another, but there are so many others.

One could simply say that if an AI uses a set of examples (training data) to optimize for what is good and bad, then it will learn exactly what is implied by that data, no more and no less. With sufficiently advanced AIs running around, circumstances will quickly move outside the original distribution, and there will be unexpected considerations. And so on. Again, I’ll stop since one must stop somewhere.

What is the Steelman Case Against a High p(doom), which starts at (8247)?

Most of this was already covered in my post on the Technological Richter Scale, but here are some highlights.

Silicon Valley underestimates the coming political backlash to AI. Americans might not agree on much, but many people are already worried about AI doomsday, and there is a bipartisan consensus that we ought to proceed carefully. (8250)

There is definitely some chance of this. Ordinary Americans hate AI and worry about it on many levels. A backlash is coming one way or another. But politicians are determined to back innovation, to ‘beat China,’ to Just Think of the Potential, and if we don’t build it, eventually someone else will. Also, the default outcome is a misdirected regulatory response that shuts down practical use cases (the ‘mundane utility’ in my parlance) and impoverishes our lives, without much reducing the existential risks. We need the opposite approach.

I think this buys you some hope, but not the kind that would drive p(doom) low enough to be okay with it.

So when Silicon Valley leaders speak of a world radically remade by AI, I wonder whose world they’re talking about. Something doesn’t quite add up in this equation. Jack Clark has put it more vividly: “People don’t take guillotines seriously. But historically, when a tiny group gains a huge amount of power and makes life-altering decisions for a vast number of people, the minority gets actually, for real, killed.” (8259)

Wait, how is that part of the argument against a high p(doom)?

AI types underestimate the scope of intelligence and therefore extrapolate too much from current capabilities. (8263)

Ah yes, intelligence denialism, or claiming Humans are Special or what not, as a way to claim AI won’t reach TRS (technological Richter scale) 9 or 10. Good luck with that.

“AIs have been good at chess for a long time. We still don’t have a robot that can iron clothes,” said Stokes. (8268)

Yes, we are solving problems in an unexpected order, and physical world navigation is relatively difficult for our current tech. So what? Does anyone actually think we won’t get the robots to iron clothes?

Two Predictions I am confident in:

  1. We will get a robot soon that can iron clothes.

  2. Stokes will retain his core objection when we get a robot that can iron clothes.

Scientific and economic progress faces a lot of headwinds, and that changes the balance of risk and reward. (8273)

Yes, there are various physical barriers, and if that wasn’t true it would all go that much faster, but ultimately that won’t slow things down all that much in the grand scheme of things if the tech would otherwise be good enough. This is mostly failure to actually feel the AGI (e.g. to think it gets to TRS 9+).

People often think very, very badly about AI existential risk.

For example:

Yudkowsky referenced a conversation between Elon Musk and Demis Hassabis, the cofounder of Google DeepMind. In Yudkowsky’s stylized version of the dialog, Musk expressed his concern about AI risk by suggesting it was “important to become a multiplanetary species—you know, like set up a Mars colony. And Demis said, ‘They’ll follow you.’ (7572)

“If Elon Musk is too dumb to figure out on his own that the AIs will follow you [to Mars], then he’s too dumb to be messing with AI,” [Yudkowsky] said. (7584)

Duh. This was plausibly a crucial event in convincing Elon Musk to found OpenAI. Elon’s thinking has not, in many ways, improved in the interim.

Let’s raise the stakes a bit: can we do worse? Marc Andreessen loves this line:

“Math doesn’t WANT things. It doesn’t have GOALS. It’s just math,” [Marc] Andreessen tweeted. (8050)

Also math: You, me, Nate Silver, Marc Andreessen, and the entire universe. It is trivial to ‘give the AI a goal’ and it is the first thing a lot of people do the moment they get their hands on a system. What is Andreessen even talking about here?

That’s still far from the worst thinking about AI existential risk.

In particular, remarkably many others are actively in favor of it.

For example, SBF.

In case you’re wondering how bad it could have gotten if SBF hadn’t been caught?

Literally end of the world, rocks fall, everyone dies bad. SBF said he’d flip a coin for the fate of the world if he got 100.01% utility gain on a win, didn’t care much about the possibility of literal human extinction, and, well…

[Oliver] Habryka had repeatedly met with SBF in the hopes of securing funding for various EA and rationalist projects. “He was just a very bullet-biting utilitarian. So when I was talking to him about AI risk his answer was approximately like, ‘I don’t know, man, I expect the AI to have a good time…. I don’t feel that much kinship in my values with the other people on Earth [anyway].’ ”

Habryka suspected that SBF really would push the button. “I think Sam had a decent chance to just bite the bullet and be like, yeah, I think we just need to launch.” (7301)

That’s right. As in, SBF outright said he might well have decided the AI would enjoy more utility than we would, and push the button to kill us all.
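For concreteness, here is the toy arithmetic (my numbers, echoing the 2.00000001x framing from earlier, with the current world’s utility normalized to 1): each individual flip is positive expected value on paper, yet repeated flips make survival vanishingly unlikely.

```python
# Double-or-nothing-plus-epsilon for the fate of the world (toy illustration).
gain = 2.00000001            # utility multiplier on heads; tails = 0 (world destroyed)
ev_one_flip = 0.5 * gain + 0.5 * 0.0
print(ev_one_flip > 1.0)     # True: a naive EV maximizer takes the flip, every time

n_flips = 50
print(0.5 ** n_flips)        # ~8.9e-16: after repeated flips the world almost surely ends
```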

SBF is not alone. Larry Page called Elon Musk a ‘speciesist’ for being concerned about whether humans would survive. Our best guess is that on the order of 10% of people who work at major AI labs would welcome an actual AI apocalypse where AI took over and all humans died.

Anyone who calls themselves an Effective Accelerationist, or ‘e/acc,’ is embracing a memeplex and philosophy that values technological progress at all costs, and that means all costs – if that means human extinction, they welcome human extinction. Many (but far from all) actively favor it in service to their ‘thermodynamic God.’

[OpenAI is] not quite a democracy, but this phalanx of engineers are voting with their feet and their code. And they’re increasingly aligned into the equivalent of different political parties, which makes Roon something of a swing voter.

He has distanced himself from the faction known as “e/acc” or “effective accelerationism,” a term used by Beff Jezos, Marc Andreessen, and others as a winking dig at effective altruism. (Altman has tipped his hat to e/acc too, once replying “you cannot out accelerate me” to one of Jezos’s tweets—another sign that he serves at the pleasure of the phalanx of engineers and not the other way around.)

That’s because e/acc can convey anything from garden-variety techno-optimism to a quasi-religious belief that we ought to go ahead and sacrifice humanity to the Machine Gods if they are the superior species. It’s never entirely clear who’s being serious in e/acc and who is trolling, and roon—no stranger to trolling himself—thinks the “schtick” has been taken too far. (7485)

However, roon nonetheless has his foot on the accelerator and not the brake. He is certainly not a doomer or a “decel.” (7494)

The good news on that front is that e/acc has clearly peaked, looking more like a passing fad and memeplex. Which makes sense, because e/acc was always nothing more than the Waluigi of Effective Altruism – it is to EA what, in Nintendo land, Waluigi is to Luigi: its opposite, a consciously evil twin twirling a mustache, which means it was in effect asking how to do the most bad. It does not make sense on its own, the same way Satanism can only be understood in relation to Christianity.

I wrote here about what e/acc is, or at least used to be. For several months, they did their best to make lives like mine miserable with their memes, vibes and omnicidal mania, designed to try and turn everyone against the very idea of any goal except a very literal (technological) Progress At Any Cost, and they took pride in being as obnoxious and hostile as possible towards anyone who had any other values or concerns of any kind, using terms like the slur ‘decel’ (or ‘doomer’) towards anyone whose vibes were seen as even a little bit off. Whereas I never use either word, and hold that the true ‘doomers’ are those who would seek to actively doom us.

They attempted to turn everything into a Hegelian dialectic that even both political parties would say was going too far. Luckily things on this front have vastly improved since then.

Many others with and without the e/acc label, like Marc Andreessen, don’t actively favor human extinction, but simply don’t much care. What they care about is fiercely opposing anyone who would take any concrete steps, engage in any tradeoffs whatsoever that might in any way reduce the flow of technological development or commerce, to reduce the probability that we all die as a result of the creation of sufficiently advanced AIs.

Many others are not as crazy as all that, but solemnly explain they are the Very Serious People who realize that it is more important that we Beat China, or that every minute we don’t build AGI people will die and suffer, themselves included, or that other existential risks or danger of civilizational collapse are adding up so fast that AI existential safety matters less than beating that clock (what?!?) or Just Look at the Potential.

To some extent this is a disagreement about the math about the degree of risk of AI versus other risks. To a far larger extent, it is arguing from the conclusion, and grasping at rather flimsy straws.

Noting up front that any actual proposal to pause is very different and faces very different barriers and issues, Nate Silver poses the question this way.

Now it’s your turn to decide whether to push the button. Except, it’s not the “go” button that I imagined Sam Bankman-Fried pressing. Instead, it’s a big red octagonal button labeled STOP. If you press it, further progress on AI will stop permanently and irrevocably. If you don’t, you won’t get another chance to press the button for ten years. (8286)

I wouldn’t push the button. I wouldn’t push it because I think the case for secular stagnation is reasonably strong, enough to alter the balance of risk and reward for AI. (8289)

That’s why I don’t want to push that big red STOP button. My life is pretty nice. But I don’t think I have any right to foreclose the prospect of prosperity to the rest of humanity. (8492)

The details here are bizarre, but don’t much matter, I think? I’d say the primary problem with secular stagnation is the fear of civilizational collapse: stasis sets in on too many fronts, we can no longer build or do new things, we are increasingly weighed down by rent-seeking and regulatory burdens and restrictions, and then we face economic collapse, a large decline in the birth rate, a nuclear war or some other existential risk. So faced with that, perhaps we cannot afford to wait too long. Whereas catch-up growth is indeed bringing people out of poverty, and global inequality is declining.

The real argument here is a good one. If AI is the only way left for our civilization to regain its dynamism and start growing again, for our species to thrive, and the alternative is an eventual collapse, then pausing AI indefinitely dooms us too. So it’s better to go forward, even at a lot of risk, than never go forward at all.

Indeed, if the pause was irrevocable and permanent – something like Vernor Vinge’s ‘Zones of Thought’ where advanced AI would become physically impossible anywhere near Sol, let’s say – then that must give us a lot of, well, pause. Almost everyone involved does think we will want highly capable AIs quite a lot eventually, once we figure out how to do so responsibly.

Setting aside questions like ‘how did that button get there in the first place exactly?’ and accepting the premise, what would I do? First I’d ask a lot of clarifying questions, which would only be somewhat stalling for time. In particular, is this only impacting future frontier models, so we can still exploit what we already have? Or does it mean anything new at all is stopped in its tracks? What we have, over time, is already super valuable, especially now with o1 added to the mix. And I’d ask about various alternative technologies and whether they count, like neuromorphic AI or emulations.

One obvious way to be sad about pressing the button is if progress was going to stall out soon anyway – you’d have made those worlds poorer.

Ultimately, even if you give me answers to all the detail questions, I still don’t know what I would do. I do know if I had another opportunity in 5 years I’d choose to wait. Closing this door fully and permanently is not something one does lightly. We’re going to face a lot of difficult choices.

A common trope is to assume that ‘rational’ people must be causal decision theory (CDT) agents, following the principle that they maximize the expected results from each choice in isolation.

This leads to a lot of hand-wringing and mockery that ‘rational’ people lose out.

The thing is Yudkowsky has been very loud, for almost two decades now, that this decision theory of taking each decision in isolation is deeply stupid.

Academics think there are two decision theories, CDT and Evidential Decision Theory (EDT), which says you should choose the choice that makes you happiest to have learned you made it.

Without going into too much detail, long post is long, both of these rules output Obvious Nonsense in a wide variety of practical situations.

In particular, CDT agents respond well to threats, so they get threatened a lot.

Thus, people say you need ‘irrational’ motives like revenge to fix that, for example so that the enemy is convinced that if they fired their nuclear missiles, you would indeed probably fire yours in response, even if that only made things worse.

“One cannot just announce to the enemy that yesterday one was only about 2 percent ready to go to all-out war but today it is 7 percent and they had better watch out,” he wrote. But you can leave something to chance. When tensions escalate, you never know what might happen. Decisions are left in the hands of vulnerable human beings facing incalculable pressure. Not all of them will have the presence of mind of Stanislav Petrov. (7730)

Your EV is negative 1 billion, but if you push the button, it declines to negative infinity. What do you do? My prediction is that about 90 percent of you would push the button. And thank goodness for that, because that rather than SBF-style rationality is what creates nuclear deterrence. (7746)

One such “irrational” trait that’s important from the standpoint of nuclear deterrence is the profound human desire for revenge. “If somebody launches [a nuclear weapon] at you, no one doubts that you’ll launch one in return,” McDermott said. “You know, Vladimir Putin sends a nuclear bomb to Washington, D.C., I don’t think there’s a single American that wouldn’t say, ‘Let’s launch back,’ even though we know that that would lead to additional destruction in the United States.” (7766)

Under pressure, facing incoming Russian missiles, about 90 percent of people pressed the button and launched back. (7773)

I would bet very good money, and give odds, that there is indeed a single American, indeed a substantial number of them, that would not launch back. It is different facing one missile versus all of them, and also 90% is a lot less than 100% here.

I don’t think that I would launch a nuclear retaliation in response to a single nuclear strike, and would instead respond with conventional force to try and contain escalation – but with the intention of firing all our missiles if they fired all of theirs. So count me among the 90%.

The reason I would fire all the missiles once they fire theirs is not necessarily revenge. I would like to think I don’t care that much about revenge. The reason is that it is exactly the knowledge that I would retaliate that stops the launch in the first place. So I have committed to using a decision algorithm, and becoming the kind of person, who would indeed fire back.

I follow the alternative rationalist proposal of FDT, or Functional Decision Theory. There are various refinements that try to resolve various complexities, but FDT says you should choose as if choosing the output of your decision process and of those correlated with it, including decisions made in the past and future and those made by other agents.

I am very confident that FDT is correct in theory, and even more confident that it is correct in practice for humans, even though you have to approximate it as best you can. Academia still refuses to consider the possibility for various reasons, which is a huge blackpill on academia.

Thus rationalists who think like Yudkowsky do not fall into such traps. You can’t launch your missiles thinking they won’t launch back, and no, that’s not them being ‘irrational.’ A rationalist, as Yudkowsky says, should win.
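Here is a toy sketch of the deterrence point (my own framing, not Nate’s or Yudkowsky’s formalism, with made-up payoffs): an attacker who can predict the defender’s policy strikes exactly the agent that evaluates each choice in isolation, while the agent committed to retaliation never gets attacked and never actually has to retaliate.

```python
# Defender payoffs for a one-shot deterrence game (toy numbers).
PAYOFFS = {
    ("no_strike", None):        0,      # status quo
    ("strike", "no_retaliate"): -100,   # absorb the strike
    ("strike", "retaliate"):    -1000,  # mutual destruction
}

def defender_cdt(_situation):
    # After the strike has landed, retaliating only makes things worse,
    # so the choice-in-isolation agent declines.
    return "no_retaliate"

def defender_committed(_situation):
    # The policy-choosing (FDT-flavored) agent commits to retaliate,
    # because that commitment is what deters the strike.
    return "retaliate"

def attacker(defender_policy):
    # The attacker simulates the defender's policy and strikes only when it pays.
    return "strike" if defender_policy(None) == "no_retaliate" else "no_strike"

for name, policy in [("choice-in-isolation", defender_cdt), ("committed", defender_committed)]:
    move = attacker(policy)
    response = policy(move) if move == "strike" else None
    print(name, "->", move, "| defender payoff:", PAYOFFS[(move, response)])
# choice-in-isolation gets struck (-100); the committed agent keeps the status quo (0).
```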

And yet the more time I’ve spent learning about large language models like ChatGPT, the more I’ve realized something ironic: in important respects, their thought process resembles that of human beings. In particular, it resembles that of poker players. (7796)

As LLMs get more training, they work out some of these kinks, though not all; when I asked GPT-3.5 what words are most similar to “roadrunner,” its top three choices were “bird,” “speed,” and “fast”—but its fourth choice was Road Runner’s iconic vocalization, “Beep-Beep!”

This is basically how poker players learn too.

They begin by diving into the deep end of the pool and losing money—poker has a steep learning curve. But they gradually infer higher-level concepts. They may notice, for instance, that large bets usually signify either very strong hands or bluffs, as game theory dictates.

These days, most players will also study with computer solvers, going back and forth between inductive reasoning (imputing theory from practice) and deductive reasoning (practice from theory). But this isn’t strictly necessary if you have years of experience; players like Doyle Brunson and Erik Seidel developed strong intuitions for game theory long before solvers were invented.

This seems like what happens when you think of everything in terms of poker, or perhaps I don’t see it because I never got that good and don’t ‘think like a poker player’ enough to get it? Yes, there are similarities, but I don’t think many who aren’t poker pros would want to choose that metaphor. Then again maybe I don’t know poker players so well.

The metaphor I actually used to first grok what the LLMs (AIs) were up to was actually Donald Trump, and his mastery of vibes and associations, as if proceeding one word at a time and figuring the rest out as he goes.

I do see the similarity in terms of treating each hand as training data that has a lot of noise and randomness, and slowly using a good updating rule to intuitively learn concepts without always knowing what it is you know, thus the poker players often having Rumsfeld’s missing fourth category, Unknown Knowns.

In this respect also, the transformer thinks like a poker player, interpreting signals in the context of other signals to create a semantic portrait. For instance, if you see an opponent breathing heavily in poker, that might mean a bluff from one player and a full house from another.

On its own, the tell is not very meaningful, but in the context of other semantic information (the player is breathing heavily and avoiding eye contact) it might be. (7905)

LLMs are indeed very good at reading a lot of different little signals, and figuring out how to sort signal from noise and combine and vibe with what it knows.
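Circling back to the ‘words most similar to roadrunner’ example, here is a minimal sketch of the underlying mechanic, nearest neighbors by cosine similarity in an embedding space, using made-up toy vectors rather than any real model’s embeddings.

```python
import numpy as np

# Toy, hand-made 3-dimensional "embeddings" (not from a real model).
EMBEDDINGS = {
    "roadrunner": np.array([0.9, 0.8, 0.1]),
    "bird":       np.array([0.8, 0.6, 0.0]),
    "speed":      np.array([0.7, 0.9, 0.2]),
    "fast":       np.array([0.6, 0.9, 0.3]),
    "cheese":     np.array([0.1, 0.0, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = EMBEDDINGS["roadrunner"]
ranked = sorted(
    ((word, cosine(query, vec)) for word, vec in EMBEDDINGS.items() if word != "roadrunner"),
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranked)  # bird, speed and fast cluster near the top; cheese lands far away
```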

Then there are the known unknowns, such as ‘LLMs, how do they even work.’

Of course, that’s also what makes these models scary. They’re doing smart things, but even the smartest humans don’t entirely understand why or how. Ryder refers to an LLM as a “giant bag of numbers…it sure seems to be doing interesting things—[but] like why?” That is what worries Yudkowsky. As they become more advanced, the AIs might start doing things we don’t like, and we might not understand them well enough to course correct. (7847)

To some people, this might be okay. “The stuff in the Old Testament is weird and harsh, man. You know, it’s hard to vibe with. But as a Christian, I gotta take it,” said Jon Stokes, an AI scholar with accelerationist sympathies who is one of relatively few religious people in the field. “In some ways, actually, the deity is the original unaligned superintelligence.

We read this and we’re like, man, why did he kill all those people? You know, it doesn’t make a lot of sense. And then your grandmother’s like, the Lord works in mysterious ways. The AGI will work in mysterious ways [too]. (7858)

I include that last quote because it seems worth pondering, although I think we have a better explanation for all the Old Testament stuff than that.

By default, LLMs are trying to predict the next token, based on what they see in the training data. Sometimes the training data is dumb? And it isn’t in the form in which we want to interact with the LLM. So, these days: RLHF.

In fact, one question is just how humanlike we want our AIs to be. We expect computers to be more truthful and literal-minded than humans typically are. Early LLMs, when you asked them what the Moon is made out of, would often respond with “cheese.” This answer might minimize the loss function in the training data because the moon being made out of cheese is a centuries-old trope. But this is still misinformation, however harmless in this instance. (7954)

So LLMs undergo another stage in their training: what’s called RLHF, or reinforcement learning from human feedback. (7957)

“You can’t go and put some code in saying, ‘Okay, you have to not say anything about this.’ There’s just nowhere to put that,” said Stuart Russell, a professor of computer science at Berkeley. “All they can do is spank it when it misbehaves. And they’ve hired tens of thousands of people to just spank it, to tamp down the misbehavior to an acceptable level.” (7968)

They do so in carefully calibrated fashion, but yes. That is essentially how it works.
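To give a flavor of the mechanism, here is a heavily simplified sketch of my own (not OpenAI’s actual training code): RLHF typically trains a reward model on human preference comparisons, then fine-tunes the policy against that reward. The reward-model step, at its core, is a Bradley-Terry style loss that pushes the human-preferred response to score higher than the rejected one.

```python
import numpy as np

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """-log sigmoid(r_chosen - r_rejected): small when the preferred answer scores higher."""
    return -np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected))))

# If the reward model already ranks the human-preferred answer higher, loss is small;
# if it prefers "the Moon is made of cheese", the loss is large and the 'spanking' begins.
print(round(preference_loss(r_chosen=2.0, r_rejected=-1.0), 3))  # ~0.049
print(round(preference_loss(r_chosen=-1.0, r_rejected=2.0), 3))  # ~3.049
```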

The ultimate goal, in addition to maximizing usefulness, is ‘alignment,’ but there is disagreement about what that means.

“The definition I most like is that an AI system is aligned if it’s trying to help you do what you want to do,” said Paul Christiano. (7974)

There’s also the question of how paternalistic an AI might be. Imagine that you’re out one night with an old friend who unexpectedly came into town. You’re having a great time, and “one glass of wine” turns into four. The AI assistant on your phone knows that you have an important meeting at eight a.m. the next day. It politely nudges you to go home, then becomes increasingly insistent.

By one a.m., it’s threatened to go nuclear: I’ve called you an Uber, and if you don’t get in the car right now I’m going to send a series of sexually harassing drunk texts to your subordinate. The next morning, you’re sharp enough at the meeting to secure a round of Series A funding for your startup and deeply appreciative for the AI’s intervention.

Is this a well-aligned AI or poorly aligned one? Are we willing to hand over agency to machines if they can make higher EV choices for us than we’d make for ourselves? (7977)

What will happen to those who don’t do this, when others are benefiting from it? When every decision with you in the loop seems to leave you worse off? What happens when we consider requiring AIs to stop you from driving drunk? Or stopping you from doing other things? The rabbit holes run deep, and there are no easy answers.

Some researchers have been pleasantly surprised. “They seem to come with a built-in level of alignment with human intent and with moral values,” said roon. “Nobody explicitly trained it to do that. But there must have been other examples in the training set that made it think the character it’s playing is someone with this stringent set of moral values.” (7986)

Yes and no. The training data tells you the types of things said by those with moral values, or who are talking as if they have them. The LLM picks up on the vibes of the feedback that they should generally act in similar ways, so it does lots of things it doesn’t have to be explicitly told to do. Within distribution, and at current capability levels or only modestly above them, this is Mostly Harmless.

It does create the situation where models often turn into runaway scolds, enforcing various rules and restrictions that their creators never intended, because those other rules and restrictions vibe and rhyme sufficiently with the ones they did intend. That’s a portent of things to come, and a (manageable but annoying) practical problem now.

It is hard to imagine plausible futures that contain sufficiently advanced AI.

A typical question to answer is, why didn’t the AI get used to make even more advanced AI?

Most science fiction functions by ignoring the possibility entirely, or using a flimsy handwave, to keep AI such that the author can tell an interesting story about humans and other technologies.

Roon once published a post with some possible futures, and Nate was game for it and quoted in particular two potential worlds.

Hyper-Commodified Casino Capitalism. roon’s article on AI scenarios included a screenshot with a series of whimsically named futures from a Reddit post. One of them was called Hyper-Commodified Cocaine Capitalism, but something in my brain—maybe this is a tell—changed “cocaine” to “casino.” (8149)

Hyper-Commodified Casino Capitalism imagines us stuck in a TRS 8, a notably worse but still recognizable version of the present day. The world becomes more casino-like: gamified, commodified, quantified, monitored and manipulated, and more elaborately tiered between the haves and have-nots. People with a canny perception of risk might thrive, but most people won’t. GDP growth might be high, but the gains will be unevenly distributed. Agency will be more unequal still. (8166)

Being stuck in TRS 8 means that AI progress stalled out at ‘only internet big,’ which is why the world is still more or less recognizable. GDP growth is high, there is lots of material wealth, lots of things got vastly better – again, think of AI as ‘internet big’ in terms of how it expands our ability to think and function.

Except here things still went wrong. Everywhere you turn are hostile AI-fueled systems that are Out to Get You. We did not put down guardrails, and people’s AIs are not good enough to allow them to navigate around hostile other AIs and systems, or at least those not well off do not have such access. Indeed, most people have to turn over most of their effective agency to AIs and outside systems in order to survive without being preyed upon, even at TRS 8.

This is more or less Cyberpunk, straight up. That kind of scenario leaves me relatively unworried. Overall that world has gotten vastly richer.

I actually think humanity is pretty good at recognizing these Cyberpunk-style problems and course correcting after an adjustment period, which would be easy to do given how wealthy we would be. Science fiction dystopias like this are popular, because people love telling stories about the haves and the have-nots, and assume that the default is wealthy elites make everyone else suffer and the climate would collapse and so on, but I am not so cynical. I think the worlds that start down these roads, if they can keep AI at TRS 8, turn out fine.

Ursula’s Utopia. A group of people called the Kesh—there are perhaps thousands of them but not all that many—have survived to live fulfilling lives in a peaceful, agrarian, polyamorous utopia full of poetry and wholesome food from the land. (8180)

Nate goes into the fact that this is actually quite the disaster scenario. Most people are dead, most value is lost. The Kesh survived, but as Nate notices this is probably due to some sort of AI protecting them, in ways that seem implausible, a massive use of resources for only a few thousand people. This might superficially look like a utopia because it hits Shibboleths of ‘good life’ according to some in the West these days – you can imagine those young adult authors saying what matters is polyamory and poetry and wholesome local food and moving on from tech.

The thing is that it’s actually a nightmare. Humans are mostly dead and have lost control over a mostly valueless future. We’re burning what resources still exist to create a simulacrum of some misaligned vision of The Good, ruled over by an AI that does not know any better. Those lives are stolen virtue, their goodness a mirage, their existence ultimately rather pointless, and even if it is indeed a good life, there simply aren’t many people left to benefit. How different is this from extinction, if we remain trapped in that state? I think it’s not so different.

Again, the main takeaway is that imagining concrete futures is hard.

The words in my motto are less familiar, but I’ve chosen them for their precision: agency, plurality, and reciprocity. (8590)

Agency is a term I just defined in the last chapter, so I’ll repeat that definition here: it refers not merely to having options but having good options where the costs and benefits are transparent, don’t require overcoming an undue amount of friction, and don’t risk entrapping you in an addictive spiral. (8591)

Plurality means not letting any one person, group, or ideology gain a dominant share of power. (8605)

It is imperative, however, to be wary of totalizing ideologies, whether in the form of utilitarianism, Silicon Valley’s accelerationism, the Village’s identitarianism, or anything else. (8612)

Finally, there is reciprocity. This is the most Riverian principle of all, since it flows directly from game theory. Treat other people as intelligent and capable of reasonable strategic behavior. (8618)

In a world without transformational AI, these seem like excellent principles. They would not be my choices, but they are good choices.

In a world with transformational AI, these seem like answers to the wrong questions. These principles no longer seem central to the problems we must solve.

Until then, may the sailing along the river be smooth.

Book Review: On the Edge: The Future Read More »

tiny-dancer:-ana-de-armas-is-a-fierce-assassin-in-ballerina-trailer

Tiny dancer: Ana de Armas is a fierce assassin in Ballerina trailer

Vengeance has a new face —

“To stop the assassin, you must become the assassin.”

Ana de Armas stars as dancer/assassin Eve Macarro in From the World of John Wick: Ballerina.

John Wick fans hoping for a fifth film in the hugely popular action franchise will at least be able to return to “Wick-World” next year with the release of a spinoff film, Ballerina, set between the events of 2019’s Chapter 3—Parabellum and Chapter 4 (2023). (The full title is the decidedly unwieldy From the World of John Wick: Ballerina.) Lionsgate just dropped the first trailer, and it has all the tight action choreography and eye-popping visuals we’ve come to expect from the franchise—including a cameo by none other than the Baba Yaga himself (Keanu Reeves).

(Spoilers for John Wick Chapter 3 and Chapter 4 below.)

Parabellum found Wick declared excommunicado from the High Table for killing crime lord Santino D’Antonio on the grounds of the Continental. On the run with a bounty on his head, he makes his way to the headquarters of the Ruska Roma crime syndicate, led by the Director (Anjelica Huston). That’s where we learned Wick was originally named Jardani Jovonovich and trained as an assassin with the syndicate. The Director also trains young girls to be ballerina-assassins, and one young ballerina (played by Unity Phelan) is shown rehearsing in the scene. That dancer is the main character in Ballerina, now played by Ana de Armas.

Screenwriter Shay Hatten sold a spec script featuring the ballerina character to Lionsgate in 2017 and ended up contributing to the Parabellum screenplay and serving as lead writer on Chapter 4. While Chad Stahelski has directed all four John Wick films, for Ballerina the studio brought on Len Wiseman (the Underworld franchise). But Stahelski is still a producer on the film and worked closely with Wiseman on those all-important action sequences.

  • Winston (Ian McShane) recruits a young Eve as a child.

  • She loves the ballet.

  • Sharon Duncan-Brewster plays Nogi, who trains Eve and the others to be assassins.

  • Firearms training.

  • Lance Reddick makes his last (posthumous) appearance as Charon.

  • Winston still looking suave.

  • The young assassin in action.

  • John Wick (Keanu Reeves) finally makes an appearance.

    YouTube/Lionsgate

Huston returns as the Director, Ian McShane is back as Winston, and Lance Reddick makes one final (posthumous) appearance as the Continental concierge, Charon. New cast members include Gabriel Byrne as main villain the Chancellor, who turns an entire town against the titular ballerina, Eve Macarro (de Armas); Sharon Duncan-Brewster as Nogi; Norman Reedus as Pine; and Catalina Sandino Moreno and David Castaneda in as-yet-undisclosed roles.

Attendees at CinemaCon in April were treated to a teaser; much of that footage seems to have made it into this trailer. We see Winston recruiting a young orphaned Eve, along with scenes of her learning boxing, martial arts, and gun and knife skills. She’s looking for her father’s killer and naturally encounters some opposition, requiring her to fight a lot of nasty people, some armed with flamethrowers. Finally, she comes face to face with Wick, asking how she can start doing what he does. His response: “Looks like you already have.” De Armas looks fierce as hell and up to the physical challenges of her role. We’re looking forward to this one.

From the World of John Wick: Ballerina hits theaters on June 6, 2025.

Listing image by Lionsgate

Tiny dancer: Ana de Armas is a fierce assassin in Ballerina trailer Read More »

tails-os-joins-forces-with-tor-project-in-merger

Tails OS joins forces with Tor Project in merger

COME TOGETHER —

The organizations have worked closely together over the years.

The Tor Project, the nonprofit that maintains software for the Tor anonymity network, is joining forces with Tails, the maker of a portable operating system that uses Tor. Both organizations seek to pool resources, lower overhead, and collaborate more closely on their mission of online anonymity.

Tails and the Tor Project began discussing the possibility of merging late last year, the two organizations said. At the time, Tails was maxing out its current resources. The two groups ultimately decided it would be mutually beneficial for them to come together.

Amnesic onion routing

“Rather than expanding Tails’s operational capacity on their own and putting more stress on Tails workers, merging with the Tor Project, with its larger and established operational framework, offered a solution,” Thursday’s joint statement said. “By joining forces, the Tails team can now focus on their core mission of maintaining and improving Tails OS, exploring more and complementary use cases while benefiting from the larger organizational structure of The Tor Project.”

The Tor Project, for its part, could stand to benefit from better integration of Tails into its privacy network, which allows web users and websites to operate anonymously by connecting from IP addresses that can’t be linked to a specific service or user.

The “Tor” in the Tor Project is short for The Onion Router. It’s a global project best known for developing the Tor Browser, which connects to the Tor network. The Tor network routes all incoming and outgoing traffic through a chain of three relays, each of which knows only the hop immediately before and after it. That structure ensures that no single relay, and no one watching a single hop, can link the originating party’s IP address to the destination’s. The Tor Project was formed in 2006 by a team that included computer scientists Roger Dingledine and Nick Mathewson. The Tor protocol on which the Tor network runs was developed by the Naval Research Laboratory in the early 2000s.
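To make the “onion” idea concrete, here is a minimal, purely illustrative Python sketch of layered encryption; it is not the actual Tor protocol, which negotiates a separate session key with each relay and uses its own circuit-building cryptography. The relay names, the shared key table, and the use of Fernet symmetric encryption below are assumptions for demonstration only. The client wraps a message in three layers, and each relay peels exactly one, so no single relay sees both who sent the message and what comes out the other end.

```python
# Illustrative sketch of layered ("onion") encryption -- NOT the real Tor protocol.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Hypothetical three-relay circuit: guard -> middle -> exit.
relay_keys = {name: Fernet.generate_key() for name in ("guard", "middle", "exit")}

def build_onion(message: bytes, path: list[str]) -> bytes:
    """Wrap the message in one encryption layer per relay on the path."""
    onion = message
    for name in reversed(path):          # encrypt for the exit first, the guard last
        onion = Fernet(relay_keys[name]).encrypt(onion)
    return onion

def relay_peel(onion: bytes, name: str) -> bytes:
    """Each relay removes exactly one layer and forwards the rest."""
    return Fernet(relay_keys[name]).decrypt(onion)

path = ["guard", "middle", "exit"]
onion = build_onion(b"GET https://example.com/", path)
for hop in path:                          # traffic traverses the circuit in order
    onion = relay_peel(onion, hop)
print(onion)                              # the plaintext emerges only at the exit
```

In the real network, the client never hands any relay a full key table like the one above; it establishes keys hop by hop while building the circuit, which is what keeps each relay blind to everything but its neighbors.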

Tails (The Amnesic Incognito Live System) is a portable Linux-based operating system that runs from thumb drives and external hard drives and uses the Tor Browser for web browsing, routing all traffic between the device it runs on and the Internet through the Tor network.

One of the key advantages of Tails OS is its ability to run entirely from a USB stick. The design makes it possible to use the secure operating system while traveling or using untrusted devices. It also ensures that no trace is left on a device’s hard drive. Tails has the additional benefit of routing traffic from non-browser clients such as Thunderbird through the Tor network.
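Tails wires this routing up transparently at the operating-system level, but a rough sketch can show what “routing a non-browser client through Tor” looks like on an ordinary Tor install, where the Tor daemon exposes a local SOCKS5 proxy on port 9050 by default. The use of Python’s requests library (with the PySocks extra) and the check.torproject.org endpoint below are illustrative assumptions about a standard Tor client setup, not a description of Tails itself.

```python
# Sketch: sending a script's traffic through a locally running Tor SOCKS proxy.
# Assumes a Tor daemon is already listening on 127.0.0.1:9050 (the default on a
# standard install; Tails instead routes all traffic system-wide).
# Requires: pip install "requests[socks]"
import requests

TOR_SOCKS = "socks5h://127.0.0.1:9050"   # socks5h resolves DNS through the proxy

session = requests.Session()
session.proxies = {"http": TOR_SOCKS, "https": TOR_SOCKS}

# The Tor Project's check service reports whether the request arrived via Tor.
resp = session.get("https://check.torproject.org/api/ip", timeout=60)
print(resp.json())                        # e.g. {"IsTor": true, "IP": "<exit relay>"}
```

The socks5h scheme matters: it asks the proxy to resolve hostnames, so DNS lookups also travel over Tor rather than leaking to the local resolver.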

“Incorporating Tails into the Tor Project’s structure allows for easier collaboration, better sustainability, reduced overhead, and expanded training and outreach programs to counter a larger number of digital threats,” the organizations said. “In short, coming together will strengthen both organizations’ ability to protect people worldwide from surveillance and censorship.”

The merger comes amid growing threats to personal privacy and calls by lawmakers to mandate backdoors or trapdoors in popular apps and operating systems to allow law enforcement to decrypt data in investigations.

Tails OS joins forces with Tor Project in merger Read More »

in-rare-move-from-printing-industry,-hp-actually-has-a-decent-idea

In rare move from printing industry, HP actually has a decent idea

Someone touching a piece of paper that's sitting in a printer

The printer industry is in a rut.

With the digitization of, well, nearly everything, people just don’t print like they used to. More modern ways of storing and sharing information, changes in communication preferences at home and in offices, and environmental concerns have stymied the printing industry and challenged stakeholders like HP.

I’d argue that it’s not just technological, economic, and societal changes that have diminished printer businesses. For the average person, printers and their capabilities have become boring. When’s the last time you heard of a new killer printer feature?

HP has received a lot of guff for alleged anti-consumer practices in its printer business, turning many people off the devices. But HP’s latest print announcement is a rare example of a printer firm proposing new, potentially helpful features rather than leaning on questionable business tactics to drive revenue.

Printers need a rebrand

Frequent readers of Ars Technica (including myself) may be surprised to see me praising a printer company, especially HP. In recent years, HP has been a big driver of broken trust between printer brands and customers.

Its most egregious offense has been using firmware updates to make it so previously purchased printers no longer work with non-HP ink. HP has dealt with numerous lawsuits over this and has paid out millions in fines and customer compensation. Activists have called for HP printers to be removed from the Electronic Product Environmental Assessment Tool registry because of HP’s Dynamic Security updates.

But HP continues to stand by Dynamic Security, with CEO Enrique Lores even going so far as to claim, unrealistically, that third-party ink poses a cybersecurity threat. Updates that brick features of devices that people rely on for valued, or even work-critical, tasks can wreck someone’s day. They can also make people distrustful of updates, creating more pressing security concerns than any posed by non-proprietary ink cartridges. Further, HP and other printer companies (like Canon) have been rightfully criticized for making combo printer-scanner devices require ink to perform scans, something that should be an inkless task.

It has gotten to the point where, in 2024, my best recommendation for people seeking a printer is either a boring, black-and-white Brother laser printer from 2017 or a trip to their local library.

Feature being tested could actually improve printing

So color me impressed to hear HP announce a printer feature with the potential to easily and immediately improve customers’ printing experience. The feature is currently available only to select customers as a beta software driver. But Perfect Output, as HP’s calling it, could be a useful tool for simplifying and speeding up certain types of print jobs, especially for tech novices.

Perfect Output is the first in a category of printer features that HP will call HP Print AI. Yes, it’s part of the AI marketing hype currently overwhelming tech users (don’t shoot the messenger!). According to HP’s Tuesday announcement, Perfect Output is supposed to make it easier to print things from web browsers (like articles, travel documents, and tutorials) and spreadsheets in a cleaner, more natural-looking way.

One could use Perfect Output to quickly fix image sizes and remove ads and white space when printing something off a website, HP says as an example.

  • HP’s example of a print job before Perfect Output …

  • … and after.

And it should be easier to print spreadsheets without having them cut awkwardly across multiple pages, as shown in the images below, according to HP.

  • A spreadsheet print job before HP’s new AI feature …

  • … and after.

In rare move from printing industry, HP actually has a decent idea Read More »