Author name: Beth Washington


Here’s what NASA would like to see SpaceX accomplish with Starship this year


Iterate, iterate, and iterate some more

The seventh test flight of Starship is scheduled for launch Thursday afternoon.

SpaceX’s upgraded Starship rocket stands on its launch pad at Starbase, Texas. Credit: SpaceX

SpaceX plans to launch the seventh full-scale test flight of its massive Super Heavy booster and Starship rocket Thursday afternoon. It’s the first of what might be a dozen or more demonstration flights this year as SpaceX tries new things with the most powerful rocket ever built.

There are many things on SpaceX’s Starship to-do list in 2025. They include debuting an upgraded, larger Starship, known as Version 2 or Block 2, on the test flight set to launch Thursday. The one-hour launch window opens at 5 pm EST (4 pm CST; 22:00 UTC) at SpaceX’s launch base in South Texas. You can watch SpaceX’s live webcast of the flight here.

SpaceX will again attempt to catch the rocket’s Super Heavy booster—more than 20 stories tall and wider than a jumbo jet—back at the launch pad using mechanical arms, or “chopsticks,” mounted to the launch tower. Read more about the Starship Block 2 upgrades in our story from last week.

You might think of Thursday’s Starship test flight as an apéritif before the entrées to come. Ars recently spoke with Lisa Watson-Morgan, the NASA engineer overseeing the agency’s contract with SpaceX to develop a modified version of Starship to land astronauts on the Moon. NASA has contracts with SpaceX worth more than $4 billion to develop and fly two Starship human landing missions under the umbrella of the agency’s Artemis program to return humans to the Moon.

We are publishing the entire interview with Watson-Morgan below, but first, let’s assess what SpaceX might accomplish with Starship this year.

There are many things to watch for on this test flight, including the deployment of 10 satellite simulators to test the ship’s payload accommodations and the performance of a beefed-up heat shield as the vehicle blazes through the atmosphere for reentry and splashdown in the Indian Ocean.

If this all works, SpaceX may try to launch a ship into low-Earth orbit on the eighth flight, expected to launch in the next couple of months. All of the Starship test flights to date have intentionally flown on suborbital trajectories, bringing the ship back toward reentry over the sea northwest of Australia after traveling halfway around the world.

Then, there’s an even bigger version of Starship called Block 3 that could begin flying before the end of the year. This version of the ship is the one that SpaceX will use to start experimenting with in-orbit refueling, according to Watson-Morgan.

In order to test refueling, two Starships will dock together in orbit, allowing one vehicle to transfer super-cold methane and liquid oxygen into the other. Nothing on this scale has ever been attempted before. Future Starship missions to the Moon and Mars may require 10 or more tanker missions to gas up in low-Earth orbit. All of these missions will use different versions of the same basic Starship design: a human-rated lunar lander, a propellant depot, and a refueling tanker.

Artist’s illustration of Starship on the surface of the Moon. Credit: SpaceX

Questions for 2025

Catching Starship back at its launch tower and demonstrating orbital propellant transfer are the two most significant milestones on SpaceX’s roadmap for 2025.

SpaceX officials have said they aim to fly as many as 25 Starship missions this year, allowing engineers to more rapidly iterate on the vehicle’s design. SpaceX is constructing a second launch pad at its Starbase facility near Brownsville, Texas, to help speed up the launch cadence.

Can SpaceX achieve this flight rate in 2025? Will faster Starship manufacturing and reusability help the company fly more often? Will SpaceX fly its first ship-to-ship propellant transfer demonstration this year? When will Starship begin launching large batches of new-generation Starlink Internet satellites?

Licensing delays at the Federal Aviation Administration have been a thorn in SpaceX’s side for the last couple of years. Will those go away under the incoming administration of President-elect Donald Trump, who counts SpaceX founder Elon Musk as a key adviser?

And will SpaceX gain a larger role in NASA’s Artemis lunar program? The Artemis program’s architecture is sure to be reviewed by the Trump administration and the nominee for the agency’s next administrator, billionaire businessman and astronaut Jared Isaacman.

The very expensive Space Launch System rocket, developed by NASA with Boeing and other traditional aerospace contractors, might be canceled. NASA currently envisions the SLS rocket and Orion spacecraft as the transportation system to ferry astronauts between Earth and the vicinity of the Moon, where crews would meet up with a landing vehicle provided by commercial partners SpaceX and Blue Origin.

Watson-Morgan didn’t have answers to all of these questions. Many of them are well outside of her purview as Human Landing System program manager, so Ars didn’t ask. Instead, Ars discussed technical and schedule concerns with her during the half-hour interview. Here is one part of the discussion, lightly edited for clarity.

Ars: What do you hope to see from Flight 7 of Starship?

Lisa Watson-Morgan: One of the exciting parts of working with SpaceX is these test flights. They have a really fast turnaround, where they put in different lessons learned. I think you saw many of the flight objectives that they discussed from Flight 6, which was a great success. I think they mentioned different thermal testing experiments that they put on the ship in order to understand the different heating, the different loads on certain areas of the system. All that was really good with each one of those, in addition to how they configure the tiles. Then, from that, there’ll be additional tests that they will put on Flight 7, so you kind of get this iterative improvement and learning that we’ll get to see in Flight 7. So Flight 7 is the first Version 2 of their ship set. When I say that, I mean the ship, the booster, all the systems associated with it. So, from that, it’s really more just understanding how the system, how the flaps, how all of that interacts and works as they’re coming back in. Hopefully we’ll get to see some catches, that’s always exciting.

Ars: How did the in-space Raptor engine relight go on Flight 6 (on November 19)?

Lisa Watson-Morgan: Beautifully. And that’s something that’s really important to us because when we’re sitting on the Moon… well, actually, the whole path to the Moon as we are getting ready to land on the Moon, we’ll perform a series of maneuvers, and the Raptors will have an environment that is very, very cold. Due to that, it’s going to be important that they’re able to relight for landing purposes. So that was a great first step towards that. In addition, after we land, clearly the Raptors will be off, and it will get very cold, and they will have to relight in a cold environment (to get off the Moon). So that’s why that step was critical for the Human Landing System and NASA’s return to the Moon.

A recent artist’s illustration of two Starships docked together in low-Earth orbit. Credit: SpaceX

Ars: Which version of the ship is required for the propellant transfer demonstration, and what new features are on that version to enable this test?

Lisa Watson-Morgan: We’re looking forward to the Version 3, which is what’s coming up later on, sometime in ’25, in the near term, because that’s what we need for propellant transfer and the cryo fluid work that is also important to us… There are different systems in the V3 set that will help us with cryo fluid management. Obviously, with those, we have to have the couplers and the quick-disconnects in order for the two systems to have the right guidance, navigation, trajectory, all the control systems needed to hold their station-keeping in order to dock with each other, and then perform the fluid transfer. So all the fluid lines and all that’s associated with that, those systems, which we have seen in tests and held pieces of when we’ve been working with them at their site, we’ll get to see those actually in action on orbit.

Ars: Have there been any ground tests of these systems, whether it’s fluid couplers or docking systems? Can you talk about some of the ground tests that have gone into this development?

Lisa Watson-Morgan: Oh, absolutely. We’ve been working with them on ground tests for this past year. We’ve seen the ground testing and reviewed the data. Our team works with them on what we deem necessary for the various milestones. While the milestone contains proprietary (information), we work closely with them to ensure that it’s going to meet the intent, safety-wise as well as technically, of what we’re going to need to see. So they’ve done that.

Even more exciting, they have recently shipped some of their docking systems to the Johnson Space Center for testing with the Orion Lockheed Martin docking system, and that’s for Artemis III. Clearly, that’s how we’re going to receive the crew. So those are some exciting tests that we’ve been doing this past year as well that aren’t just focused on, say, the booster and the ship. There are a lot of crew systems that are being developed now. We’re in work with them on how we’re going to effectuate the crew manual control requirements that we have, so it’s been a great balance to see what the crew needs, given the size of the ship. That’s been a great set of work. We have crew office hours where the crew travels to Hawthorne [SpaceX headquarters in California] and works one-on-one with the different responsible engineers in the different technical disciplines to make sure that they understand not just little words on the paper from a requirement, but actually what this means, and then how systems can be operated.

Ars: For the docking system, Orion uses the NASA Docking System, and SpaceX brings its own design to bear on Starship?

Lisa Watson-Morgan: This is something that I think the Human Landing System has done exceptionally well. When we wrote our high-level set of requirements, we also wrote it with a bigger picture in mind—looked into the overall standards of how things are typically done, and we just said it has to be compliant with it. So it’s a docking standard compliance, and SpaceX clearly meets that. They certainly do have the Dragon heritage, of course, with the International Space Station. So, because of that, we have high confidence that they’re all going to work very well. Still, it’s important to go ahead and perform the ground testing and get as much of that out of the way as we can.

Lisa Watson-Morgan, NASA’s HLS program manager, is based at Marshall Space Flight Center in Huntsville, Alabama. Credit: NASA/Aubrey Gemignani

Ars: How far along is the development and design of the layout of the crew compartment at the top of Starship? Is it far along, or is it still in the conceptual phase? What can you say about that?

Lisa Watson-Morgan: It’s much further along there. We’ve had our environmental control and life support systems, whether it’s carbon dioxide monitoring or fans to make sure the air is circulating properly. We’ve been in a lot of work with SpaceX on the temperature. It’s… a large area (for the crew). The seats, making sure that the crew seats and the loads on that are appropriate. For all of that work, as the analysis work has been performed, the NASA team is reviewing it. They had a mock-up, actually, of some of their life support systems even as far back as eight-plus months ago. So there’s been a lot of progress on that.

Ars: Is SpaceX planning to use a touchscreen design for crew displays and controls, like they do with the Dragon spacecraft?

Lisa Watson-Morgan: We’re in talks about that, about what would be the best approach for the crew for the dynamic environment of landing.

Ars: I can imagine it is a pretty dynamic environment with those Raptor engines firing. It’s almost like a launch in reverse.

Lisa Watson-Morgan: Right. Those are some of the topics that get discussed in the crew office hours. That’s why it’s good to have the crew interacting directly, in addition to the different discipline leads, whether it’s structural, mechanical, propulsion, to have all those folks talking guidance and having control to say, “OK, well, when the system does this, here’s the mode we expect to see. Here’s the impact on the crew. And is this condition, or is the option space that we have on the table, appropriate for the next step, with respect to the displays.”

Ars: One of the big things SpaceX needs to prove out before going to the Moon with Starship is in-orbit propellant transfer. When do you see the ship-to-ship demonstration occurring?

Lisa Watson-Morgan: I see it occurring in ’25.

Ars: Anything more specific about the schedule for that?

Lisa Watson-Morgan: That’d be a question for SpaceX because they do have a number of flights that they’re performing commercially, for their maturity. We get the benefit of that. It’s actually a great partnership. I’ll tell you, it’s really good working with them on this, but they’d have to answer that question. I do foresee it happening in ’25.

Ars: What things do you need to see SpaceX accomplish before they’re ready for the refueling demo? I’m thinking of things like the second launch tower, potentially. Do they need to demonstrate a ship catch or anything like that before going for orbital refueling?

Lisa Watson-Morgan: I would say none of that’s required. You just kind of get down to, what are the basics? What are the basics that you need? So you need to be able to launch rapidly off the same pad, even. They’ve shown they can launch and catch within a matter of minutes. So that is good confidence there. The catching is part of their reuse strategy, which is more of their commercial approach, and not a NASA requirement. NASA reaps the benefit of it by good pricing as a result of their commercial model, but it is not a requirement that we have. So they could theoretically use the same pad to perform the propellant transfer and the long-duration flight, because all it requires is two launches, really, within a specified time period to where the two systems can meet in a planned trajectory or orbit to do the propellant transfer. So they could launch the first one, and then within a week or two or three, depending on what the concept of operations was that we thought we could achieve at that time, have the propellant transfer demo occur that way. So you don’t necessarily need two pads, but you do need more thermal characterization of the ship. I would say that is one of the areas (we need to see data on), and that is one of the reasons, I think, why they’re working so diligently on that.

Ars: You mentioned the long-duration flight demonstration. What does that entail?

Lisa Watson-Morgan: The simple objectives are to launch two different tankers or Starships. The Starship will eventually be a crewed system. Clearly, the ones that we’re talking about for the propellant transfer are not. It’s just to have the booster and Starship system launch, and within a few weeks, have another one launch, and have them rendezvous. They need to be able to find each other with their sensors. They need to be able to come close, very, very close, and they need to be able to dock together, connect, do the quick connect, and make sure they are able, then, to flow propellant and LOX (liquid oxygen) to another system. Then, we need to be able to measure the quantity of how much has gone over. And from that, then they need to safely undock and dispose.

Ars: So the long-duration flight demonstration is just part of what SpaceX needs to do in order to be ready for the propellant transfer demonstration?

Lisa Watson-Morgan: We call it long duration just because it’s not a 45-minute or an hour flight. Long duration, obviously, that’s a relative statement, but it’s a system that can stay up long enough to be able to find another Starship and perform those maneuvers and flow of fuel and LOX.

Ars: How much propellant will you transfer with this demonstration, and do you think you’ll get all the data you need in one demonstration, or will SpaceX need to try this several times?

Lisa Watson-Morgan: That’s something you can ask SpaceX (about how much propellant will be transferred). Clearly, I know, but there’s some sensitivity there. You’ve seen our requirements in our initial solicitation. We have thresholds and goals, meaning we want you to at least do this, but more is better, and that’s typically how we work almost everything. Working with commercial industry in these fixed-price contracts has worked exceptionally well, because when you have providers that are also wanting to explore commercially or trying to make a commercial system, they are interested in pushing more than what we would typically ask for, and so often we get that for an incredibly fair price.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



This PDF contains a playable copy of Doom

Here at Ars, we’re suckers for stories about hackers getting Doom running on everything from CAPTCHA robot checks and Windows’ notepad.exe to AI hallucinations and fluorescing gut bacteria. Despite all that experience, we were still thrown for a loop by a recent demonstration of Doom running in the usually static confines of a PDF file.

On the GitHub page for the quixotic project, coder ading2210 discusses how Adobe Acrobat included some robust support for JavaScript in the PDF file format. That JS coding support—which dates back decades and is still fully documented in Adobe’s official PDF specs—is currently implemented in a more limited, more secure form as part of PDFium, the built-in PDF-rendering engine of Chromium-based browsers.

In the past, hackers have used this little-known Adobe feature to code simple games like Breakout and Tetris into PDF documents. But ading2210 went further, recompiling a streamlined fork of Doom’s open source code using an old version of Emscripten that outputs optimized asm.js code.

With that code loaded, the Doom PDF can take inputs via the user typing in a designated text field and generate “video” output in the form of converted ASCII text fed into 200 individual text fields, each representing a horizontal line of the Doom display. The text in those fields is enough to simulate a six-color monochrome display at a “pretty poor but playable” 13 frames per second (about 80 ms per frame).
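The per-scanline text-field trick can be sketched in a few lines. This is a minimal illustration, not ading2210’s actual code: the grayscale ramp, field naming, and framebuffer format here are all assumptions.

```javascript
// Six "shades," darkest to lightest, to mimic the six-color monochrome output.
const RAMP = "#XO+. ";

// Map one row of 8-bit luminance values (0-255) to an ASCII string.
function rowToAscii(row) {
  return row
    .map((v) => RAMP[Math.min(RAMP.length - 1, Math.floor((v / 256) * RAMP.length))])
    .join("");
}

// Write each framebuffer row into one "display" slot. In Acrobat's
// JavaScript API this would be a text-field assignment, roughly
// this.getField("line_" + y).value = rowToAscii(frame[y]);
// here a plain array stands in for the 200 per-scanline fields.
function blitFrame(frame, fields) {
  for (let y = 0; y < frame.length; y++) {
    fields[y] = rowToAscii(frame[y]);
  }
}
```

Repeating that blit roughly every 80 ms is what yields the reported 13 frames per second.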



Ban on Chinese connected-car software is almost ready

However, the ban, as written, is not absolute. Companies can seek authorization to import software or hardware that would otherwise be outlawed, but the request would need to satisfy the US government and possibly be subject to conditions.

There are also exemptions for software for vehicles older than model year 2027 and hardware for vehicles older than model year 2030, including parts imported for warranty or repair work. (The government points out that retroactively applying the new rule would be a little pointless as any harm would already be done by vehicles that had compromised systems that predate it going into effect.)

And the final rule would only apply to light-duty vehicles. Anything with a gross vehicle weight rating of more than 10,000 lbs is exempt but will be dealt with in “a separate regulation tailored to the commercial sector in the coming months.”

Auto industry suppliers probably face the most disruption as a result of the new rule—just the presence of a Chinese-made module in a larger system is enough to trigger the import ban. But there should be little disruption to the US car market, at least for now.

Since the rules only go into effect from model year 2027, the few Chinese-made vehicles on sale in the US—models from Polestar, Volvo, Lincoln, and Buick—may remain on sale. However, Polestar’s Chinese ownership may prove somewhat of a sticking point compared to Ford and GM. Ars notes that lawyers representing Polestar met with the Commerce Department last week—we reached out to the automaker for a comment and will update this piece should we hear back.



Mastodon’s founder cedes control, refuses to become next Musk or Zuckerberg

And perhaps in a nod to Meta’s recent changes, Mastodon also vowed to “invest deeply in trust and safety” and ensure “everyone, especially marginalized communities,” feels “safe” on the platform.

To become a more user-focused paradise of “resilient, governable, open and safe digital spaces,” Mastodon is going to need a lot more funding. The blog called for donations to help fund an annual operating budget of $5.1 million (5 million euros) in 2025. That’s a massive leap from the $152,476 (149,400 euros) total operating expenses Mastodon reported in 2023.

Other social networks wary of EU regulations

Mastodon has decided to continue basing its operations in Europe, while still maintaining a separate US-based nonprofit entity as a “fundraising hub,” the blog said.

It will take time, Mastodon said, to “select the appropriate jurisdiction and structure in Europe” before Mastodon can then “determine which other (subsidiary) legal structures are needed to support operations and sustainability.”

While Mastodon is carefully getting re-settled as a nonprofit in Europe, Zuckerberg this week went on Joe Rogan’s podcast to call on Donald Trump to help US tech companies fight European Union fines, Politico reported.

Some critics suggest the recent policy changes on Meta platforms were intended to win Trump’s favor, partly to get Trump on Meta’s side in the fight against the EU’s strict digital laws. According to France24, Musk’s recent combativeness with EU officials suggests Musk might team up with Zuckerberg in that fight (unlike that cage fight pitting the wealthy tech titans against each other that never happened).

Experts told France24 that EU officials may “perhaps wrongly” already be fearful about ruffling Trump’s feathers by targeting his tech allies and would likely need to use the “full legal arsenal” of EU digital laws to “stand up to Big Tech” once Trump’s next term starts.

As Big Tech prepares to continue battling EU regulators, Mastodon appears to be taking a different route, laying roots in Europe and “establishing the appropriate governance and leadership frameworks that reflect the nature and purpose of Mastodon as a whole” and “responsibly serve the community,” its blog said.

“Our core mission remains the same: to create the tools and digital spaces where people can build authentic, constructive online communities free from ads, data exploitation, manipulative algorithms, or corporate monopolies,” Mastodon’s blog said.



New York starts enforcing $15 broadband law that ISPs tried to kill

1.7 million New York households lost FCC discount

The order said quick implementation of the law is important because of “developments at the federal level impacting the affordability of broadband service.” About 1.7 million New York households, and 23 million nationwide, used to receive a monthly discount through an FCC program that expired in mid-2024 after Congress failed to provide more funding.

“For this reason, consumer benefit programs assisting low-income households—such as the ABA—are even more critical to ensure that the digital divide for low-income New Yorkers is being addressed,” the New York order said.

New York ISPs can obtain an exemption from the low-cost broadband law if they “provide service to no more than 20,000 households and the Commission determines that compliance with such requirements would result in ‘unreasonable or unsustainable financial impact on the broadband service provider,'” the order said.

Over 40 small ISPs filed for exemptions in 2021 before the law was blocked by a judge. Those ISPs and potentially others will be given one-month exemptions if they file paperwork by Wednesday stating that they meet the subscriber threshold. ISPs must submit detailed financial information by February 15 to obtain longer-term exemptions.

“All other ISPs (i.e., those with more than 20,000 subscribers) must comply with the ABA by January 15, 2025,” the order said. Failure to comply can be punished with civil penalties of up to $1,000 per violation. The law applies to wireline, fixed wireless, and satellite providers.

Charter Spectrum currently advertises a $25-per-month plan with 50Mbps speeds for low-income households. Comcast and Optimum have $15 plans. Verizon has a low-income program reducing the cost of some home Internet plans to as low as $20 a month.

Disclosure: The Advance/Newhouse Partnership, which owns 12.3 percent of Charter, is part of Advance Publications, which also owns Ars Technica parent Condé Nast.



Report: After many leaks, Switch 2 announcement could come “this week”

Nintendo may be getting ready to make its Switch 2 console official. According to “industry whispers” collected by Eurogamer, as well as reporting from The Verge’s Tom Warren, the Switch 2 could be formally announced sometime this week. Eurogamer suggests the reveal is scheduled for this Thursday, January 16.

The reporting also suggests that the reveal will focus mostly on the console’s hardware design, with another game-centered announcement coming later. Eurogamer reports that the console won’t be ready to launch until April; this would be similar to Nintendo’s strategy for the original Switch, which was announced in mid-January 2017 but not launched until March.

Many things about the Switch 2’s physical hardware design have been thoroughly leaked at this point, thanks mostly to accessory makers who have been showing off their upcoming cases. Accessory maker Genki was at CES last week with a 3D-printed replica of the console based on the real thing, suggesting a much larger but still familiar-looking console with a design and button layout similar to the current Switch.

On the inside, the console is said to sport a new Nvidia-designed Arm processor with a much more powerful GPU and more RAM than the current Switch. Eurogamer reports that the chip, dubbed “T239,” includes 1,536 CUDA cores based on the Ampere architecture, the same used in 2020’s GeForce RTX 30-series graphics cards on the PC.



Supreme Court lets Hawaii sue oil companies over climate change effects

On Monday, the Supreme Court declined to decide whether to block lawsuits that Honolulu filed to seek billions in damages from oil and gas companies over allegedly deceptive marketing campaigns that hid the effects of climate change.

Now those lawsuits can proceed, surely frustrating the fossil fuel industry, which felt that SCOTUS should have weighed in on this key “recurring question of extraordinary importance to the energy industry” raised in lawsuits seeking similarly high damages in several states, CBS News reported.

Defendants Sunoco and Shell, along with 15 other energy companies, had asked the court to intervene and stop the Hawaii lawsuits from proceeding. They had hoped to move the cases out of Hawaii state courts by arguing that interstate pollution is governed by federal law and the Clean Air Act.

The oil and gas companies continue to argue that greenhouse gas emissions “flow from billions of daily choices, over more than a century, by governments, companies, and individuals about what types of fuels to use, and how to use them.” Because of this, the companies believe Honolulu was wrong to demand damages based on the “cumulative effect of worldwide emissions leading to global climate change.”

“In these cases, state and local governments are attempting to assert control over the nation’s energy policies by holding energy companies liable for worldwide conduct in ways that starkly conflict with the policies and priorities of the federal government,” oil and gas companies unsuccessfully argued in their attempt to persuade SCOTUS to grant review. “That flouts this court’s precedents and basic principles of federalism, and the court should put a stop to it.”



The 8 most interesting PC monitors from CES 2025


Monitors worth monitoring

Here are upcoming computer screens with features that weren’t around last year.

Yes, that’s two monitors in a suitcase.


Plenty of computer monitors made debuts at the Consumer Electronics Show (CES) in Las Vegas this year, but many of the updates were minor enough that they could easily have been part of 2024’s show.

But some brought new and interesting features to the table for 2025—in this article, we’ll tell you all about them.

LG’s 6K monitor

Pixel addicts are always right at home at CES, and the most interesting high-resolution computer monitor to come out of this year’s show is the LG UltraFine 6K Monitor (model 32U990A).

People seeking more than 3840×2160 resolution have limited options, and they’re all rather expensive (looking at you, Apple Pro Display XDR). LG’s 6K monitor means there’s another option for professionals needing extra pixels for things like development, engineering, and creative work. And LG’s 6144×3456, 32-inch display has extra oomph thanks to something no other 6K monitor has: Thunderbolt 5.

This is the only image LG provided for the monitor. Credit: LG

LG hasn’t confirmed the refresh rate of its 6K monitor, so we don’t know how much bandwidth it needs. But it’s possible that pairing the UltraFine with a Thunderbolt 5 PC could trigger Bandwidth Boost, a Thunderbolt 5 feature that automatically increases bandwidth from 80Gbps to 120Gbps. For comparison, Thunderbolt 4 maxes out at 40Gbps. Thunderbolt 5 also requires 140 W power delivery and maxes out at 240 W. That’s a notable bump from Thunderbolt 4’s 100–140 W.
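A back-of-the-envelope calculation shows why the refresh rate matters here. The 10-bit color depth (30 bits per pixel) and roughly 10 percent blanking overhead below are assumptions for illustration, not LG specifications, and real DisplayPort link encoding adds further overhead.

```javascript
// Rough uncompressed bandwidth estimate for a video signal, in Gbps.
// bitsPerPixel = 30 assumes 10-bit RGB; overhead approximates blanking.
function bandwidthGbps(width, height, hz, bitsPerPixel = 30, overhead = 1.1) {
  return (width * height * hz * bitsPerPixel * overhead) / 1e9;
}

// 6144x3456 at 60 Hz works out to roughly 42 Gbps -- already past
// Thunderbolt 4's 40 Gbps without Display Stream Compression, but
// comfortable within Thunderbolt 5's baseline 80 Gbps. Doubling to
// 120 Hz lands around 84 Gbps, the territory where the 120 Gbps
// Bandwidth Boost mode would come into play.
```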

Considering that Apple’s only 6K monitor has Thunderbolt 3, Thunderbolt 5 is a differentiator. With this capability, the LG UltraFine is ironically better equipped for use with the new MacBook Pros and Mac Mini (which all have Thunderbolt 5) than Apple’s own monitors. LG may be aware of this, as the 32U990A’s aesthetic could be considered very Apple-like.

Inside the 32U990A’s silver chassis is a Nano IPS panel. In recent years, LG has advertised its Nano IPS panels as having “nanometer-sized particles” applied to their LED backlight to absorb “excess, unnecessary light wavelengths” for “richer color expression.” LG’s 6K monitor claims to cover 98 percent of DCI-P3 and 99.5 percent of Adobe RGB. IPS Black monitors, meanwhile, have higher contrast ratios (up to 3,000:1) than standard IPS panels. However, LG has released Nano IPS monitors with 2,000:1 contrast, the same contrast ratio as Dell’s 6K IPS Black monitor.

LG hasn’t shared other details, like price or a release date. But the monitor may cost more than Dell’s Thunderbolt 4-equipped monitor, which is currently $2,480.

Brelyon’s multi-depth monitor

Brelyon Ultra Reality Extend.

Someone from CNET using the Ultra Reality Extend. Credit: CNET/YouTube

Brelyon is headquartered in San Mateo, California, and was founded by scientists and executives from MIT, IMAX, UCF, and DARPA. It’s been selling display technology for commercial and defense applications since 2022. At CES, the company unveiled the Ultra Reality Extend, describing it as an “immersive display line that renders virtual images in multiple depths.”

“As the first commercial multi-focal monitor, the Extend model offers multi-depth programmability for information overlay, allowing users to see images from 0.7 m to as far as 2.5 m of depth virtually rendered behind the monitor; organizing various data streams at different depth layers, or triggering focal cues to induce an ultra immersive experience akin to looking out through a window,” Brelyon’s announcement said.

Brelyon says the monitor runs 4K at 60 Hz with 1 bit of monocular depth for an 8K effect. The monitor includes “OLED-based curved 2D virtual images, with the largest stretching to 122 inches and extending 2.5 meters deep, viewable through a 30-inch frame,” according to the firm’s announcement. The closer you sit, the greater the field of view you get.

The Extend leverages “new GPU capabilities to process light and video signals inside our display platforms,” Brelyon CEO Barmak Heshmat said in a statement this week. He added: “We are thinking beyond headsets and glasses, where we can leverage GPU capabilities to do real-time driving of higher-bandwidth display interfaces.”

Brelyon says this was captured from the Extend, with its camera lens focus changing from 70 cm to 250 cm. Credit: Brelyon

Advancements in AI-based video processing, as well as other software advancements and hardware improvements, purportedly enable the Extend to upscale lower-dimension streams to multiple, higher-dimension ones. Brelyon describes its product as a “generative display system” that uses AI computation and optics to assign different depth values to content in real time for rendering images and information overlays.

The idea of a virtual monitor that surpasses the field of view of typical desktop monitors while allowing users to see the real world isn’t new. Tech firms (including many at CES) usually try to accomplish this through AR glasses. But head-mounted displays still struggle with problems like heat, weight, computing resources, battery, and aesthetics.

Brelyon’s monitor seemingly demoed well at CES. Sam Rutherford, a senior writer at Engadget, watched a clip from the Marvel’s Spider-Man video game on the Extend and said that “trees and light poles whipping past in my face felt so real I started to flinch subconsciously.” He added that the monitor separated “different layers of the content to make snow in the foreground look blurry as it whipped across the screen, while characters in the distance” still looked sharp.

The monitor costs $5,000 to $8,000 depending on how you’ll use it and whether you have other business with Brelyon, per Engadget, and CES is one of the few places where people could actually see the display in action.

Samsung’s 3D monitor

Samsung Odyssey 3D

Samsung’s depiction of the 3D effect of its 3D PC monitor. Credit: Samsung

It’s 2025, and tech companies are still trying to convince people to bring a 3D display into their homes. This week, Samsung took its first swing at 3D screens since 2009 with the Odyssey 3D monitor.

In lieu of 3D glasses, the Odyssey 3D achieves its 3D effect with a lenticular lens “attached to the front of the panel and its front stereo camera,” Samsung says, as well as eye tracking and view mapping. Differing from other recent 3D monitors, the Odyssey 3D claims to be able to make 2D content look three-dimensional even if that content doesn’t officially support 3D.

You can find more information in our initial coverage of Samsung’s Odyssey 3D, but don’t bet on finding 3D monitors in many people’s homes soon. The technology for quality 3D displays that work without glasses has been around for years but still has never taken off.

Dell’s OLED productivity monitor

With improvements in burn-in, availability, and brightness, finding OLED monitors today is much easier than it was two years ago. But a lot of the OLED monitors released recently target gamers with features like high refresh rates, ultrawide panels, and RGB. These features are unneeded or unwanted by non-gamers but contribute to OLED monitors’ already high pricing. Numerous smaller OLED monitors were announced at CES, with 27-inch, 4K models being a popular addition. Most of them are still high-refresh gaming monitors, though.

The Dell 32-inch QD-OLED, on the other hand, targets “play, school, and work,” Dell’s announcement says. And its naming (based on a new naming convention Dell announced this week that kills XPS and other longstanding branding) signals that this is a mid-tier monitor from Dell’s entry-level lineup.

Dell 32-inch QD-OLED

OLED for normies. Credit: Dell

The monitor’s specs, which include a 120 Hz refresh rate, AMD FreeSync Premium, and USB-C power delivery at up to 90 W, make it a good fit for pairing with many mainstream laptops.

Dell also says this is the first QD-OLED with spatial audio, which uses head tracking to alter audio coming from the monitor’s five 5 W speakers. This is a feature we’ve seen before, but not on an OLED monitor.

For professionals and/or Mac users who prefer the sleek looks, reputation, higher power delivery, and I/O hubs associated with Dell’s popular UltraSharp line, Dell made two more notable announcements at CES: an UltraSharp 32 4K Thunderbolt Hub Monitor (U3225QE) coming out on February 25 for $950 and an UltraSharp 27 4K Thunderbolt Hub Monitor (U2725QE) coming out the same day for $700.

The suitcase monitors

Before we get into the Base Case, please note that this product has no release date because its creators plan to go to market via crowdfunding. Base Case says it will launch its Indiegogo campaign next month, but even then, we don’t know if the project will be funded, if any final product will work as advertised, or if customers will receive orders in a timely fashion. Still, this is one of the most unusual monitors at CES, and it’s worth discussing.

The Base Case is shaped like a 24x14x16.5-inch rolling suitcase, but when you open it up, you’ll find two 24-inch monitors for connecting to a laptop. Each screen reportedly has a 1920×1080 resolution, a 75 Hz refresh rate, and a max brightness claim of 350 nits. Base Case is also advertising PC and Mac support (through DisplayLink), as well as HDMI, USB-C, USB-A, Thunderbolt, and Ethernet ports. Telescoping legs allow the case to rise 10 inches so the display can sit closer to eye level.

Ultimately, the Base Case would see owners lug around a 20-pound product for the ability to quickly create a dual-monitor setup equipped with a healthy amount of I/O. Tom’s Guide demoed a prototype at CES and reported that the monitors took “seconds to set up.”

In case you’re worried that the Base Case prioritizes displays over storage, note that its makers plan on adding a front pocket to the suitcase that can fit a laptop. The pocket wasn’t on the prototype Tom’s Guide saw, though.

Again, this is far from a finalized product, but Base Case has alluded to a $2,400 starting price. For comparison to other briefcase-locked displays—and yes, doing this is possible—LG’s StanbyME Go (27LX5QKNA) tablet in a briefcase currently has a $1,200 MSRP.

Corsair’s PC-mountable touchscreen

A promotional image of the touchscreen.

If the Base Case is on the heftier side of portable monitors, Corsair’s Xeneon Edge is certainly on the minute side. The 14.5-inch LCD touchscreen isn’t meant to be a primary display, though. Corsair built it as a secondary screen for providing quick information, like the song your computer is playing, the weather, the time, and calendar events. You could also use the 2560×720 pixels to display system information, like component usage and temperatures.

Corsair says its iCue software will be able to provide system information on the Xeneon, but because the Xeneon Edge works like a regular monitor, you could (and likely would prefer to) use your own methods. Still, the Xeneon Edge stands out from other small, touchscreen PC monitors with its clean UI that can succinctly communicate a lot of information on the tiny display at once.

Specs-wise, this is a 60 Hz IPS panel with 5-point capacitive touch. Corsair says the monitor can hit 350 nits of brightness.

You can connect the Xeneon Edge to a computer via USB-C (DisplayPort Alt mode) or HDMI. There are also screw holes, so PC builders could install it via a 360 mm radiator mounting point inside their PC case.

Alternatively, Corsair recommends attaching the touchscreen to the outside of a PC case through the monitor’s 14 integrated magnets. Corsair said in a blog post that the “magnets are underneath the plastic casing so the metal surface you stick it to won’t get scratched.” Or, in traditional portable monitor style, the Xeneon Edge could also just sit on a desk with its included stand.

Corsair Xeneon Edge

Corsair demos different ways the screen could attach to a case. Credit: TechPowerUp/YouTube

Corsair plans to release the Xeneon Edge in Q2. Expected pricing is “around $249,” Tom’s Hardware reported.

MSI’s side panel display

Why attach a monitor to your PC case when you can turn your PC case into a monitor instead?

MSI says that the touchscreen embedded into the side panel of this year’s MEG Vision X AI 2nd gaming desktop can work like a regular computer monitor. Similar to Corsair’s monitor, MSI’s display has a corresponding app that can show system information and other customizations, which you can toggle with controls on the front of the case, PCMag reported.

MSI used an IPS panel with 1920×1080 resolution for the display, which also has an integrated mic and speaker. MSI says “electric vehicle control centers” inspired the design. We’ve seen similar PC cases, like iBuyPower’s more translucent side panel display and the touchscreen on Hyte’s pentagonal PC case, before. But MSI is bringing the design to a more mainstream form factor by including it in a prebuilt desktop, potentially opening the door for future touchscreen-equipped desktops.

Considering the various locations people place their desktops and the different angles at which they may try to look at this screen, I’m curious about the monitor’s viewing angles and brightness. IPS seems like a good choice since it tends to have strong image quality when viewed from different angles. A video PCMag shot from the show floor shows images on the monitor appearing visible and lively:

Hands on with MSI’s MEG Vision X AI Desktop: Now, your PC tower’s a monitor, too.

World’s fastest monitor

There’s a competitive air at CES that lends itself to tech brands trying to one-up each other on spec sheets. Some of the most heated competition concerns monitor refresh rates; for years, we’ve been meeting the new world’s fastest monitor at CES. This year is no different.

The brand behind the monitor is Koorui, a three-year-old Chinese firm whose website currently lists monitors and keyboards. Koorui hasn’t confirmed when it will make its 750 Hz display available, where it will sell it, or what it will cost. That should bring some skepticism about this product actually arriving for purchase in the US. However, Koorui did bring the display to the CES show floor.

The speedy display had a refresh rate test running at CES, and according to several videos we’ve seen from attendees, the monitor appeared to consistently hit the 750 Hz mark.


For those keeping track, high-end gaming monitors—namely ones targeting professional gamers—hit 360 Hz in 2020. Koorui’s announcement means max monitor speeds have increased 108.3 percent in five years.
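The arithmetic behind that figure, plus the frame time a 750 Hz panel implies, is a quick sketch:

```python
old_hz, new_hz = 360, 750

# percentage increase from the 2020 state of the art to Koorui's claim
growth_pct = (new_hz - old_hz) / old_hz * 100  # ~108.3 percent

# how long each frame stays on screen at 750 Hz
frame_time_ms = 1000 / new_hz  # ~1.33 ms per frame

print(f"{growth_pct:.1f}% faster; {frame_time_ms:.2f} ms per frame")
```

That 1.33 ms frame time is also why demonstrating the monitor is hard: the attached PC has to render a new frame roughly every millisecond to saturate it.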

One CES attendee noticed, however, that the monitor wasn’t showing any gameplay. This could be due to the graphical and computing prowess needed to demonstrate the benefits of a 750 Hz monitor. A system capable of 750 frames per second would give people a chance to see if they could detect improved motion resolution but would also be very expensive. It’s also possible that the monitor Koorui had on display wasn’t ready for that level of scrutiny yet.

Like many eSports monitors, the Koorui is 24.5 inches, with a resolution of 1920×1080. Perhaps more interesting than Koorui taking the lead in the perennial race for higher refresh rates is the TN monitor’s claimed color capabilities. TN monitors aren’t as popular as they were years ago, but OEMs still employ them sometimes for speed.

They tend to be less colorful than IPS and VA monitors, though. Most offer sRGB color gamuts instead of covering the larger DCI-P3 color space. Asus’ 540 Hz ROG Swift Pro PG248QP, for example, is a TN monitor claiming 125 percent sRGB coverage. Koorui’s monitor claims to cover 95 percent of DCI-P3, due to the use of a quantum dot film. Again, there’s a lot that prospective shoppers should confirm about this monitor if it becomes available.

For those seeking the fastest monitors with more concrete release plans, several companies announced 600 Hz monitors coming out this year. Acer, for example, has a 600 Hz Nitro XV240 F6 (also a TN monitor) that it plans to release in North America this quarter at a starting price of $600.

Photo of Scharon Harding

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

The 8 most interesting PC monitors from CES 2025


New Glenn rocket is at the launch pad, waiting for calm seas to land

COCOA BEACH, Fla.—As it so often does in the final days before the debut of a new rocket, it all comes down to weather. Accordingly, Blue Origin is only awaiting clear skies and fair seas for its massive New Glenn vehicle to lift off from Florida.

After the company completed integration of the rocket this week and rolled the super heavy lift vehicle to its launch site at Cape Canaveral, the focus turned toward the weather. Conditions at Cape Canaveral Space Force Station have been favorable during the early morning launch windows available to the rocket, but there have been complications offshore.

That’s because Blue Origin aims to recover the first stage of the New Glenn rocket, and sea states in the Atlantic Ocean have been unsuitable for an initial attempt to catch the first stage booster on a drone ship. The company has already waved off one launch attempt set for 1 am ET (06:00 UTC) on Friday, January 10.

Conditions have improved a bit since then, but on Saturday evening the company’s launch officials canceled a second attempt planned for 1 am ET on Sunday. The new launch time is now 1 am ET on Monday, January 13, when better sea states are expected. There is a three-hour launch window. The company will provide a webcast of proceedings at this link beginning one hour before liftoff.

Seeking a nominal flight

According to a mission timeline shared by Blue Origin on Saturday, it will take several hours to fuel the New Glenn rocket. Second stage hydrogen loading will begin 4.5 hours before liftoff, followed by the booster stage and second stage liquid oxygen at 4 hours, and methane for the booster stage at 3.5 hours to go. Fueling should be complete about an hour before liftoff.



Everyone agrees: 2024 the hottest year since the thermometer was invented


An exceptionally hot outlier, 2024 means the streak of hottest years goes to 11.

With very few and very small exceptions, 2024 was unusually hot across the globe. Credit: Copernicus

Over the last 24 hours or so, the major organizations that keep track of global temperatures have released figures for 2024, and all of them agree: 2024 was the warmest year yet recorded, joining 2023 as an unusual outlier in terms of how rapidly things heated up. At least two of the organizations, the European Union’s Copernicus and Berkeley Earth, place the year at about 1.6° C above pre-industrial temperatures, marking the first time that the Paris Agreement goal of limiting warming to 1.5° has been exceeded.

NASA and the National Oceanic and Atmospheric Administration both place the mark at slightly below 1.5° C over pre-industrial temperatures (as defined by the 1850–1900 average). However, that difference largely reflects the uncertainties in measuring temperatures during that period rather than disagreement over 2024.

It’s hot everywhere

2023 had set a temperature record largely due to a switch to El Niño conditions midway through the year, which made the second half of the year exceptionally hot. It takes some time for that heat to make its way from the ocean into the atmosphere, so the streak of warm months continued into 2024, even as the Pacific switched into its cooler La Niña mode.

While El Niños are regular events, this one had an outsized impact because it was accompanied by unusually warm temperatures outside the Pacific, including record high temperatures in the Atlantic and unusual warmth in the Indian Ocean. Land temperatures reflect this widespread warmth, with elevated temperatures on all continents. Berkeley Earth estimates that 104 countries registered 2024 as the warmest on record, meaning 3.3 billion people felt the hottest average temperatures they had ever experienced.

Different organizations use slightly different methods to calculate the global temperature and have different baselines. For example, Copernicus puts 2024 at 0.72° C above a baseline that will be familiar to many people since they were alive for it: 1991 to 2020. In contrast, NASA and NOAA use a baseline that covers the entirety of the last century, which is substantially cooler overall. Relative to that baseline, 2024 is 1.29° C warmer.
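Converting an anomaly from one baseline to another just means adding the offset between the two baselines' mean temperatures. A sketch using the figures in this article; the offsets below are implied by the article's own numbers (1.60 − 0.72 and 1.47 − 1.29), not published constants:

```python
def rebase(anomaly_c, baseline_offset_c):
    """Convert a temperature anomaly to another baseline by adding the
    difference between the two baselines' mean temperatures."""
    return anomaly_c + baseline_offset_c

# Copernicus: 0.72 C above 1991-2020; that baseline is ~0.88 C above 1850-1900
print(rebase(0.72, 0.88))  # ~1.60 C above pre-industrial

# NASA/NOAA: 1.29 C above the 20th-century mean, itself ~0.18 C above 1850-1900
print(rebase(1.29, 0.18))  # ~1.47 C above pre-industrial
```

The spread between the final numbers comes almost entirely from the uncertainty in those baseline offsets, not from disagreement about 2024 itself.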

Lining up the baselines shows that these different services largely agree with each other; most of the differences stem from measurement uncertainties, with the rest accounted for by slightly different methods of handling things like areas with sparse data.

Describing the details of 2024, however, doesn’t really capture just how exceptional the warmth of the last two years has been. Starting in around 1970, there’s been a roughly linear increase in temperature driven by greenhouse gas emissions, despite many individual years that were warmer or cooler than the trend. The last two years have been extreme outliers from this trend. The last time there was a single comparable year to 2024 was back in the 1940s. The last time there were two consecutive years like this was in 1878.


Relative to the five-year temperature average, 2024 is an exceptionally large excursion. Credit: Copernicus

“These were during the ‘Great Drought’ of 1875 to 1878, when it is estimated that around 50 million people died in India, China, and parts of Africa and South America,” the EU’s Copernicus service notes. Despite many climate-driven disasters, the world at least avoided a similar experience in 2023-24.

Berkeley Earth provides a slightly different way of looking at it, comparing each year since 1970 with the amount of warming we’d expect from the cumulative greenhouse gas emissions.


Relative to the expected warming from greenhouse gasses, 2024 represents a large departure. Credit: Berkeley Earth

These show that, given year-to-year variations in the climate system, warming has closely tracked expectations over five decades. 2023 and 2024 mark a dramatic departure from that track, although it comes at the end of a decade where most years were above the trend line. Berkeley Earth estimates that there’s just a 1 in 100 chance of that occurring due to the climate’s internal variability.

Is this a new trend?

The big question is whether 2024 is an exception and we should expect things to fall back to the trend that’s dominated since the 1970s, or whether it marks a departure from the climate’s recent behavior. And that’s something we don’t have a great answer to.

If you take away the influence of recent greenhouse gas emissions and El Niño, you can focus on other potential factors. These include a slight increase expected due to the solar cycle approaching its maximum activity. But, beyond that, most of the other factors are uncertain. The Hunga Tonga eruption put lots of water vapor into the stratosphere, but the estimated effects range from slight warming to cooling equivalent to a strong La Niña. Reductions in pollution from shipping are expected to contribute to warming, but the amount is debated.

There is evidence that a decrease in cloud cover has allowed more sunlight to be absorbed by the Earth, contributing to the planet’s warming. But clouds are typically a response to other factors that influence the climate, such as the amount of water vapor in the atmosphere and the aerosols present to seed water droplets.

It’s possible that a factor that we missed is driving the changes in cloud cover or that 2024 just saw the chaotic nature of the atmosphere result in less cloud cover. Alternatively, we may have crossed a warming tipping point, where the warmth of the atmosphere makes cloud formation less likely. Knowing that will be critical going forward, but we simply don’t have a good answer right now.

Climate goals

There’s an equally unsatisfying answer to what this means for our chance of hitting climate goals. The stretch goal of the Paris Agreement is to limit warming to 1.5° C, because it leads to significantly less severe impacts than the primary, 2.0° target. That’s relative to pre-industrial temperatures, which are defined using the 1850–1900 period, the earliest time where temperature records allow a reconstruction of the global temperature.

Unfortunately, all the organizations that handle global temperatures have some differences in the analysis methods and data used. Given recent data, these differences result in very small divergences in the estimated global temperatures. But with the far larger uncertainties in the 1850–1900 data, they tend to diverge more dramatically. As a result, each organization has a different baseline, and different anomalies relative to that.

As a result, Berkeley Earth registers 2024 as being 1.62° C above preindustrial temperatures, and Copernicus 1.60° C. In contrast, NASA and NOAA place it just under 1.5° C (1.47° and 1.46°, respectively). NASA’s Gavin Schmidt said this is “almost entirely due to the [sea surface temperature] data set being used” in constructing the temperature record.

There is, however, consensus that this isn’t especially meaningful on its own. There’s a good chance that temperatures will drop below the 1.5° mark on all the data sets within the next few years. We’ll want to see temperatures consistently exceed that mark for over a decade before we consider that we’ve passed the milestone.

That said, given that carbon emissions have barely budged in recent years, there’s little doubt that we will eventually end up clearly passing that limit (Berkeley Earth is essentially treating it as exceeded already). But there’s widespread agreement that each increment between 1.5° and 2.0° will likely increase the consequences of climate change, and any continuing emissions will make it harder to bring things back under that target in the future through methods like carbon capture and storage.

So, while we may have committed ourselves to exceed one of our major climate targets, that shouldn’t be viewed as a reason to stop trying to limit greenhouse gas emissions.

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



On Dwarkesh Patel’s 4th Podcast With Tyler Cowen

Dwarkesh Patel again interviewed Tyler Cowen, largely about AI, so here we go.

Note that I take it as a given that the entire discussion is taking place in some form of an ‘AI Fizzle’ and ‘economic normal’ world, where AI does not advance too much in capability from its current form, in meaningful senses, and we do not get superintelligence [because of reasons]. It’s still massive additional progress by the standards of any other technology, but painfully slow by the standards of the ‘AGI is coming soon’ crowd.

That’s the only way I can make the discussion make at least some sense, with Tyler Cowen predicting 0.5%/year additional RGDP growth from AI. That level of capabilities progress is a possible world, although the various elements stated here seem like they are sometimes from different possible worlds.

I note that this conversation was recorded prior to o3 and all the year end releases. So his baseline estimate of RGDP growth and AI impacts has likely increased modestly.

I go very extensively into the first section on economic growth and AI. After that, the podcast becomes classic Tyler Cowen and is interesting throughout, but I will be relatively sparing in my notes in other areas, and am skipping over many points.

This is a speed premium and ‘low effort’ post, in the sense that this is mostly me writing down my reactions and counterarguments in real time, similar to how one would do a podcast. It is high effort in that I spent several hours listening to, thinking about and responding to the first fifteen minutes of a podcast.

As a convention: When I’m in the numbered sections, I’m reporting what was said. When I’m in the secondary sections, I’m offering (extensive) commentary. Timestamps are from the Twitter version.

[EDIT: In Tyler’s link, he correctly points out a confusion in government spending vs. consumption, which I believe is fixed now. As for his comment about market evidence for the doomer position, I’ve given my answer before, and I would assert the market provides substantial evidence neither in favor or against anything but the most extreme of doomer positions, as in extreme in a way I have literally never heard one person assert, once you control for its estimate of AI capabilities (where it does indeed offer us evidence, and I’m saying that it’s too pessimistic). We agree there is no substantial and meaningful ‘peer-reviewed’ literature on the subject, in the way that Tyler is pointing.]

They recorded this at the Progress Studies conference, and Tyler Cowen has a very strongly held view that AI won’t accelerate RGDP growth much that Dwarkesh clearly does not agree with, so Dwarkesh Patel’s main thrust is to try comparisons and arguments and intuition pumps to challenge Tyler. Tyler, as he always does, has a ready response to everything, whether or not it addresses the point of the question.

  1. (1:00) Dwarkesh doesn’t waste any time and starts off asking why we won’t get explosive economic growth. Tyler’s first answer is cost disease, that as AI works in some parts of the economy costs in other areas go up.

    1. That’s true in relative terms for obvious reasons, but in absolute terms or real resource terms the opposite should be true, even if we accept the implied premise that AI won’t simply do everything anyway. This should drive down labor costs and free up valuable human capital. It should aid in availability of many other inputs. It makes almost any knowledge acquisition, strategic decision or analysis, data analysis or gathering, and many other universal tasks vastly better.

    2. Tyler then answers this directly when asked at (2:10) by saying cost disease is not about employees per se, it’s more general, so he’s presumably conceding the point about labor costs, saying that non-intelligence inputs that can’t be automated will bind more and thus go up in price. I mean, yes, in the sense that we have higher value uses for them, but so what?

    3. So yes, you can narrowly define particular subareas of some areas as bottlenecks and say that they cannot grow, and perhaps they can even be large areas if we impose costlier bottlenecks via regulation. But that still leaves lots of room for very large economic growth for a while – the issue can’t bind you otherwise, the math doesn’t work.
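The "math doesn't work" point can be made concrete: to first order, aggregate real growth is an output-share-weighted average of sector growth rates, so stagnant bottleneck sectors drag on growth in proportion to their share but cannot zero it out. A toy illustration (the shares and growth rates are made up for the example):

```python
def aggregate_growth(shares, growth_rates):
    """First-order approximation: GDP growth as an output-share-weighted
    average of sector growth rates. Shares shift over time, so this only
    holds period by period."""
    assert abs(sum(shares) - 1.0) < 1e-9
    return sum(s * g for s, g in zip(shares, growth_rates))

# Even if half the economy is completely stagnant, the other half
# growing 20%/year still yields ~10%/year aggregate growth.
print(aggregate_growth([0.5, 0.5], [0.20, 0.0]))
```

For bottlenecks to hold aggregate growth near baseline, the stagnant sectors would have to be nearly the entire economy, which is what the share-by-share accounting below disputes.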

  2. Tyler says government consumption [EDIT: I originally misheard this as spending, he corrected me, I thank him] at 18% of GDP (government spending is 38% but a lot of that is duplicative and a lot isn’t consumption), health care at 20%, education is 6% (he says 6-7%, Claude says 6%), the nonprofit sector (Claude says 5.6%) and says together that is half of the economy. Okay, sure, let’s tackle that.

    1. Healthcare is already seeing substantial gains from AI even at current levels. There are claims that up to 49% of doctor time is various forms of EMR and desk work that AIs could reduce greatly, certainly at least ~25%. AI can directly substitute for much of what doctors do in terms of advising patients, and this is already happening where the future is distributed. AI substantially improves medical diagnosis and decision making. AI substantially accelerates drug discovery and R&D, will aid in patient adherence and monitoring, and so on. And again, that’s without further capability gains. Insurance companies doubtless will embrace AI at every level. Need I go on here?

    2. Government spending at all levels is actually about 38% of GDP, but that’s cheating, only ~11% is non-duplicative and not transfers, interest (which aren’t relevant) or R&D (I’m assuming R&D would get a lot more productive).

    3. The biggest area is transfers. AI can’t improve the efficiency of transfers too much, but it also can’t be a bottleneck outside of transaction and administrative costs, which obviously AI can greatly reduce and are not that large to begin with.

    4. The second biggest area is provision of healthcare, which we’re already counting, so that’s duplicative. Third is education, which we count in the next section.

    5. Fourth is national defense, where efficiency per dollar or employee should get vastly better, to the point where failure to be at the AI frontier is a clear national security risk.

    6. Fifth is interest on the debt, which again doesn’t count, and also we wouldn’t care about if GDP was growing rapidly.

    7. And so on. What’s left to form the last 11% or so? Public safety, transportation and infrastructure, government administration, environment and natural resources and various smaller other programs. What happens here is a policy choice. We are already seeing signs of improvement in government administration (~2% of the 11%), the other 9% might plausibly stall to the extent we decide to do an epic fail.

    8. Education and academia is already being transformed by AI, in the sense of actually learning things, among anyone who is willing to use it. And it’s rolling through academia as we speak, in terms of things like homework assignments, in ways that will force change. So whether you think growth is possible depends on your model of education. If it’s mostly a signaling model then you should see a decline in education investment since the signals will decline in value and AI creates the opportunity for better more efficient signals, but you can argue that this could continue to be a large time and dollar tax on many of us.

    9. Nonprofits are about 20%-25% education and ~50% health care related, which would double count, so the remainder is only ~1.3% of GDP. This also seems like a dig at nonprofits and their inability to adapt to change, but why would we assume nonprofits can’t benefit from AI?

    10. What’s weird is that I would point to different areas that have the most important anticipated bottlenecks to growth, such as housing or power, where we might face very strong regulatory constraints and perhaps AI can’t get us out of those.
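The shares above can be sanity-checked with quick arithmetic. In this sketch the nonprofit share of GDP (~5%) and the 22.5% education midpoint are my own illustrative assumptions; the ~11%, ~2%, 20%-25%, and ~50% figures are the ones quoted above.

```python
# Back-of-envelope check on the GDP-share accounting above.
stallable = 0.11        # non-duplicative, non-transfer, non-interest slice of GDP
admin = 0.02            # government administration, already showing AI gains
nonprofit_share = 0.05  # ASSUMED nonprofit share of GDP, for illustration only
edu_frac = 0.225        # midpoint of "20%-25% education"
health_frac = 0.50      # "~50% health care related", double-counted elsewhere

could_stall = stallable - admin  # the "other 9%" that is a policy choice
nonprofit_remainder = nonprofit_share * (1 - edu_frac - health_frac)

print(f"government share that could plausibly stall: {could_stall:.0%}")
print(f"non-duplicative nonprofit remainder: {nonprofit_remainder:.2%} of GDP")
```

Under these assumed inputs the remainder comes out around 1.4% of GDP, in line with the ~1.3% figure quoted above.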

  3. (1:30) He says it will take ~30 years for sectors of the economy that do not use AI well to be replaced by those that do use AI well.

    1. That’s a very long time, even in an AI fizzle scenario. I roll to disbelieve that estimate in most cases. But let’s even give it to him, and say it is true, and it takes 30 years to replace them, while the productivity of the replacement goes up 5%/year above incumbents, which are stagnant. Then you delay the growth, but you don’t prevent it, and if you assume this is a gradual transition you start seeing 1%+ yearly GDP growth boosts even in these sectors within a decade.

  4. He concludes by saying some less regulated areas grow a lot, but that doesn’t get you that much, so you can’t have the whole economy ‘growing by 40%’ in a nutshell.

    1. I mean, okay, but ‘growing by 40%’ is double the 20% growth Dwarkesh initially asked about. So what exactly can we get here? I can buy this as an argument for AI fizzle world growing slower than it would have otherwise, but the teaser has a prediction of 0.5%, which is a whole different universe.

  1. (2:20) Tyler asserts that the value of intelligence will go down because more intelligence will be available.

    1. Dare I call this the Lump of Intelligence fallacy, after the Lump of Labor fallacy? Yes, to the extent that you are doing the thing an AI can do, the value of that intelligence goes down, and the value of AI intelligence itself goes down in economic terms because its cost of production declines. But to the extent that your intelligence complements and unlocks the AI’s, or is empowered by the AI’s and is distinct from it (again, we must be in fizzle-world), the value of that intelligence goes up.

    2. Similarly, when he talks about intelligence as ‘one input’ in the system among many, that seems like a fundamental failure to understand how intelligence works, a combination of intelligence denialism (failure to buy that much greater intelligence could meaningfully exist) and a denial of substitution or ability to innovate as a result – you couldn’t use that intelligence to find alternative or better ways to do things, and you can’t use more intelligence as a substitute for other inputs. And you can’t substitute the things enabled more by intelligence much for the things that aren’t, and so on.

    3. It also assumes that intelligence can’t be used to convince us to overcome all these regulatory barriers and bottlenecks. Whereas I would expect that raising the intelligence baseline greatly would make it clear to everyone involved how painful our poor decisions were, and also enable improved forms of discourse and negotiation and cooperation and coordination, and also greatly favor those that embrace it over those that don’t, and generally allow us to take down barriers. Tyler would presumably agree that if we were to tear down the regulatory state in the places it was holding us back, that alone would be worth far more than his 0.5% of yearly GDP growth, even with no other innovation or AI.

  1. (2:50) Dwarkesh challenges Tyler by pointing out that the Industrial Revolution resulted in a greatly accelerated rate of economic growth versus previous periods, and asks what Tyler would say to someone from the past doubting it was possible. Tyler attempts to dodge (and is amusing while doing so) by saying they’d say ‘looks like it would take a long time’ and he would agree.

    1. Well, it depends what a long time is, doesn’t it? 2% sustained annual growth (or 8%!) is glacial in some sense and mind-boggling by ancient standards. ‘Take a long time’ in AI terms, such as what is actually happening now, could still look mighty quick if you compared it to most other things. OpenAI has 300 million MAUs.

  2. (3:20) Tyler trots out the ‘all the financial prices look normal’ line, that they are not predicting super rapid growth and neither are economists or growth experts.

    1. Yes, the markets are being dumb, the efficient market hypothesis is false, and also aren’t you the one telling me I should have been short the market? Well, instead I’m long, and outperforming. And yes, economists and ‘experts on economic growth’ aren’t predicting large amounts of growth, but their answers are Obvious Nonsense to me and saying that ‘experts don’t expect it’ without arguments why isn’t much of an argument.

  3. (3:40) Aside, since you kind of asked: So who am I to say different from the markets and the experts? I am Zvi Mowshowitz. Writer. Son of Solomon and Deborah Mowshowitz. I am the missing right hand of the one handed economists you cite. And the one warning you about what is about to kick Earth’s sorry ass into gear. I speak the truth as I see it, even if my voice trembles. And a warning that we might be the last living things this universe ever sees. God sent me.

  4. Sorry about that. But seriously, think for yourself, schmuck! Anyway.
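To put a number on the 30-year replacement point above: here is a toy model, with all parameters illustrative, where incumbents are stagnant, replacements are 5%/year more productive, and the replacement share of the sector phases in linearly over 30 years.

```python
# Toy model: stagnant incumbents are linearly replaced over 30 years by
# entrants whose productivity compounds at 5%/year. All numbers illustrative.
HORIZON = 30  # years until full replacement
EDGE = 1.05   # entrant productivity growth factor per year vs. incumbents

def sector_output(t: int) -> float:
    share = min(t / HORIZON, 1.0)        # fraction of the sector replaced by year t
    return (1 - share) * 1.0 + share * EDGE ** t

growth = {t: sector_output(t) / sector_output(t - 1) - 1 for t in range(1, 11)}
first_above_1pct = min(t for t, g in growth.items() if g >= 0.01)
print(f"sector growth first exceeds 1%/year in year {first_above_1pct}")
```

Even granting the full 30-year timeline, this toy sector’s growth contribution passes 1%/year within the first decade, which is the claim being made above.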

What would happen if we had more people? More of our best people? Got more out of our best people? Why doesn’t AI effectively do all of these things?

  1. (3:55) Tyler is asked wouldn’t a large rise in population drive economic growth? He says no, that’s too much of a one-factor model; in fact we’ve seen a lot of population growth without innovation or productivity growth.

    1. Except that Tyler is talking here about growth on a per capita basis. If you add AI workers, you increase the productive base, but they don’t count towards the capita.

  2. Tyler says ‘it’s about the quality of your best people and institutions.’

    1. But quite obviously AI should enable a vast improvement in the effective quality of your best people, it already does, Tyler himself would be one example of this, and also the best institutions, including because they are made up of the best people.

  3. Tyler says ‘there’s no simple lever, intelligence or not, that you can push on.’ Again, intelligence as some simple lever, some input component.

    1. The whole point of intelligence is that it allows you to do a myriad of more complex things, and to better choose those things.

  4. Dwarkesh points out the contradiction between ‘you are bottlenecked by your best people’ and asserting cost disease and constraint by your scarce input factors. Tyler says Dwarkesh is bottlenecked, Dwarkesh points out that with AGI he will be able to produce a lot more podcasts. Tyler says great, he’ll listen, but he will be bottlenecked by time.

    1. Dwarkesh’s point generalizes. AGI greatly expands the effective amount of productive time of the best people, and also extends their capabilities while doing so.

    2. AGI can also itself become ‘the best people’ at some point. If that was the bottleneck, then the goose asks, what happens now, Tyler?

  5. (5:15) Tyler cites that much of sub-Saharan Africa still does not have clean reliable water, and intelligence is not the bottleneck there. And that taking advantage of AGI will be like that.

    1. So now we’re expecting AGI in this scenario? I’m going to kind of pretend we didn’t hear that, or that this is a very weak AGI definition, because otherwise the scenario doesn’t make sense at all.

    2. Intelligence is not directly the bottleneck there, true, but yes quite obviously Intelligence Solves This if we had enough of it and put those minds to that particular problem and wanted to invest the resources towards it. Presumably Tyler and I mostly agree on why the resources aren’t being devoted to it.

    3. What would it mean for similar issues to that to be involved in taking advantage of AGI? Well, first, it would mean that you can’t use AGI to get to ASI (no, I can’t explain why), but again that’s got to be a baseline assumption here. After that, well, sorry, I failed to come up with a way to finish this that makes it make sense to me, beyond a general ‘humans won’t do the things and will throw up various political and legal barriers.’ Shrug?

  6. (5:35) Dwarkesh speaks about a claim that there is a key shortage of geniuses, and that America’s problems come largely from putting its geniuses in places like finance, whereas Taiwan puts them in tech, so the semiconductors end up in Taiwan. Wouldn’t having lots more of those types of people eat a lot of bottlenecks? What would happen if everyone had 1000 times more of the best people available?

  7. Tyler Cowen, author of a very good book about Talent and finding talent and the importance of talent, says he didn’t agree with that post, and returns to his claim that returns to IQ in the labor market are amazingly low, and that successful people are smart but mostly they have 8-9 areas where they’re an 8-9 on a 1-10 scale, with one 11+ somewhere, and a lot of determination.

    1. All right, I don’t agree that intelligence doesn’t offer returns now, and I don’t agree that intelligence wouldn’t offer returns even at the extremes, but let’s again take Tyler’s own position as a given…

    2. But that exactly describes what an AI gives you! An AI is the ultimate generalist. An AGI will be a reliable 8-9 on everything, actual everything.

    3. And it would also turn everyone else into an 8-9 on everything. So instead of needing to find someone 11+ in one area, plus determination, plus having 8-9 in ~8 areas, you can remove that last requirement. That will hugely expand the pool of people in question.

    4. So there are two obvious, very clear plans here. You can either use AI workers who have that ultimate determination and are 8-9 in everything and 11+ in the areas where AIs shine (e.g. math, coding, etc).

    5. Or you can also give your other experts an AI companion executive assistant to help them, and suddenly they’re an 8+ in everything and also don’t have to deal with a wide range of things.

  8. (6:50) Tyler says, talk to a committee at a Midwestern university about their plans for incorporating AI, then get back to him and talk to him about bottlenecks. Then write a report and the report will sound like GPT-4 and we’ll have a report.

    1. Yes, the committee will not be smart or fast about its official policy for how to incorporate AI into its existing official activities. If you talk to them now they will act like they have a plagiarism problem and that’s it.

    2. So what? Why do we need that committee to form a plan or approve anything or do anything at all right now, or even for a few years? All the students are already using AI. The professors are rapidly being forced to adapt to AI. Everyone doing the research will soon be using AI. Half that committee, three years from now, will have prepared for that meeting using AI. Their phones will all work based on AI. They’ll be talking to their AI phone assistant companions that plan their schedules. You think this will all involve 0.5% GDP growth?

  9. (7:20) Dwarkesh asks, won’t the AIs be smart, super conscientious and work super hard? Tyler explicitly affirms the 0.5% GDP growth estimate, that this will transform the world over 30 years but ‘over any given year we won’t so much notice it.’ Things like drug developments that would have taken 20 years now take 10 years, but you won’t feel it as revolutionary for a long time.

    1. I mean, it’s already getting very hard to miss. If you don’t notice it in 2025 or at least 2026, and you’re in the USA, check your pulse, you might be dead, etc.

    2. Is that saying we will double productivity in pharmaceutical R&D, and that it would have far more than doubled if progress didn’t require long expensive clinical trials, so other forms of R&D should be accelerated much more?

    3. For reference, according to Claude, R&D in general contributes about 0.3% to RGDP growth per year right now. Suppose we were to double that effect in the roughly half of current R&D spend that is bottlenecked in similar fashion, while the other half instead went up by more.

    4. Claude also estimates that R&D spending would, if returns to R&D doubled, go up by 30%-70% on net.

    5. So we seem to be looking at more than 0.5% RGDP growth per year from R&D effects alone, between additional spending on it and greater returns. And obviously AI is going to have additional other returns.
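The arithmetic above can be made explicit. Treating the quoted Claude estimates as given (a ~0.3%/yr baseline contribution and a 30%-70% rise in spending), and taking ‘returns double’ as a floor for both halves of R&D spend, a minimal sketch:

```python
# Rough arithmetic for the R&D effect, using the figures quoted above.
baseline = 0.003          # R&D's current contribution to RGDP growth (~0.3%/yr)
returns_multiplier = 2.0  # doubling of returns, taken as a floor for all spend
spend_multiplier = 1.3    # low end of the estimated 30%-70% spending increase

new_contribution = baseline * returns_multiplier * spend_multiplier
print(f"implied R&D contribution to growth: {new_contribution:.2%}/yr")
```

Even taking the conservative end of each range, the R&D channel alone exceeds the 0.5%/yr total growth figure being defended.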

This is a plausible bottleneck, but that implies rather a lot of growth.

  1. (8:00) Dwarkesh points out that Progress Studies is all about all the ways we could unlock economic growth, yet Tyler says that tons more smart conscientious digital workers wouldn’t do that much. What gives? Tyler again says bottlenecks, and adds on energy as an important consideration and bottleneck.

    1. Feels like bottleneck is almost a magic word or mantra at this point.

    2. Energy is a real consideration, yes the vision here involves spending a lot more energy, and that might take time. But also we see rapidly declining costs, including energy costs, to extract the same amount of intelligence, things like 10x savings each year.

    3. And for inference purposes we can outsource our needs elsewhere, which we would if this was truly bottlenecking explosive growth, and so on. So yes, I think energy will indeed be an important limiting factor and will be strained, especially in terms of pushing the frontier or if we want to use o3-style very expensive inference a lot.

    4. I don’t expect it to bind medium-term economic growth so much in a slow growth scenario, and the bottlenecks involved here shouldn’t compound with others. In a high growth takeoff scenario, I do think energy could bind far more impactfully.

    5. Another way of looking at this is that if the price of energy goes substantially up due to AI, or at least the price of energy outside of potentially ‘government-protected uses,’ then that can only happen if it is having a large economic impact. If it doesn’t raise the price of energy a lot, then no bottleneck exists.

Tyler Cowen and I think very differently here.

  1. (9:25) Fascinating moment. Tyler says he goes along with the experts in general, but agrees that ‘the experts’ on basically everything but AI are asleep at the wheel when it comes to AI – except when it comes to their views on diffusion of new technology in general, where the AI people are totally wrong. His view is, you get the right view by trusting the experts in each area, and combining them.

    1. Tyler seems to be making an argument from reference class expertise? That this is a ‘diffusion of technology’ question, so those who are experts on that should be trusted?

    2. Even if they don’t actually understand AI and what it is and its promise?

    3. That’s not how I roll. At all. As noted above in this post, and basically all the time. I think that you have to take the arguments being made, and see if you agree with them, and whether and how much they apply to the case of AI and especially AGI. Saying ‘the experts in area [X] predict [Y]’ is a reasonable placeholder if you don’t have the ability to look at the arguments and models and facts involved, but hey look, we can do that.

    4. Simply put, while I do think the diffusion experts are pointing to real issues that will importantly slow down adaptation, and indeed we are seeing what for many is depressingly slow adaptation, they won’t slow it down all that much, because this is fundamentally different. AI, and especially AI workers, ‘adapt themselves’ to a large extent; the intelligence and awareness involved is in the technology itself, and it is digital and we have a ubiquitous digital infrastructure we didn’t have until recently.

    5. It is also way too valuable a technology, even right out of the gate on your first day, and you will start to be forced to interact with it whether you like it or not, in ways that will make it very difficult and painful to ignore. And the places it is most valuable will move very quickly. And remember, LLMs will get a lot better.

    6. Suppose, as one would reasonably expect, by 2026 we have strong AI agents, capable of handling for ordinary people a wide variety of logistical tasks, sorting through information, and otherwise offering practical help. Apple Intelligence is partly here, Claude Alexa is coming, Project Astra is coming, and these are pale shadows of the December 2025 releases I expect. How long would adaptation really take? Once you have that, what stops you from then adapting AI in other ways?

    7. Already, yes, adaptation is painfully slow, but it is also extremely fast. In two years ChatGPT alone has reached 300 million MAU. A huge chunk of homework and grading is done via LLMs. A huge chunk of coding is done via LLMs. The reason why LLMs are not catching on even faster is that they’re not quite ready for prime time in the fully user-friendly ways normies need. That’s about to change in 2025.

Dwarkesh tries to use this as an intuition pump. Tyler’s not having it.

  1. (10:15) Dwarkesh asks, what would happen if the world population doubled? Tyler says, it depends what you’re measuring. Energy use would go up. But he doesn’t agree with population-based models, too many other things matter.

    1. Feels like Tyler is answering a different question. I see Dwarkesh as asking, wouldn’t the extra workers mean we could simply get a lot more done, wouldn’t (total, not per capita) GDP go up a lot? And Tyler’s not biting.

  2. (11:10) Dwarkesh tries asking about shrinking the population 90%. With shrinking, Tyler says, the delta can kill you, whereas growth might not help you.

    1. Very frustrating. I suppose this does partially respond, by saying that it is hard to transition. But man I feel for Dwarkesh here. You can feel his despair as he transitions to the next question.

  1. (11:35) Dwarkesh asks what are the specific bottlenecks? Tyler says: Humans! All of you! Especially you who are terrified.

    1. That’s not an answer yet, but then he actually does give one.

  2. He says once AI starts having impact, there will be a lot of opposition to it, not primarily on ‘doomer’ grounds but based on: Yes, this has benefits, but I grew up and raised my kids for a different way of life, I don’t want this. And there will be a massive fight.

    1. Yes. He doesn’t even mention jobs directly but that will be big too. We already see that the public strongly dislikes AI when it interacts with it, for reasons I mostly think are not good reasons.

    2. I’ve actually been very surprised how little resistance there has been so far, in many areas. AIs are basically being allowed to practice medicine, to function as lawyers, and do a variety of other things, with no effective pushback.

    3. The big pushback has been for AI art and other places where AI is clearly replacing creative work directly. But that has features that seem distinct.

    4. Yes people will fight, but what exactly do they intend to do about it? People have been fighting such battles for a while, every year I watch the battle for Paul Bunyan’s Axe. He still died. I think there’s too much money at stake, too much productivity at stake, too many national security interests.

    5. Yes, it will cause a bunch of friction, and slow things down somewhat, in the scenarios like the one Tyler is otherwise imagining. But if that’s the central actual thing, it won’t slow things down all that much in the end. Rarely has.

    6. We do see some exceptions, especially involving powerful unions, where the anti-automation side seems to do remarkably well, see the port strike. But also see which side of that the public is on. I don’t like their long term position, especially if AI can seamlessly walk in and take over the next time they strike. And that, alone, would probably be +0.1% or more to RGDP growth.

  1. (12:15) Dwarkesh tries using China as a comparison case. If you can do 8% growth for decades merely by ‘catching up’ why can’t you do it with AI? Tyler responds, China’s in a mess now, they’re just a middle income country, they’re the poorest Chinese people on the planet, a great example of how hard it is to scale. Dwarkesh pushes back that this is about the previous period, and Tyler says well, sure, from the $200 level.

    1. Dwarkesh is so frustrated right now. He’s throwing everything he can at Tyler, but Tyler is such a polymath that he has detail points for anything and knows how to pivot away from the intent of the questions.

  1. (13:40) Dwarkesh asks, has Tyler’s attitude on AI changed from nine months ago? He says he sees more potential and there was more progress than he expected, especially o1 (this was before o3). The questions he wrote for GPT-4, which Dwarkesh got all wrong, are now too easy for models like o1. And he ‘would not be surprised if an AI model beat human experts on a regular basis within three years.’ He equates it to the first Kasparov vs. Deep Blue match, which Kasparov won, before the second match which he lost.

    1. I wouldn’t be surprised if this happens in one year.

    2. I wouldn’t be that shocked o3 turns out to do it now.

    3. Tyler’s expectations here, to me, contradict his statements earlier. Not strictly, they could still both be true, but it seems super hard.

    4. How much would the availability of above-human-level economic thinking aid economic growth? How much would better economic policy aid economic growth?

We take a detour to other areas, I’ll offer brief highlights.

  1. (15:45) Why is it important that founders stay in charge? Courage. Making big changes.

  2. (19:00) What is going on with the competency crisis? Tyler sees high variance at the top. The best are getting better, such as in chess or basketball, and also a decline in outright crime and failure. But there’s a thick median not quite at the bottom that’s getting worse, and while he thinks true median outcomes are about static (since more kids take the tests) that’s not great.

  3. (22:30) Bunch of shade on both Churchill generally and on being an international journalist, including saying it’s not that impressive because how much does it pay?

    1. He wasn’t paid that much as Prime Minister either, you know…

  4. (24:00) Why are all our leaders so old? Tyler says current year aside we’ve mostly had impressive candidates, and most of the leadership in Washington in various places (didn’t mention Congress!) is impressive. Yay Romney and Obama.

    1. Yes, yay Romney and Obama as our two candidates. So it’s only been three election cycles where both candidates have been… not ideal. I do buy Tyler’s claim that Trump has a lot of talent in some ways, but, well, ya know.

    2. If you look at the other candidates for both nominations over that period, I think you see more people who were mostly also not so impressive. I would happily have taken Obama over every candidate on the Democratic side in 2016, 2020 or 2024, and Romney over every Republican (except maybe Kasich) in those elections as well.

    3. This also doesn’t address Dwarkesh’s concern about age. What about the age of Congress and their leadership? It is very old, on both sides, and things are not going so great.

    4. I can’t speak to the quality of people in the agencies.

  5. (27:00) Commentary on early-mid 20th century leaders being terrible, and how when there is big change there are arms races and sometimes bad people win them (‘and this is relevant to AI’).

For something that is going to not cause that much growth, Tyler sees AI as a source for quite rapid change in other ways.

  1. (34:20) Tyler says all inputs other than AI rise in value, but you have to do different things. He’s shifting from producing content to making connections.

    1. This again seems to be a disconnect. If AI is sufficiently impactful as to substantially increase the value of all other inputs, then how does that not imply substantial economic growth?

    2. Also this presumes that the AI can’t be a substitute for you, or that it can’t be a substitute for other people that could in turn be a substitute for you.

    3. Indeed, I would think the default model would presumably be that the value of all labor goes down, even for things where AI can’t do it (yet) because people substitute into those areas.

  2. (35:25) Tyler says he’s writing his books primarily for the AIs, he wants them to know he appreciates them. And the next book will be even more for the AIs so it can shape how they see the AIs. And he says, you’re an idiot if you’re not writing for the AIs.

    1. Basilisk! Betrayer! Misaligned!

    2. ‘What the AIs will think of you’ is actually an underrated takeover risk, and I pointed this out as early as AI #1.

    3. The AIs will be smarter and better at this than you, and also will be reading what the humans say about you. So maybe this isn’t as clever as it seems.

    4. My mind boggles that it could be correct to write for the AIs… but you think they will only cause +0.5% GDP annual growth.

  3. (36:30) What won’t AIs get from one’s writing? That vibe you get talking to someone for the first 3 minutes? Sense of humor?

    1. I expect the AIs will increasingly have that stuff, at least if you provide enough writing samples. They have true sight.

    2. Certainly if they have interview and other video data to train with, that will work over time.

  1. (37:25) What happens when Tyler turns down a grant in the first three minutes? Usually it’s failure to answer a question, like ‘how do you build out your donor base?’ without which you have nothing. Or someone focuses on the wrong things, or cares about the wrong status markers, and 75% of the value doesn’t display on the transcript, which is weird since the things Tyler names seem like they would be in the transcript.

  2. (42:15) Tyler’s portfolio is diversified mutual funds, US-weighted. He has legal restrictions on most other actions such as buying individual stocks, but he would keep the same portfolio regardless.

    1. Mutual funds over ETFs? Gotta chase that lower expense ratio.

    2. I basically think This Is Fine as a portfolio, but I do think he could do better if he actually tried to pick winners.

  3. (42:45) Tyler expects gains to increasingly fall to private companies that see no reason to share their gains with the public. He doesn’t have enough wealth to get into good investments, but also has enough wealth for his purposes anyway; if he had more money he’d mostly do what he’s doing anyway.

    1. Yep, I think he’s right about what he would be doing, and I too would mostly be doing the same things anyway. Up to a point.

    2. If I had a billion dollars or what not, that would be different, and I’d be trying to make a lot more things happen in various ways.

    3. This implies the efficient market hypothesis is rather false, doesn’t it? The private companies are severely undervalued in Tyler’s model. If private markets ‘don’t want to share the gains’ with public markets, that implies that public markets wouldn’t give fair valuations to those companies. Otherwise, why would one want such lack of liquidity and diversification, and all the trouble that comes with staying private?

    4. If that’s true, what makes you think Nvidia should only cost $140 a share?

Tyler Cowen doubles down on dismissing AI optimism, and is done playing nice.

  1. (46:30) Tyler circles back to the rate of diffusion of tech change, and has a very clear attitude of I’m right and everyone else is being an idiot by not agreeing with me, that all they have are ‘AI will immediately change everything’ and ‘some hyperventilating blog posts.’ AIs making more AIs? Diminishing returns! Ricardo knew this! Well, that was about humans breeding. But it’s good that San Francisco ‘doesn’t know about’ diminishing returns and the correct pessimism that results.

    1. This felt really arrogant, and willfully out of touch with the actual situation.

    2. You can say the AIs wouldn’t be able to do this, but: No, ‘Ricardo didn’t know that’ and saying ‘diminishing returns’ does not apply here, because the whole ‘AIs making AIs’ principle is that the new AIs would be superior to the old AIs, a cycle you could repeat. The core reason you get eventual diminishing returns from more people is that they’re drawn from the same people distribution.

    3. I don’t even know what to say at this point to ‘hyperventilating blog posts.’ Are you seriously making the argument that if people write blog posts, that means their arguments don’t count? I mean, yes, Tyler has very much made exactly this argument in the past, that if it’s not in a Proper Academic Journal then it does not count and he is correct to not consider the arguments or update on them. And no, they’re mostly not hyperventilating or anything like that, but that’s also not an argument even if they were.

    4. What we have are, quite frankly, extensive highly logical, concrete arguments about the actual question of what [X] will happen and what [Y]s will result from that, including pointing out that much of the arguments being made against this are Obvious Nonsense.

    5. Diminishing returns holds as a principle in a variety of conditions, yes, and is a very important concept to know. But there are other situations with increasing returns, and also a lot of threshold effects, even outside of AI. And San Francisco importantly knows this well.

    6. Saying there must be diminishing returns to intelligence, and that this means nothing that fast or important is about to happen when you get a lot more of it, completely begs the question of what it even means to have a lot more intelligence.

    7. Earlier Tyler used chess and basketball as examples, and talked about the best youth being better, and how that was important because the best people are a key bottleneck. That sounds like a key case of increasing returns to scale.

    8. Humanity is a very good example of where intelligence, at least up to some critical point, very obviously had increasing returns to scale. If you are below a certain threshold of intelligence as a human, your effective productivity is zero. Humanity having a critical amount of intelligence gave it mastery of the Earth. Tell the gorillas and lions that still exist about decreasing returns to intelligence.

    9. For various reasons, with the way our physical world and civilization are constructed, we typically don’t end up rewarding relatively high intelligence individuals with that much in the way of outsized economic returns versus ordinary slightly-above-normal intelligence individuals.

    10. But that is very much a product of our physical limitations and current social dynamics and fairness norms, and the concept of a job with essentially fixed pay, and actual good reasons not to try for many of the higher paying jobs out there in terms of life satisfaction.

    11. In areas and situations where this is not the case, returns look very different.

    12. Tyler Cowen himself is an excellent example of increasing returns to scale. The fact that Tyler can read and do so much enables him to do the thing he does at all, and to enjoy oversized returns in many ways. And if you decreased his intelligence substantially, he would be unable to produce at anything like this level. If you increased his intelligence substantially or ‘sped him up’ even more, I think that would result in much higher returns still, and also AI has made him substantially more productive already as he no doubt realizes.

    13. (I’ve been over all this before, but seems like a place to try it again.)

Trying to wrap one’s head around all of it at once is quite a challenge.

  1. (48:45) Tyler worries about despair in certain areas from AI, and about how happy it will make us, despite expecting full employment pretty much forever.

    1. If you expect full employment forever, then either you expect AI progress to fully stall, or there’s something very important you really don’t believe in, or both. I don’t understand: what does Tyler think happens once AIs can do anything digital as well as most or all humans? What does he think will happen when we use that to solve robotics? What are all these humans going to be doing to get to full employment?

    2. It is possible the answer is ‘government mandated fake jobs’ but then it seems like an important thing to say explicitly, since that’s actually more like UBI.

  2. Tyler Cowen: “If you don’t have a good prediction, you should be a bit wary and just say, “Okay, we’re going to see.” But, you know, some words of caution.”

    1. YOU DON’T SAY.

    2. Further implications left as an exercise to the reader, who is way ahead of me.

  1. (54:30) Tyler says that the people in DC are wise and think on the margin, whereas the SF people are not wise and think in infinities (elsewhere he also says they’re hands down the most intelligent), and the EU people are wisest of all, but that if the EU people ran the world the growth rate would be -1%. Whereas the USA has so far maintained the necessary balance here well.

    1. If the wisdom you have would bring you to that place, are you wise?

    2. This is such a strange view of what constitutes wisdom. Yes, the wise man here knows more things, is more cultured, and thinks prudently on the margin. But as Tyler points out, a society of such people would decay and die. It is not productive. In the ultimate test, outcomes, and in supporting growth, it fails.

    3. Tyler says you need balance, but he’s at a Progress Studies conference, which should make it clear that no, America has grown in this sense ‘too wise’ and insufficiently willing to grow, at least on the wise margin.

    4. Given what the world is about to be like, you need to think in infinities. You need to be infinitymaxing. The big stuff really will matter more than the marginal revolution. That’s kind of the point.

    5. You still have to, day to day, constantly think on the margin, of course.

  2. (55:10) Tyler says he’s a regional thinker from New Jersey, that he is an uncultured barbarian who only has a veneer of culture from his collection of information, that knowing about culture is not the same as being cultured, and that America falls flat in a lot of ways that would bother a cultured Frenchman, but Tyler is used to them so they don’t bother him.

    1. I think Tyler is wrong here, to his own credit. He is not a regional thinker; if anything he is far less a regional thinker than the typical ‘cultured’ person he speaks about. And to the extent that he is ‘uncultured,’ it is because he has not taken on many of the burdens and social obligations of culture, and those things are to be avoided. He would be fully capable of ‘acting cultured’ if the situation called for it, and no one would be mistaken in thinking him so.

    2. He refers to his approach as an ‘autistic approach to culture.’ He seems to mean this in a pejorative way, that an autistic approach to things is somehow not worthy or legitimate or ‘real.’ I think it is all of those things.

    3. Indeed, the autistic-style approach to pretty much anything, in my view, is Playing in Hard Mode, with much higher startup costs, but brings a deeper and superior understanding once completed. The cultured Frenchman is like a fish in water, whereas Tyler understands and can therefore act on a much deeper, more interesting level. He can deploy culture usefully.

  3. (56:00) What is autism? Tyler says it is officially defined by deficits, by which definition no one there [at the Progress Studies conference] is autistic. But in terms of other characteristics, maybe a third of them would count.

    1. I think the term autistic has been expanded and overloaded in a way that was not wise, but at this point we are stuck with it. So now, in different contexts, it means both the deficits and also the general approach that high-functioning people with those deficits come to take to navigating life: consciously processing and knowing the elements of systems and how they fit together, treating words as having meanings, and having a map that matches the territory, whereas those who are not autistic navigate largely on vibes.

    2. By this definition, being the non-deficit form of autistic is excellent, a superior way of being at least in moderation and in the right spots, for those capable of handling it and its higher cognitive costs.

    3. Indeed, many people have essentially none of this set of positive traits and ways of navigating the world, and it makes them very difficult to deal with.

  4. (56:45) Why is tech so bad at having influence in Washington? Tyler says they’re getting a lot more influential quickly, largely due to national security concerns, which is why AI is being allowed to proceed.

For a while now I have found Tyler Cowen’s positions on AI very frustrating (see for example my coverage of the 3rd Cowen-Patel podcast), especially on questions of potential existential risk and expected economic growth, and on what intelligence means, what it can do, and what it is worth. This podcast did not address existential risks at all, so most of this post is about me trying (once again!) to explain why Tyler’s views on returns to intelligence and future economic growth don’t make sense to me, and seem well outside reasonable bounds.

I try to offer various arguments and intuition pumps, playing off of Dwarkesh’s attempts to do the same. It seems like there are very clear pathways, using Tyler’s own expectations and estimates, that on their own establish more growth than he expects, assuming AI is allowed to proceed at all.

I gave only quick coverage to the other half of the podcast, but don’t skip that other half. I found it very interesting, with a lot of new things to think about, but they aren’t areas where I feel as ready to go into detailed analysis, and I was doing triage. In a world where we all had more time, I’d love to do deep dives into those areas too.

On that note, I’d also point everyone to Dwarkesh Patel’s other recent podcast, which was with physicist Adam Brown. It repeatedly blew my mind in the best of ways, and I’d love to be in a different branch where I had the time to dig into some of the statements here. Physics is so bizarre.

