Author name: Kelly Newman


The 2024 Rolex 24 at Daytona put on very close racing for a record crowd

actually 23 hours and 58 minutes this time —

The around-the-clock race marked the start of the North American racing calendar.


The current crop of GTP hybrid prototypes looks wonderful, thanks to rules that cap the amount of downforce they can generate in favor of more dramatic styling.

Porsche Motorsport

DAYTONA BEACH, Fla.—Near-summer temperatures greeted a record crowd at the Daytona International Speedway in Florida last weekend. At the end of each January, the track hosts the Rolex 24, an around-the-clock endurance race that’s now as high-profile as it has ever been during the event’s 62-year history.

Between the packed crowd and the 59-car grid, there’s proof that sports car racing is in good shape. Some of that might be attributable to Drive to Survive‘s rising tide lifting a bunch of non-F1 boats, but there’s more to the story than just a resurgent interest in motorsport. The dramatic-looking GTP prototypes have a lot to do with it—powerful hybrid racing cars from Acura, BMW, Cadillac, and Porsche are bringing in the fans and, in some cases, some pretty famous drivers with F1 or IndyCar wins on their resumes.

But IMSA and the Rolex 24 are about more than just the top class of cars; in addition to the GTP hybrids, the field also comprised the very competitive pro-am LMP2 prototype class and a pair of classes (one for professional teams, another for pro-ams) for production-based machines built to a global set of rules, called GT3. (To be slightly confusing, in IMSA, those classes are known as GTD-Pro and GTD. More on sports car racing being needlessly confusing later.)

The crowd for the 2024 Rolex 24 was larger even than last year. This is the pre-race grid walk, which I chose to watch from afar.

Jonathan Gitlin

There was even a Hollywood megastar in attendance: the crew of the Jerry Bruckheimer-produced, Joseph Kosinski-directed racing movie starring Brad Pitt was at the track filming scenes for the start of that film.

GTP finds its groove

Last year’s Rolex 24 was the debut of the new GTP cars, and they didn’t have an entirely trouble-free race. These cars are some of the most complicated sports prototypes ever to turn a wheel, thanks to their hybrid systems, and during the 2023 race, two of the entrants required lengthy stops to replace their hybrid batteries. Those teething troubles are a thing of the past, and over the last 12 months, the cars have found an awful lot more speed, with most of the 10-car class breaking Daytona’s lap record during qualifying.

Most of that new speed has come from the teams’ familiarity with the cars after a season of racing but also from a year of software development. Only Porsche’s 963 has had any mechanical upgrades during the off-season. “You… will not notice anything on the outside shell of the car,” explained Urs Kuratle, Porsche Motorsport’s director of factory racing. “So the aerodynamics, all [those] things, they look the same… Sometimes it’s a material change, where a fitting used to be out of aluminum and due to reliability reasons we change to steel or things like this. There are minor details like this.”

  • This year, the Wayne Taylor Racing team had not one but two ARX-06s. I expected the cars to be front-runners, but a late Balance of Performance (BoP) change added another 40 kg.

    Jonathan Gitlin

  • The Cadillacs are fan favorites because of their loud, naturally aspirated V8s. I think the car looks better than the other GTP cars, too.

    Jonathan Gitlin

  • Porsche’s 963 is the only GTP car that has had any changes since last year, but they’re all under the bodywork.

    Jonathan Gitlin

  • Porsche is the only manufacturer to start selling customer GTP cars so far. The one on the left is the Proton Competition Mustang Sampling car; the one on the right belongs to JDC-Miller MotorSports.

    Jonathan Gitlin

GTP cars aren’t as fast or even as powerful as an F1 single-seater, but the driver workload inside the cockpit may be even higher. At last year’s season-ending Petit Le Mans, former F1 champion Jenson Button—then making a guest appearance in the privateer-run JDC-Miller MotorSports Porsche 963—came away with a newfound respect for how many different systems could be tweaked from the steering wheel.



Tesla’s week gets worse: Fines, safety investigation, and massive recall

toxic waste, seriously? —

There have been 2,388 complaints about steering failure in the Model 3 and Model Y.


More than 2,000 model-year 2023 Tesla Model 3 and Model Y vehicles have suffered steering failure, according to a new NHTSA safety defect investigation.

Sjoerd van der Wal/Getty Images

It’s been a rough week for Tesla. On Tuesday, a court in Delaware voided a massive $55.8 billion pay package for CEO Elon Musk. Then, news emerged that Tesla was being sued by 25 different counties in California for years of dumping toxic waste. That was followed by a recall affecting 2.2 million Teslas. Now, Ars has learned that the National Highway Traffic Safety Administration’s Office of Defects Investigation is investigating the company after 2,388 complaints of steering failure affecting the model-year 2023 Model 3 sedan and Model Y crossover.

Paint, brake fluid, used batteries, antifreeze, diesel

Tesla has repeatedly run afoul of laws designed to protect the environment from industrial waste. In 2019, author Edward Niedermeyer cataloged the troubles the company ran into with air pollution from its paint shop in Fremont, California, some of which occurred when the automaker took to painting its cars in a temporary tent-like marquee.

In 2022, the US Environmental Protection Agency fined Tesla $275,000 for violating the Clean Air Act, which followed a $31,000 penalty Tesla paid to the EPA in 2019. But EPA data shows that Tesla continued to violate the Clean Air Act in 2023.

And on Wednesday, Reuters reported that 25 California counties sued Tesla for violating the state’s hazardous waste laws and unfair business laws by improperly labeling hazardous waste before sending it to landfills that were not able to deal with the material.

The suit alleged that violations occurred at more than 100 facilities, including the factory in Fremont, and that Tesla disposed of hazardous materials including “but not limited to: lubricating oils, brake fluids, lead acid batteries, aerosols, antifreeze, cleaning fluids, propane, paint, acetone, liquified petroleum gas, adhesives and diesel fuel.”

Despite potentially large penalties for these industrial waste violations, which could have resulted in tens of thousands of dollars of fines for each day the automaker was not compliant, the counties and Tesla swiftly settled the suit on Thursday. Tesla, which had annual revenues of $96.8 billion in 2023, will pay just $1.3 million in civil penalties and an additional $200,000 in costs. The company is supposed to properly train its employees and hire a third party to conduct annual waste audits at 10 percent of its facilities, according to the Office of the District Attorney in San Francisco.

“While electric vehicles may benefit the environment, the manufacturing and servicing of these vehicles still generates many harmful waste streams,” said District Attorney Brooke Jenkins. “Today’s settlement against Tesla, Inc. serves to provide a cleaner environment for citizens throughout the state by preventing the contamination of our precious natural resources when hazardous waste is mismanaged and unlawfully disposed. We are proud to work with our district attorney partners to enforce California’s environmental laws to ensure these hazardous wastes are handled properly.”

An easy recall, a not-so-easy defect investigation

Tesla’s latest recall is a big one, affecting 2,193,869 vehicles—nearly every Tesla sold in the US, including the Model S (model years 2012–2023), the Model X (model years 2016–2024), the Model 3 (model years 2017–2023), the Model Y (model years 2019–2024), and the Cybertruck.

According to the official Part 573 Safety Notice, the issue is due to the cars’ displays, which use a font for the brake, park, and antilock brake warning indicators that is smaller than is legally required under the federal motor vehicle safety standards. NHTSA says it noticed the problem as part of a routine compliance audit on a Model Y in early January. After the agency informed the automaker, Tesla looked into the issue itself, and on January 24, it decided to issue a safety recall. Fortunately for the automaker, it can fix this problem with a software update.

A software patch is unlikely to help its other safety defect problem, however. Yesterday, NHTSA’s ODI upgraded a preliminary evaluation (begun in July 2023) to a full investigation of the steering components fitted to model-year 2023 Models 3 and Y.

NHTSA’s ODI says the problem affects up to 334,569 vehicles, which could suffer a loss of steering control. There have been 124 complaints of steering failure to NHTSA, and the agency says Tesla identified a further 2,264 customer complaints related to the problem. So far, at least one Tesla has crashed as a result of being unable to complete a right turn in an intersection.
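The complaint figures in the investigation are internally consistent; a quick check of the arithmetic (numbers from the article, variable names my own):

```python
# Tally the steering-failure complaints cited in NHTSA's investigation.
# Figures come from the article text, not from the ODI filing itself.
nhtsa_complaints = 124     # complaints filed directly with NHTSA
tesla_identified = 2_264   # additional customer complaints Tesla identified

total_complaints = nhtsa_complaints + tesla_identified
print(total_complaints)  # prints 2388, the total cited at the top of the story
```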

A third of the complaints were reported to have happened at speeds below 5 mph, with the majority occurring between 5 and 35 mph and about 10 percent occurring above 35 mph (at least one complaint alleges the problem occurred at 75 mph). “A majority of allegations reported seeing a warning message, ‘Steering assist reduced,’ either before, during, or after the loss of steering control. A portion of drivers described their steering begin to feel ‘notchy’ or ‘clicky’ either prior to or just after the incident,” NHTSA’s investigation said.

NHTSA says there have been “multiple allegations of drivers blocking intersections and/or roadways,” and that more than 50 Teslas had to be towed as a result of the problem. The problem appears to be related to two of the four steering rack part numbers that Tesla used for these model-year 2023 EVs. They were installed in 2,187 of the vehicles, according to the complaints.



Daily Telescope: A Wolf-Rayet star puts on a howling light show

Hungry like the wolf —

I’d like to see it go boom.

The Crescent Nebula.


1Zach1

Welcome to the Daily Telescope. There is a little too much darkness in this world and not enough light, a little too much pseudoscience and not enough science. We’ll let other publications offer you a daily horoscope. At Ars Technica, we’re going to take a different route, finding inspiration from very real images of a universe that is filled with stars and wonder.

Good morning. It’s February 2, and today’s image concerns an emission nebula about 5,000 light-years away in the Cygnus constellation.

Discovered more than 230 years ago by William Herschel, the Crescent Nebula is believed to be formed by an energetic stellar wind from a Wolf-Rayet star at its core colliding with slower-moving material ejected earlier in the star’s lifetime. Ultimately, this should all go supernova, which will be quite spectacular.

Will you or I be alive to see it? Probably not.

But in the meantime, we can enjoy the nebula for what it is. This photo was captured by Ars reader 1Zach1 with an Astro-Tech AT80ED refractor telescope in rural southwestern Washington. It was the product of more than 11 hours of integration: 228 exposures, each lasting three minutes.
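The stated exposure count and sub-exposure length bear out the quoted integration time; a quick check (figures from the article):

```python
# Total integration time from the exposure details given in the article.
exposures = 228      # number of sub-exposures
minutes_each = 3     # length of each sub-exposure, in minutes

total_minutes = exposures * minutes_each
total_hours = total_minutes / 60
print(f"{total_minutes} minutes, about {total_hours:.1f} hours")
```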

Have a great weekend, everyone.

Source: 1Zach1

Do you want to submit a photo for the Daily Telescope? Reach out and say hello.



Why interstellar objects like ‘Oumuamua and Borisov may hold clues to exoplanets

celestial nomads —

Two celestial interlopers in the Solar System have scientists eagerly anticipating more.


The first interstellar interloper detected passing through the Solar System, 1I/‘Oumuamua, came within 24 million miles of the Sun in 2017. It’s difficult to know exactly what ‘Oumuamua looked like, but it was probably oddly shaped and elongated, as depicted in this illustration.

On October 17 and 18, 2017, an unusual object sped across the field of view of a large telescope perched near the summit of a volcano on the Hawaiian island of Maui. The Pan-STARRS1 telescope was designed to survey the sky for transient events, like asteroid or comet flybys. But this was different: The object was not gravitationally bound to the Sun or to any other celestial body. It had arrived from somewhere else.

The mysterious object was the first visitor from interstellar space observed passing through the Solar System. Astronomers named it 1I/‘Oumuamua, borrowing a Hawaiian word that roughly translates to “messenger from afar arriving first.” Two years later, in August 2019, amateur astronomer Gennadiy Borisov discovered the only other known interstellar interloper, now called 2I/Borisov, using a self-built telescope at the MARGO observatory in Nauchnij, Crimea.

While typical asteroids and comets in the Solar System orbit the Sun, ‘Oumuamua and Borisov are celestial nomads, spending most of their time wandering interstellar space. The existence of such interlopers in the Solar System had been hypothesized, but scientists expected them to be rare. “I never thought we would see one,” says astrophysicist Susanne Pfalzner of the Jülich Supercomputing Center in Germany. At least not in her lifetime.

With these two discoveries, scientists now suspect that interstellar interlopers are much more common. Right now, within the orbit of Neptune alone, there could be around 10,000 ‘Oumuamua-size interstellar objects, estimates planetary scientist David Jewitt of UCLA, coauthor of an overview of the current understanding of interstellar interlopers in the 2023 Annual Review of Astronomy and Astrophysics.

Researchers are busy trying to answer basic questions about these alien objects, including where they come from and how they end up wandering the galaxy. Interlopers could also provide a new way to probe features of distant planetary systems.

But first, astronomers need to find more of them.

“We’re a little behind at the moment,” Jewitt says. “But we expect to see more.”

2I/Borisov appears as a fuzzy blue dot in front of a distant spiral galaxy (left) in this November 2019 image taken by the Hubble Space Telescope when the object was approximately 200 million miles from Earth.


Alien origins

At least since the beginning of the 18th century, astronomers have considered the possibility that interstellar objects exist. More recently, computer models have shown that the Solar System sent its own population of smaller bodies into the voids of interstellar space long ago due to gravitational interactions with the giant planets.

Scientists expected most interlopers to be exocomets composed of icy materials. Borisov fit this profile: It had a tail made of gases and dust created by ices that evaporated during its close passage to the Sun. This suggests that it originated in the outer region of a planetary system where temperatures were cold enough for gases like carbon monoxide to have frozen into its rocks. At some point, something tossed Borisov, roughly a kilometer across, out of its system.

One potential culprit is a stellar flyby. The gravity of a passing star can eject smaller bodies, known as planetesimals, from the outer reaches of a system, according to a recent study led by Pfalzner. A giant planet could also eject an object from the outer regions of a planetary system if an asteroid or comet gets close enough for the planet’s gravitational tug to speed up the smaller body enough for it to escape its star’s hold. Close approaches can also happen when planets migrate across their planetary systems, as Neptune is thought to have done in the early Solar System.



Rocket Report: SpaceX at the service of a rival; Endeavour goes vertical

Stacked —

The US military appears interested in owning and operating its own fleet of Starships.

Space shuttle Endeavour, seen here in protective wrapping, was mounted on an external tank and inert solid rocket boosters at the California Science Center.

Welcome to Edition 6.29 of the Rocket Report! Right now, SpaceX’s Falcon 9 rocket is the only US launch vehicle offering crew or cargo service to the International Space Station. The previous version of Northrop Grumman’s Antares rocket retired last year, forcing that company to sign a contract with SpaceX to launch its Cygnus supply ships to the ISS. And we’re still waiting on United Launch Alliance’s Atlas V (no fault of ULA) to begin launching astronauts on Boeing’s Starliner crew capsule to the ISS. Basically, it’s SpaceX or bust. It’s a good thing that the Falcon 9 has proven to be the most reliable rocket in history.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below. Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Virgin Galactic flies four passengers to the edge of space. Virgin Galactic conducted its first suborbital mission of 2024 on January 26 as the company prepares to end flights of its current spaceplane, Space News reports. The flight, called Galactic 06 by Virgin Galactic, carried four customers for the first time, along with its two pilots, on a suborbital hop over New Mexico aboard the VSS Unity rocket plane. Previous commercial flights had three customers on board, along with a Virgin Galactic astronaut trainer. The customers, whom Virgin Galactic didn’t identify until after the flight, held US, Ukrainian, and Austrian citizenship.

Pending retirement … Virgin Galactic announced last year it would soon wind down flights of VSS Unity, citing the need to conserve its cash reserves for development of its next-generation Delta class of suborbital vehicles. Those future vehicles are intended to fly more frequently and at lower costs than Unity. After Galactic 06, Virgin Galactic said it will fly Unity again on Galactic 07 in the second quarter of the year with a researcher and private passengers. The company could fly Unity a final time later this year on the Galactic 08 mission. Since 2022, Virgin Galactic has been the only company offering commercial seats on suborbital spaceflights. The New Shepard rocket and spacecraft from competitor Blue Origin hasn’t flown people since a launch failure in September 2022. (submitted by Ken the Bin)

Iran launches second rocket in eight days. Iran launched a trio of small satellites into low-Earth orbit on January 28, Al Jazeera reports. This launch used Iran’s Simorgh rocket, which made its first successful flight into orbit after a series of failures dating back to 2017. The two-stage, liquid-fueled Simorgh rocket deployed three satellites. The largest of the group, named Mehda, was designed to measure the launch environments on the Simorgh rocket and test its ability to deliver multiple satellites into orbit. Two smaller satellites will test narrowband communication and geopositioning technology, according to Iran’s state media.

Back to back … This was a flight of redemption for the Simorgh rocket, which is managed by the civilian-run Iranian Space Agency. While the Simorgh design has repeatedly faltered, the Iranian military’s Islamic Revolutionary Guard Corps has launched two new orbital-class rockets in recent years. The military’s Qased launch vehicle delivered small satellites into orbit on three successful flights in 2020, 2022, and 2023. Then, on January 20, the military’s newest rocket, named the Qaem 100, put a small remote-sensing payload into orbit. Eight days later, the Iranian Space Agency finally achieved success with the Simorgh rocket. Previously, Iranian satellite launches have been spaced apart by at least several months. (submitted by Ken the Bin)


Rocket Lab’s first launch of 2024. Rocket Lab was back in action on January 31, kicking off its launch year with a recovery Electron mission from New Zealand. This was its second return-to-flight mission following a mishap late last year, Spaceflight Now reports. Rocket Lab’s Electron rocket released four Space Situational Awareness (SSA) satellites into orbit for Spire Global and NorthStar Earth & Space. Peter Beck, Rocket Lab’s founder and CEO, said in a statement that the company has more missions on the books for 2024 than in any year before. Last year, Rocket Lab launched 10 flights of its light-class Electron launcher.

Another recovery … Around 17 minutes after liftoff, the Electron’s first-stage booster splashed down in the Pacific Ocean under parachute. A recovery vessel was stationed nearby downrange from the launch base at Mahia Peninsula, located on the North Island of New Zealand. Rocket Lab has ambitions of re-flying a first stage booster in its entirety. Last August, it demonstrated partial reuse with the re-flight of a Rutherford engine salvaged from a booster recovered on a prior mission. (submitted by Ken the Bin)

PLD Space wins government backing. PLD Space has won the second and final round of a Spanish government call to develop sovereign launch capabilities, European Spaceflight reports. Spain’s Center for Technological Development and Innovation announced on January 26 that it selected PLD Space, which is developing a small launch vehicle called Miura 5, to receive a 40.5-million euro loan from a government fund devoted to aiding the Spanish aerospace sector, with a particular emphasis on access to space. Last summer, the Spanish government selected PLD Space and Pangea Aerospace to each receive 1.5 million euros in a preliminary funding round to mature their designs. PLD Space won the second round of the loan competition.

Moving toward Miura 5 … “The technical decision in favor of PLD Space confirms that our technological development strategy is sound and is based on a solid business plan,” said Ezequiel Sanchez, PLD Space’s executive president. “Winning this public contract to create a strategic national capability reinforces our position as a leading company in securing Europe’s access to space.” Miura 5 will be capable of launching about a half-ton of payload mass into low-Earth orbit and is scheduled to make its debut launch from French Guiana in late 2025 or early 2026, followed by the start of commercial operations later in 2026. PLD Space will need to repay the loan through royalties over the first 10 years of the commercial operation of Miura 5. (submitted by Leika)



On Dwarkesh’s 3rd Podcast with Tyler Cowen

This post contains my extensive thoughts on Tyler Cowen’s excellent talk with Dwarkesh Patel.

It is interesting throughout. You can read this while listening, after listening, or instead of listening; it is written to be compatible with all three options. The notes are in order in terms of what they are reacting to and were mostly written as I listened.

I see this as having been a few distinct intertwined conversations. Tyler Cowen knows more about more different things than perhaps anyone else, so that makes sense. Dwarkesh chose excellent questions throughout, displaying an excellent sense of when to follow up and how, and when to pivot.

The first conversation is about Tyler’s book GOAT, about the world’s greatest economists. Fascinating stuff; this made me more likely to read and review GOAT in the future if I ever find the time. I mostly agreed with Tyler’s takes here, to the extent I am in a position to know, as I have not read that much of what these men wrote. And even though I very much loved The Wealth of Nations at the time (don’t skip the digression on silver; I remember it being great), at this point it is largely a blur to me.

There were also questions about the world and philosophy in general, but not about AI, which I would mostly put in this first category. As usual, I have lots of thoughts.

The second conversation is about expectations given what I typically call mundane AI. What would the future look like, if AI progress stalls out without advancing too much? We cannot rule such worlds out and I put substantial probability on them, so it is an important and fascinating question.

If you accept the premise of AI remaining within the human capability range in some broad sense, where it brings great productivity improvements and rewards those who use it well but remains foundationally a tool and everything seems basically normal, essentially the AI-Fizzle world, then we have disagreements but Tyler is an excellent thinker about these scenarios. Broadly our expectations are not so different here.

That brings us to the third conversation, about the possibility of existential risk or the development of more intelligent and capable AI that would have greater affordances. For a while now, Tyler has asserted that such greater intelligence likely does not much matter, that not so much would change, that transformational effects are highly unlikely, whether or not they constitute existential risks. That the world will continue to seem normal, and follow the rules and heuristics of economics, essentially Scott Aaronson’s Futurama. Even when he says AIs will be decentralized and engage in their own Hayekian trading with their own currency, he does not think this has deep implications, nor does it imply much about what else is going on beyond being modestly (and only modestly) productive.

Then at other times he affirms the importance of existential risk concerns, and indeed says we will be in need of a hegemon, but the thinking here seems oddly divorced from other statements, and thus often rather confused. Mostly it seems consistent with the view that it is much easier to solve alignment quickly, build AGI and use it to generate a hegemon, than it would be to get any kind of international coordination. And also that failure to quickly build AI risks our civilization collapsing. But also I notice this implies that the resulting AIs will be powerful enough to enable hegemony and determine the future, when in other contexts he does not think they will even enable sustained 10% GDP growth.

Thus at this point, I choose to treat most of Tyler’s thoughts on AI as if they are part of the second conversation, with an implicit ‘assuming an AI at least semi-fizzle’ attached to them, at which point they become mostly excellent thoughts.

Dealing with the third conversation is harder. There are places where I feel Tyler is misinterpreting a few statements, in ways I find extremely frustrating and that I do not see him do in other contexts, and I pause to set the record straight in detail. I definitely see hope in finding common ground and perhaps working together. But so far I have been unable to find the road in.

  1. I don’t buy the idea that investment returns have tended to be negative, or that VC investment returns have overall been worse than the market, but I do notice that this is entirely compatible with long term growth due to positive externalities not captured by investors.

  2. I agree with Tyler that the entrenched VCs are highly profitable, but that other VCs, due to lack of good deal flow, adverse selection, and lack of skill, don’t have good returns. I do think excessive optimism produces competition that drives down returns, but that returns would otherwise be insane.

  3. I also agree with Tyler that those with potential for big innovations or otherwise very large returns both do well themselves and also capture only a small fraction of total returns they generate, and I agree that the true rate is unknown and 2% is merely a wild guess.

  4. And yes, many people foolishly (or due to highly valuing independence) start small businesses that will have lower expected returns than a job. But I think they are not foolish to value that independence highly versus taking a generic job, and I also believe that with proper attention to what actually causes success, plus hard work, a small business can have higher private returns than a job for a great many people. A bigger issue is that many small businesses are passion projects, such as restaurants and bars, where the returns tend to be extremely bad. But the reason the returns are low is exactly because so many people are passionate and want to do it.

  5. I find it silly to think that literal Keynes did not at the time have the ability to beat the market by anticipating what others would do. I am on record as saying the efficient market hypothesis is false, and certainly in this historical context it should be expected to be highly false. The reason you cannot easily make money from this kind of anticipation is that the anticipation is priced in, but Keynes was clearly in a position to notice when it was not priced in. I share Tyler’s disdain for where the argument was leading regarding socializing long-term investment, and I also think that long-term, fundamentals-based investing or building factories is profitable; having less insight and more risk should get priced in. That is indeed what I am doing with most of my investments.

  6. A financial system at 2% of wealth might not be growing in those terms, and maybe it’s not outrageous on its face, but it is at least suspicious: that’s a hell of a management fee, especially given that many assets aren’t financialized, and 8% of GDP still seems like a huge issue. And yes, I think that if that number goes up as wealth goes up, that still constitutes a very real problem.

  7. Risk behavior where you buy insurance for big things and take risks in small things makes perfect sense, both as mood management and otherwise, considering marginal utility curves and blameworthiness. You need to take a lot of small risks at minimum. No Gamble, No Future.

  8. The idea that someone’s failures are highly illustrative seems right; I also worry about people applying that idea too rigidly.

  9. The science of what lets people ‘get away with’ what is generally considered socially unacceptable behaviors while being prominent seems neglected.

  10. Tyler continues to bet on economic growth meaning things turn out well pretty much no matter what, whereas shrinking fertility risks things turning out badly. I find it so odd to model the future in ways that implicitly assume away AI.

  11. If hawks always gain long term status and pacifists always lose it, that does not seem like it can be true in equilibrium?

  12. I think that Hayek’s claim that there is a general natural human trend towards more socialism has been proven mostly right, and I’m confused why Tyler disagrees. I do think there are other issues we are facing now that are at least somewhat distinct from that question, and those issues are important, but also I would notice that those other problems are mostly closely linked to larger government intervention in markets.

  13. Urbanization is indeed very underrated. Housing theory of everything.

  14. ‘People overrate the difference between government and market’ is quite an interesting claim, that the government acts more like a market than you think. I don’t think I agree with this overall, although some doubtless do overrate it?

  15. (30:00) The market as the thing that finds a solution that gets us to the next day is a great way to think about it. And the idea that doing that, rather than solving for the equilibrium, is the secret of its success, seems important. It turns out that, partly because humans anticipate the future and plan for it, this changes what they are willing to do at what price today, and that this getting to tomorrow to fight another day will also do great things in the longer term. That seems exactly right, and also helps us point to the places this system might fail, while keeping in mind that it tends to succeed more than you would expect. A key question regarding AI is whether this will continue to work.

  16. Refreshing to hear that the optimum amount of legibility and transparency is highly nonzero but also not maximally high either.

  17. (34:00) Tyler reiterates that AIs will create their own markets and use their own currencies and property rights, that perhaps Bitcoin and NFTs will be involved, and that decentralized AI systems acting in self-interested ways will be an increasing portion of our economic activity. Which I agree is a baseline scenario of sorts if we dodge some other bullets. He even says that the human and AI markets will be fully integrated. And that those who are good at AI integration, at outsourcing their activities to AI, will be vastly more productive than those who do not (and by implication, will outcompete them).

  18. What I find frustrating is Tyler failing to then solve for the equilibrium and ask what happens next. If we are increasingly handing economic activity over to self-interested competitive AI agents who compete against each other in a market and to get humans to allocate power and resources to them, subject to the resulting capitalistic and competitive and evolutionary and selection dynamics, where does that lead? How do we survive? I would, as Tyler often requests, Model This, except that I don’t see how not to assume the conclusion.

  19. (37:00) Tyler expresses skepticism that GPT-N can scale up its intelligence that far, that beyond 5.5 maybe integration with other systems matters more, and says ‘maybe the universe is not that legible.’ I essentially read this as Tyler engaging in superintelligence denialism, consistent with his idea that humans with very high intelligence are themselves overrated, and saying that there is no meaningful sense in which intelligence can much exceed generally smart human level other than perhaps literal clock speed.

  20. A lot of this, that I see from many economists, seems to be based on the idea that the world will still be fundamentally normal and respond to existing economic principles and dynamics, and effectively working backwards from there, although of course it is not framed or presented that way. Thus intelligence and other AI capabilities will ‘face bottlenecks’ and regulations that they will struggle to overcome, which will doubtless be true, but I think gets easily overrun or gone around at some point relatively quickly.

  21. (39:00) Tyler asks: is more intelligence likely to be good or bad against existential risk? And says he thinks it is more likely to be good. There are several ways to respond with ‘it depends.’ The first is that, while I would very much be against this as a strategy of course, if we had never been as intelligent as we actually are, such that we never industrialized, then we would not face substantial existential risks except over very long time horizons. Talk of asteroid impacts is innumerate; without burning coal we wouldn’t be worried about climate, nuclear and biological threats and AI would be irrelevant, and fertility would remain high.

  22. Then on the flip side of adding more intelligence, I agree that adding more actually human intelligence will tend to be good, so the question again is how to think about this new intelligence and how it will get directed and to what extent we will remain in control of it and of what happens, and so on. How exactly will this new intelligence function and to what extent will it be on our behalf? Of course I have said much of this before as has Tyler, so I will stop there.

  23. The idea that AI potentially prevents other existential risks is of course true. It also potentially causes them. We are (or should be) talking price. As I have said before, if AI posed a non-trivial but sufficiently low existential risk, its upsides including preventing other risks would outweigh that.

  24. (40:30) Tyler made an excellent point here, that market participants notice a lot more than the price level. They care about size, about reaction speed and more, and take in the whole picture. The details teach you so much more. This is also another way of illustrating that the efficient market hypothesis is false.

  25. How do some firms improve over time? It is a challenge for my model of Moral Mazes that there are large, centuries-old Japanese and Dutch companies. It means there is at least some chance to reinvigorate such companies, or methods that can establish succession and retain leadership that can contain the associated problems. I would love to see more attention paid to this. The fact that Israel and the United States only have young firms and have done very well on economic growth suggests the obvious counterargument.

  26. I love the point that a large part of the value of free trade is that it bankrupts your very worst firms. Selection is hugely important.

  27. (48:00) Tyler says we should treat children better and says we have taken quite a few steps in that direction. I would say that we are instead treating children vastly worse. Children used to have copious free time and extensive freedom of movement, and now they lack both. If they do not adhere to the programs, we increasingly put them on medication and under tremendous pressure. The impacts of smartphones and social media are also ‘our fault.’ There are other ways in which we treat them better, in particular not tolerating corporal punishment or other forms of what we now consider abuse. Child labor is a special case, where we have gone from forcing children to do productive labor in often terrible ways to instead forcing children to do unproductive labor in often terrible ways, and also banning children from doing productive labor for far too long, which is its own form of horrific. But of course most people will say that today’s abuses are fine and yesterday’s are horrific.

  28. Mill getting elected to Parliament I see as less reflecting differential past ability for a top intellectual to win an election, and more a reflection of his willingness to put himself up for the office and take one for the team. I think many of our best intellectuals could absolutely make it to Congress if they cared deeply about making it to Congress, but that they (mostly wisely) choose not to do that.

  29. (53:00) Smith, despite millennia of persistently slow growth if there was any at all, noticed that economic growth was coming, by observing a small group and seeing those dynamics as the future. The parallels to AI are obvious, and Patel asks about it. Cowen says that to Smith 10% growth would likely be inconceivable, and he wouldn’t predict it because it would just shock him. I think this is right, and I also believe a lot of current economists are taking exactly that mental step today.

  30. Cowen also says he finds 10% growth for decades on end implausible. I would agree that seems unlikely, but I would say that not because it is too high but because you would then see such growth accelerate if it failed to rapidly hit a hard wall or cause a catastrophe, not because there would be insufficient room for continued growth. I do think his point that GDP growth ceases to be a good measure under sufficiently large level changes is sensible.

  31. I am curious how he would think about all these questions with regard to for example China’s emergence in the late 20th century. China has grown at 9% a year since 1978, so it is an existence proof that this can happen for some time. In some sense you can think of growth under AI potentially as a form of catch-up growth as well, in the sense that AI unlocks a superior standard of technological, intellectual and physical capacity for production (assuming the world is somehow recognizable at all) and we would be adapting to it.
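The compounding behind those growth claims is worth making concrete. A minimal sketch (the 9% and 10% rates are the ones cited above; the 45-year span for China is my rough count from 1978, not a figure from the podcast):

```python
# Compound growth: total expansion factor after sustained annual growth.
def growth_multiple(rate: float, years: int) -> float:
    """Expansion factor after `years` of constant annual growth `rate`."""
    return (1 + rate) ** years

# China's cited ~9% average growth, compounded over roughly 45 years (1978 on):
china = growth_multiple(0.09, 45)        # on the order of a 48x expansion

# The '10% for decades' scenario Cowen calls implausible, over 30 years:
ai_scenario = growth_multiple(0.10, 30)  # on the order of 17x
```

Either figure implies an economy transformed beyond easy recognition, which is part of why GDP growth stops being a clean measure at such level changes.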

  32. Tyler asks: If you had the option to buy from today’s catalogue or the Sears catalogue from 1905 and had $50,000 to spend, which would you choose? He points out you have to think about it, which indeed you do if this is to be your entire consumption bundle. If you are allowed trade, of course, it is a very easy decision, you can turn that $50,000 into vastly more.

  33. (1:05:00) Dwarkesh states my exact perspective on Tyler’s thinking: that he is excellent on GPT-5 level stuff, then seems (in my words, not his) to hit a wall, and fails (in Dwarkesh’s words) to take all his wide-ranging knowledge and extrapolate. That seems exactly right to me, that there is an assumption of normality of sorts, and when we get to the point where normality as a baseline stops making sense, the predictions stop making sense. Tyler responds saying he writes about AI a lot and shares ideas when he has them, and I don’t doubt those claims, but it does not address the point. I like that Dwarkesh asked the right question, and also realized that it would not be fruitful to pursue it once Tyler dodged answering. Dwarkesh has GOAT-level podcast question game.

  34. Should we subsidize savings? Tyler says he will come close to saying yes; at minimum we should stop taxing savings, which I agree with. He warns that the issue with subsidizing savings is that it is regressive and would be seen as unacceptable.

  1. (1:14:00) Tyler worries about the fragile world hypothesis, not in terms of what AI could do but in terms of what could be done with… cheap energy? He asks what would happen if a nuclear bomb costs $50k. Which is a great question, but seems rather odd to worry about it primarily in terms of cost of energy?

  2. Tyler notes that due to intelligence we are doing better than the other great apes. I would reply that this is very true, that being the ape with the most intelligence has gone very well for us, and perhaps we should hesitate to create something that in turn has more intelligence than we do, for similar reasons?

  3. He says the existential risk people say ‘we should not risk all of this’ for AI, and that this is not how you should view history. Well, all right, then let’s talk price?

  4. Tyler thinks there is a default outcome of retreating to a kind of Medieval Balkans style existence with a much lower population ‘with or without AI.’ The with or without part really floors me, and makes me more confident that when he thinks about AI he simply is not pondering what I am pondering, for whatever reason, at all? But the more interesting claim is that, absent ‘going for it’ via AI, we face this kind of outcome.

  5. Tyler says things are hard to control, that we cannot turn back (and that we ‘chose a decentralized world well before humans even existed’) and such, although he does expect us to turn back via the decline scenario? He calls for some set of nations to establish dominance in AI, to at least buy us some amount of time. In some senses he has a point, but he seems to be doing some sort of conflation of the motte and the bailey here. Clearly some forms of centralization are possible.

  6. By calling for nations such as America and the UK to establish dominance in this way, he must mean for particular agents within those nations to establish that dominance. It is not possible for every American to have root access and model weights and have that stay within America, or be functionally non-decentralized in the way he sees as necessary here. It could be the governments themselves, a handful of corporations or a combination or synthesis thereof. I would note this is, among other things, entirely incompatible with open model weights for frontier systems, and will require a compute monitoring regime.

  7. It certainly seems like Tyler is saying that we need to avoid misuse and proliferation of sufficiently capable AI systems at the cost of establishment of hegemonic control over AI, with all that implies? There is ultimately remarkable convergence of actual models of the future and of what is to be done, on many fronts, even without Tyler buying the full potential of such systems or thinking their consequences fully through. But notice the incompatibility of American dominance in AI with the idea of everyone’s AIs engaging in Hayekian commerce under a distinct ecosystem, unless you think that there is some form of centralized control over those AIs and access to them. So what exactly is he actually proposing? And how does he propose that we lay the groundwork now in order to get there?

  1. I get a mention and am praised as super smart which is always great to hear, but in the form of Tyler once again harping on the fact that when China came out saying they would require various safety checks on their AIs, I and others pointed out that China was open to potential cooperation and was willing to slow down its AI development in the name of safety even without such cooperation. He says that I and others said “see, China is not going to compete with us, we can shut AI down.”

So I want to be clear: That is simply not what I said or was attempting to convey.

I presume he is in particular referring to this:

Zvi Mowshowitz (April 19, 2023): Everyone: We can’t pause or regulate AI, or we’ll lose to China.

China: All training data must be objective, no opinions in the training data, any errors in output are the provider’s responsibility, bunch of other stuff.

I look forward to everyone’s opinions not changing.

[I quote tweeted MMitchell saying]: Just read the draft Generative AI guidelines that China dropped last week. If anything like this ends up becoming law, the US argument that we should tiptoe around regulation ‘cos China will beat us will officially become hogwash. Here are some things that stood out…

So in this context, Tyler and many others were claiming that if we did any substantive regulations on AI development we risked losing to China.

I was pointing out that China was imposing substantial regulations for its own reasons. These requirements, even if ultimately watered down, would be quite severe restrictions on their ability to deploy such systems.

The intended implication was that China clearly was not going to go full speed ahead with AI, they were going to impose meaningfully restrictive regulations, and so it was silly to say that unless we imposed zero restrictions we would ‘lose to China.’ And also that perhaps China would be open to collaboration if we would pick up the phone.

And yes, that we could pause the largest AI training runs for some period of time without substantively endangering our lead, if we choose to do that. But the main point was that we could certainly do reasonable regulations.

The argument was not that we could permanently shut down all AI development forever without any form of international agreement, and China and others would never move forward or never catch up to that.

I believe the rest of 2023 actually bore out that China’s restrictions in various ways have mattered a lot; that even within AI specifically they have imposed more meaningful barriers than we have; that they remain quite behind; and that they have shown willingness to sit down and talk on several occasions, including the UK Summit, the agreement on nuclear weapons and AI, a recent explicit statement of the importance of existential risk, and more.

Tyler also says we seem to have “zero understanding of some properties of decentralized worlds.” On many such fronts I would strongly deny this, I think we have been talking extensively about these exact properties for a long time, and treating them as severe problems to finding any solutions. We studied game theory and decision theory extensively, we say ‘coordination is hard’ all the time, we are not shy about the problem that places like China exist. Yes, we think that such issues could potentially be overcome, or at least that if we see no other paths to survival or victory that we need to try, and that we should not treat ‘decentralized world’ as a reason to completely give up on any form of coordination and assume that we will always be in a fully competitive equilibrium where everyone defects.

Based on his comments in the last two minutes, perhaps instead the thing he thinks we do not understand is that the AI itself will naturally and inevitably also be decentralized, and there will not be only one AI? But again that seems like something we talk about a lot, and something I actively try to model and think about a lot, and try to figure out how to deal with or prevent the consequences. This is not a neglected point.

There are also the cases made by Eliezer and others that, with sufficiently advanced decision theory and game theory, the ability to model others or share source code, and the ability to generate agents with high correlations and high overlap of interests and identification and other such affordances, coordination between various entities becomes more practical. Thus we should indeed expect that a world with sufficiently advanced agents will act in a centralized fashion even if it started out decentralized. But that is not a failure to understand the baseline outcome absent such new affordances. I think you have to put at least substantial weight on those possibilities.

Tyler once warned me – wisely and helpfully – in an email, that I was falling too often into strawmanning or caricaturing opposing views and needed to be careful to avoid that. I agree, and have attempted to take those words to heart; the fact that I could say many others do vastly worse, both to views I hold and to many others, is irrelevant. I am of course not perfect at this, but I do what I can, and I think I do substantially less of it than I would be doing absent his note.

Then he notes that Eliezer made a tweet that Tyler thinks probably was not a joke – one that I distinctly remember, and that was 100% very much a joke – that the AI could read all the legal code and threaten us with enforcement of the legal system. Tyler says Eliezer does not seem to understand how screwed up the legal system is, talking about how this would cause very long courtroom waits and would be impractical and so on.

That’s the joke. The whole point was that the legal system is so screwed up that it would be utterly catastrophic if we actually enforced it, and also that this is bad. Eliezer is constantly tweeting and talking, independently of AI, about how screwed up the legal system is; if you follow him it is rather impossible to miss. There are also lessons here about the potential misalignment of what is socially and verbally affirmed with what we actually want to happen, and also an illustration of the fact that a sufficiently capable AI would have lots of different forms of leverage over humans. It works on many levels. I laughed at the time, and knew it was a joke without being told. It was funny.

I would say to him, please try to give a little more benefit of the doubt, perhaps?

  1. Tyler predicts that until there is an ‘SBF-like’ headline incident, the government won’t do much of anything about AI even though the smartest people in the government in national security will think we should, and then after the incident we will overreact. If that is the baseline, it seems odd to oppose (as Tyler does) doing anything at all now, as this is how you get that overreaction.

  2. Should we honor past generations more because we want our views to be respected more in the future? Tyler says probably yes, that there is no known philosophically consistent view on this that anyone lives by. I can’t think of one either. He points out the Burke perspective on this is time inconsistent, as you are honoring the recent dead only, which is how most of us actually behave. Perhaps one way to think about this is that we care about the wishes of the dead in the sense that people still alive care about those particular dead, and thus we should honor the dead to the extent that they have a link to those who are alive? Which can in turn pass along through the ages, as A begets B begets C on to Z, and we also care about such traditions as traditions, but that ultimately this fades, faster with some than others? But that if we do not care about that particular person at all anymore, then we also don’t care about their preferences because dead is dead? And on top of that, we can say that there are certain specific things which we feel the dead are entitled to, like a property right or human right, such as their funerals and graves, and the right to a proper burial even if we did not know them at all, and we honor those things for everyone as a social compact exactly to keep that compact going. However, none of this bodes especially well for getting future generations, or especially future AIs, to much care about our quirky preferences in the long run.

  3. Why does Argentina go crazy with the printing press and have hyperinflation so often? Tyler points out this is a mystery. My presumption is that this begets itself. The markets expect it again, although not to the extent they should; I can’t believe (and didn’t at the time) that some of the bond sales over the years actually happened at the prices they got, and this seems like another clear case of the EMH being false. But certainly everyone involved has ‘hyperinflation expectations’ that make it much harder to come back from the brink, and they will be far more tolerant of irresponsible policies that go down such roads in the future, because those policies look relatively responsible, and because, as Tyler asks about, various interest groups presumably are used to capturing more rents than the state can afford. Of course, this can also go the other way; at some point you get fed up with all that, and thus you get Milei.

  4. So weird to hear Tyler talk about the power of American civic virtue, but he still seems right compared to most places. We have so many clearly smart and well meaning people in government, yet it in many ways functions so poorly, as they operate under such severe constraints.

  5. Agreement that in the past economists and other academics were inclined to ask bigger questions, and now they more often ask smaller questions and overspecialize.

  6. (1:29:00) Tyler worries about competing against AI as an academic or thinker, that people might prefer to read what the AI writes for 10-20 years. This seems to me like a clear case of ‘if this is true then we have much bigger problems.’

  7. I love Tyler’s ‘they just say that’ to the critique that you can’t have capitalism with proper moral equality. And similarly with Fukuyama. Tyler says today’s problems are more manageable than those of any previous era, although we might still all go poof. I think that if you judge relative to standards and expectations and what counts as success that is not true, but his statement that we are in the fight and have lots of resources and talent is very true. I would say, we have harder problems that we aim to solve, while also having much better tools to solve them. As he says, let’s do it, indeed. This all holds with or without AI concerns.

  8. Tyler predicts that volatility will go up a lot due to AI. I am trying out two Manifold markets to attempt to capture this.

  9. It seems like Tyler is thinking of greater intelligence in terms of ‘fitting together quantum mechanics and relativity’ and thus thinking it might cap out, rather than thinking about what that intelligence could do in various more practical areas. Strange to see a kind of implicit Straw Vulcan situation.

  10. Tyler says (responding to Dwarkesh’s suggestion) that maybe the impact of AI will be like the impact of Jews in the 20th century, in terms of innovation and productivity, where they were 2% of the population and generated 20% of the Nobel Prizes. That what matters is the smartest model, not how many copies you have (or presumably how fast it can run). So once again, the expectation that the capabilities of these AIs will cap out in intelligence, capabilities and affordances essentially within the human range, even with our access to them to help us go farther? I again don’t get why we would expect that.

  11. Tyler says existential risk is indeed one of the things we should be most thinking about. He would change his position most if he thought international cooperation were very possible or no other country could make AI progress, this would cause very different views. He notices correctly that his perspective is more pessimistic than what he would call a ‘doomer’ view. He says he thinks you cannot ‘just wake up in the morning and legislate safety.’

  12. In the weak sense, well, of course you can do that, the same way we legislate safe airplanes. In the strong sense, well, of course you cannot do that one morning, it requires careful planning, laying the groundwork, various forms of coordination including international coordination and so on. And in many ways we don’t know how to get safety at all, and we are well aware of many (although doubtless not all) of the incentive issues. This is obviously very hard. And that’s exactly why we are pushing now, to lay groundwork now. In particular that is why we want to target large training runs and concentrations of compute and high end chips, where we have more leverage. If we thought you could wake up and do it in 2027, then I would be happy to wait for it.

  13. Tyler reiterates that the only safety possible here, in his view, comes from a hegemon that stays good, which he admits is a fraught proposition on both counts.

  14. His next book is going to be The Marginal Revolution, not about the blog but about the actual revolution, only 40k words. Sounds exciting; I predict I will review it.

So in the end, if you combine his point that he would think very differently if international coordination were possible or others were rendered powerless, his need for a hegemon if we want to achieve safety, and clear preference for the United States (or one of its corporations?) to take that role if someone has to, and his statement that existential risk from AI is indeed one of the top things we should be thinking about, what do you get? What policies does this suggest? What plan? What ultimate world?

As he would say: Solve for the equilibrium.

On Dwarkesh’s 3rd Podcast with Tyler Cowen


Convicted console hacker says he paid Nintendo $25 a month from prison

Crime doesn’t pay —

As Gary Bowser rebuilds his life, fellow Team Xecuter indictees have yet to face trial.

It’s-a me, the long arm of the law.

Aurich Lawson / Nintendo / Getty Images

When 54-year-old Gary Bowser pleaded guilty to his role in helping Team Xecuter with their piracy-enabling line of console accessories, he realized he would likely never pay back the $14.5 million he owed Nintendo in civil and criminal penalties. In a new interview with The Guardian, though, Bowser says he began making $25 monthly payments toward those massive fines even while serving a related prison sentence.

Last year, Bowser was released after serving 14 months of that 40-month sentence (in addition to 16 months of pre-trial detention), which was spread across several different prisons. During part of that stay, Bowser tells The Guardian, he was paid $1 an hour for four-hour shifts counseling other prisoners on suicide watch.

From that money, Bowser says he “was paying Nintendo $25 a month” while behind bars. That lines up roughly with a discussion Bowser had with the Nick Moses podcast last year, where he said he had already paid $175 to Nintendo during his detention.

According to The Guardian, Nintendo will likely continue to take 20 to 30 percent of Bowser’s gross income (after paying for “necessities such as rent”) for the rest of his life.
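The arithmetic behind that "likely never pay back" framing is stark. A quick sketch using only the figures reported in the article:

```python
# Figures from the article: $14.5M owed to Nintendo, $25/month paid in prison,
# prison wage of $1/hour for four-hour shifts.
owed = 14_500_000
monthly_payment = 25

months_to_repay = owed / monthly_payment  # 580,000 months at the prison rate
years_to_repay = months_to_repay / 12     # over 48,000 years

# One $25 payment represented more than six full four-hour shifts of work.
shifts_per_payment = monthly_payment / (1 * 4)  # 6.25 shifts
```

Even the post-release garnishment of 20 to 30 percent of gross income would need to run against an implausibly large lifetime income to clear the principal, which is presumably why the payments are best understood as symbolic.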

The fall guy?

While people associated with piracy often face fines rather than prison, Nintendo lawyers were upfront that they pushed for jail time for Bowser to “send a message that there are consequences for participating in a sustained effort to undermine the video game industry.” That seems to have been effective, at least as far as Bowser’s concerned; he told The Guardian that “The sentence was like a message to other people that [are] still out there, that if they get caught … [they’ll] serve hard time.”

Bowser appears on the Nick Moses Gaming Podcast from a holding center in Washington state in 2023.

Nick Moses 05 Gaming Podcast/YouTube

But Bowser also maintains that he wasn’t directly involved with the coding or manufacture of Team Xecuter’s products, and only worked on incidental details like product testing, promotion, and website coding. Speaking to Ars in 2020, Aurora, a writer for hacking news site Wololo, described Bowser as “kind of a PR guy” for Team Xecuter. Despite this, Bowser said taking a plea deal on just two charges saved him the time and money of fighting all 14 charges made against him in court.

Bowser was arrested in the Dominican Republic in 2020. Fellow Team Xecuter member and French national Max “MAXiMiLiEN” Louarn, who was indicted and detained in Tanzania at the same time as Bowser’s arrest, was still living in France as of mid-2022 and has yet to be extradited to the US. Chinese national and fellow indictee Yuanning Chen remains at large.

“If Mr. Louarn comes in front of me for sentencing, he may very well be doing double-digit years in prison for his role and his involvement, and the same with the other individual [Chen],” US District Judge Robert Lasnik said during Bowser’s sentencing.

Returning to society

During his stay in prison, Bowser tells The Guardian that he suffered a two-week bout of COVID that was serious enough that “a priest would come over once a day to read him a prayer.” A bout of elephantiasis also left him unable to wear a shoe on his left foot and required the use of a wheelchair, he said.

Now that he’s free, Bowser says he has been relying on friends and a GoFundMe page to pay for rent and necessities as he looks for a job. That search could be somewhat hampered by his criminal record and by terms of the plea deal that prevent him from working with any modern gaming hardware.

Despite this, Bowser told The Guardian that his current circumstances are still preferable to a period of homelessness he experienced during his 20s. And while console hacking might be out for Bowser, he is reportedly still “tinkering away with old-school Texas Instruments calculators” to pass the time.

Convicted console hacker says he paid Nintendo $25 a month from prison Read More »

agencies-using-vulnerable-ivanti-products-have-until-saturday-to-disconnect-them

Agencies using vulnerable Ivanti products have until Saturday to disconnect them

TOUGH MEDICINE —

Things were already bad with two critical zero-days. Then Ivanti disclosed a new one.

Photograph depicts a security scanner extracting a virus from a string of binary code.

Getty Images

Federal civilian agencies have until midnight Saturday morning to sever all network connections to Ivanti VPN software, which is currently under mass exploitation by multiple threat groups. The US Cybersecurity and Infrastructure Security Agency mandated the move on Wednesday after disclosing three critical vulnerabilities in recent weeks.

Three weeks ago, Ivanti disclosed two critical vulnerabilities that it said threat actors were already actively exploiting. The attacks, the company said, targeted “a limited number of customers” using the company’s Connect Secure and Policy Secure VPN products. Security firm Volexity said on the same day that the vulnerabilities had been under exploitation since early December. Ivanti didn’t have a patch available and instead advised customers to follow several steps to protect themselves against attacks. Among the steps was running an integrity checker the company released to detect any compromises.

Almost two weeks later, researchers said the zero-days were under mass exploitation in attacks that were backdooring customer networks around the globe. A day later, Ivanti failed to make good on an earlier pledge to begin rolling out a proper patch by January 24. The company didn’t start the process until Wednesday, a week after the deadline it set for itself.

And then, there were three

Ivanti disclosed two new critical vulnerabilities in Connect Secure on Wednesday, tracked as CVE-2024-21888 and CVE-2024-21893. The company said that CVE-2024-21893—a class of vulnerability known as a server-side request forgery—“appears to be targeted,” bringing the number of actively exploited vulnerabilities to three. German government officials said they had already seen successful exploits of the newest one. The officials also warned that exploits of the new vulnerabilities neutralized the mitigations Ivanti advised customers to implement.

Hours later, the Cybersecurity and Infrastructure Security Agency—typically abbreviated as CISA—ordered all federal agencies under its authority to “disconnect all instances of Ivanti Connect Secure and Ivanti Policy Secure solution products from agency networks” no later than 11:59 pm on Friday. Agency officials set the same deadline for the agencies to complete the Ivanti-recommended steps, which are designed to detect if their Ivanti VPNs have already been compromised in the ongoing attacks.

The steps include:

  • Identifying any additional systems connected or recently connected to the affected Ivanti device
  • Monitoring the authentication or identity management services that could be exposed
  • Isolating the systems from any enterprise resources to the greatest degree possible
  • Continuing to audit privilege-level access accounts
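As a toy illustration of the first step, the kind of triage involved can be sketched with standard shell tools. Everything here is hypothetical: the appliance address, the log file name, and the log format (timestamp, action, source IP, destination IP) are all invented for the example.

```shell
#!/bin/sh
# Hypothetical appliance address and firewall log; in practice these would
# come from your own inventory and logging pipeline.
IVANTI_IP="203.0.113.10"
cat > fw.log <<'EOF'
2024-02-01T00:01 ALLOW 10.0.0.5 203.0.113.10
2024-02-01T00:02 ALLOW 10.0.0.7 203.0.113.10
2024-02-01T00:03 ALLOW 10.0.0.5 203.0.113.10
2024-02-01T00:04 ALLOW 10.0.0.9 198.51.100.1
EOF

# List unique internal hosts that have connected to the appliance
# (dots in the IP are left unescaped for brevity).
grep " $IVANTI_IP\$" fw.log | awk '{print $3}' | sort -u
```

In a real environment the same one-liner pattern would run against exported firewall or NetFlow data, feeding the resulting host list into the isolation and credential-audit steps that follow.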

The directive went on to say that before agencies can bring their Ivanti products back online, they must follow a long series of steps that include factory resetting their system, rebuilding them following Ivanti’s previously issued instructions, and installing the Ivanti patches.

“Agencies running the affected products must assume domain accounts associated with the affected products have been compromised,” Wednesday’s directive said. Officials went on to mandate that, by March 1, agencies reset passwords “twice” for on-premises accounts, revoke Kerberos-enabled authentication tickets, and then revoke tokens for cloud accounts in hybrid deployments.

Steven Adair, the president of Volexity, the security firm that discovered the initial two vulnerabilities, said its most recent scans indicate that at least 2,200 customers of the affected products have been compromised to date. He applauded CISA’s Wednesday directive.

“This is effectively the best way to alleviate any concern that a device might still be compromised,” Adair said in an email. “We saw that attackers were actively looking for ways to circumvent detection from the integrity checker tools. With the previous and new vulnerabilities, this course of action around a completely fresh and patched system might be the best way to go for organizations to not have to wonder if their device is actively compromised.”

The directive is binding only on agencies under CISA’s authority. Any user of the vulnerable products, however, should follow the same steps immediately if they haven’t already.

Agencies using vulnerable Ivanti products have until Saturday to disconnect them Read More »

cops-arrest-17-year-old-suspected-of-hundreds-of-swattings-nationwide

Cops arrest 17-year-old suspected of hundreds of swattings nationwide

Coordinated effort —

Police traced swatting calls to teen’s home IP addresses.


Enlarge / Booking photo of Alan Filion, charged with multiple felonies connected to a “swatting” incident at the Masjid Al Hayy Mosque in Sanford, Florida.

Police suspect that a 17-year-old from California, Alan Filion, may be responsible for “hundreds of swatting incidents and bomb threats” targeting the Pentagon, schools, mosques, FBI offices, and military bases nationwide, CNN reported.

Swatting occurs when fraudulent calls to police trigger emergency response teams to react forcefully to non-existent threats.

Recently extradited to Florida, Filion was charged with multiple felonies after the Seminole County Sheriff’s Office (SCSO) traced a call where Filion allegedly claimed to be a mass shooter entering the Masjid Al Hayy Mosque in Sanford, Florida. The caller played “audio of gunfire in the background,” SCSO said, while referencing Satanism and claiming he had a handgun and explosive devices.

Approximately 30 officers responded to the call in May 2023, then determined it was a swatting incident after finding no shooter and confirming that mosque staff was safe. In a statement, SCSO Sheriff Dennis Lemma said that “swatting is a perilous and senseless crime, which puts innocent lives in dangerous situations and drains valuable resources” by prompting a “substantial law enforcement response.”

Seminole County authorities coordinated with the FBI and Department of Justice to track the alleged “serial swatter” down, ultimately arresting Filion on January 18. According to SCSO, police were able to track down Filion after he allegedly “created several accounts on websites offering swatting services” that were linked to various IP addresses connected to his home address. The FBI then served a search warrant on the residence and found “incriminating evidence.”

Filion has been charged as an adult for a variety of offenses, including making a false report while facilitating or furthering an act of terrorism. He is currently being detained in Florida, CNN reported.

Earlier this year, Sen. Rick Scott (R-Fla.) introduced legislation to “crack down” on swattings after he became a target at his home in December. If passed, the Preserving Safe Communities by Ending Swatting Act would impose strict penalties, including a maximum sentence of 20 years in prison for any swatting that leads to serious injuries. If a swatting results in death, perpetrators risk a life sentence. That bill is currently under review by the House Judiciary Committee.

“We must send a message to the cowards behind these calls—this isn’t a joke, it’s a crime,” Scott said.

Last year, Sen. Chuck Schumer (D-NY) warned that an “unprecedented wave” of swatting attacks in just two weeks had targeted 11 states, including more than 200 schools across New York. In response, Schumer called for over $10 million in FBI funding to “specifically tackle the growing problem of swatting.”

Schumer said it was imperative that the FBI begin tracking the incidents more closely, not just to protect victims from potentially deadly swattings, but also to curb costs to law enforcement and prevent unnecessary delays of emergency services tied up by hoax threats.

As a result of Schumer’s push, the FBI announced it would finally begin tracking swatting incidents nationwide. Hundreds of law enforcement agencies and police departments now rely on an FBI database to share information on swatting incidents.

Coordination appears to be key to solving these cases. Lemma noted that SCSO has an “unwavering dedication” to holding swatters accountable, “regardless of where they are located.” His office confirmed that investigators suspect that Filion may have also been behind “other swatting incidents” across the US. SCSO said that it will continue coordinating with local authorities investigating those incidents.

“Make no mistake, we will continue to work tirelessly in collaboration with our policing partners and the judiciary to apprehend swatting perpetrators,” Lemma said. “Gratitude is extended to all agencies involved at the local, state, and federal levels, and this particular investigation and case stands as a stern warning: swatting will face zero tolerance, and measures are in place to identify and prosecute those responsible for such crimes.”

Cops arrest 17-year-old suspected of hundreds of swattings nationwide Read More »

fcc-to-declare-ai-generated-voices-in-robocalls-illegal-under-existing-law

FCC to declare AI-generated voices in robocalls illegal under existing law

AI and robocalls —

Robocalls with AI voices to be regulated under Telephone Consumer Protection Act.

Illustration of a robot wearing a headset for talking on the phone.

Getty Images | Thamrongpat Theerathammakorn

The Federal Communications Commission plans to vote on making the use of AI-generated voices in robocalls illegal. The FCC said that AI-generated voices in robocalls have “escalated during the last few years” and have “the potential to confuse consumers with misinformation by imitating the voices of celebrities, political candidates, and close family members.”

FCC Chairwoman Jessica Rosenworcel’s proposed Declaratory Ruling would rule that “calls made with AI-generated voices are ‘artificial’ voices under the Telephone Consumer Protection Act (TCPA), which would make voice cloning technology used in common robocalls scams targeting consumers illegal,” the commission announced yesterday. Commissioners reportedly will vote on the proposal in the coming weeks.

A recent anti-voting robocall used an artificially generated version of President Joe Biden’s voice. The calls told Democrats not to vote in the New Hampshire Presidential Primary election.

An analysis by the company Pindrop concluded that the artificial Biden voice was created using a text-to-speech engine offered by ElevenLabs. That conclusion was apparently confirmed by ElevenLabs, which reportedly suspended the account of the user who created the deepfake.

FCC ruling could help states crack down

The TCPA, a 1991 US law, bans the use of artificial or prerecorded voices in most non-emergency calls “without the prior express consent of the called party.” The FCC is responsible for writing rules to implement the law; violations are punishable with fines.

As the FCC noted yesterday, the TCPA “restricts the making of telemarketing calls and the use of automatic telephone dialing systems and artificial or prerecorded voice messages.” Telemarketers are required “to obtain prior express written consent from consumers before robocalling them. If successfully enacted, this Declaratory Ruling would ensure AI-generated voice calls are also held to those same standards.”

The FCC has been thinking about revising its rules to account for artificial intelligence for at least a few months. In November 2023, it launched an inquiry into AI’s impact on robocalls and robotexts.

Rosenworcel said her proposed ruling will “recognize this emerging technology as illegal under existing law, giving our partners at State Attorneys General offices across the country new tools they can use to crack down on these scams and protect consumers.

“AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate,” Rosenworcel said. “No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.”

FCC to declare AI-generated voices in robocalls illegal under existing law Read More »

google’s-pixel-storage-issue-fix-requires-developer-tools-and-a-terminal

Google’s Pixel storage issue fix requires developer tools and a terminal

Stagefright’s revenge —

Automatic updates broke your phone; the fix is a highly technical manual process.


Google has another fix for the second major storage bug Pixel phones have seen in the last four months. Last week, reports surfaced that some Pixel owners were being locked out of their phone’s local storage, creating a nearly useless phone with all sorts of issues. Many blamed the January 2024 Google Play system update for the issue, and yesterday, Google confirmed that hypothesis. Google posted an official solution to the issue on the Pixel Community Forums, but there’s no user-friendly solution here. Google’s automatic update system broke people’s devices, but the fix is completely manual, requiring users to download the developer tools, install drivers, change settings, plug in their phones, and delete certain files via a command-line interface.

The good news is that, if you’ve left your phone sitting around in a nearly useless state for the last week or two, following the directions means you won’t actually lose any data. Having a week or two of downtime is not acceptable to a lot of people, though, and several users replied to the thread saying they had already wiped their device to get their phone working again and had to deal with the resulting data loss (despite many attempts and promises, Android does not have a comprehensive backup system that works).

The bad news is that I don’t think many normal users will be able to follow Google’s directions. First, you’ll need to perform the secret action to enable Android’s Developer Options (you tap on the build number seven times). Then, you have to download Google’s “SDK Platform-Tools” zip file, which is meant for app developers. After that, plug in your phone, switch to the correct “File transfer” connection mode, open a terminal, navigate to the platform-tools folder, and run both “./adb uninstall com.google.android.media.swcodec” and “./adb uninstall com.google.android.media.” Then reboot the phone and hope that works.

I skipped a few steps (please read Google’s instructions if you’re trying this), but that’s the basic gist of it. The tool Google is having people use is “ADB,” or the “Android Debug Bridge.” This is meant to give developers command-line access to their phones, which allows them to quickly push new app builds to the device, get a readout of system logs, and turn on special developer flags for various testing.
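Condensed into a terminal transcript, the core of the fix looks like the following. This is a sketch only — Google’s forum post covers the full procedure, including driver setup and the correct USB mode — and it assumes Developer Options and USB debugging are already enabled and the phone is connected in “File transfer” mode.

```shell
# Run from inside the extracted SDK Platform-Tools folder.
./adb devices                                      # phone should be listed as "device"
./adb uninstall com.google.android.media.swcodec   # roll back the updated codec module
./adb uninstall com.google.android.media           # roll back the updated media module
./adb reboot                                       # reboot; local storage should be accessible again
```

If `adb devices` shows the phone as “unauthorized” or not at all, that’s the driver problem described below, and no amount of re-running the uninstall commands will help until it’s resolved.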

Google’s instructions will only work if everything goes smoothly, and as someone with hundreds of hours in ADB from testing various Android versions, I will guess that it will probably not go smoothly. On Windows, the ADB drivers often don’t install automatically. Instead, you’ll get “unknown device” or some other incorrect device detection, and you won’t be able to run any commands. You usually have to use the “let me pick from drivers on my computer” option, browse through your file system, and manually “select” (more like “guess”) the driver you need while clicking through various warnings. You can already see at least one user with driver issues in the thread, with Windows telling them, “Your device has malfunctioned,” when really it just needs a driver.

Google’s Pixel storage issue fix requires developer tools and a terminal Read More »

hulu,-disney+-password-crackdown-kills-account-sharing-on-march-14

Hulu, Disney+ password crackdown kills account sharing on March 14

profit push —

New subscribers are already banned from sharing logins outside their household.

Enlarge / Selena Gomez and Martin Short on the set of Only Murders in the Building on February 14, 2022, in New York City.

Hulu and Disney+ subscribers have until March 14 to stop sharing their login information with people outside of their household. Disney-owned streaming services are the next to adopt the password-crackdown strategy that has helped Netflix add millions of subscribers.

An email sent from “The Hulu Team” to subscribers yesterday and viewed by Ars Technica tells customers that Hulu is “adding limitations on sharing your account outside of your household.”

Hulu’s subscriber agreement, updated on January 25, now states that users “may not share your subscription outside of your household,” with household being defined as the “collection of devices associated with your primary personal residence that are used by the individuals who reside therein.”

The updated terms also note that Hulu might scrutinize user accounts to ensure that the accounts aren’t being used on devices located outside of the subscriber’s residence:

We may, in our sole discretion, analyze the use of your account to determine compliance with this Agreement. If we determine, in our sole discretion, that you have violated this Agreement, we may limit or terminate access to the Service and/or take any other steps as permitted by this Agreement (including those set forth in Section 6 of this Agreement).

Section 6 of Hulu’s subscriber agreement says Hulu can “restrict, suspend, or terminate” access without notice.

Hulu didn’t respond to a request for comment on how exactly it will “analyze the use” of accounts. But Netflix, which started its password crackdown in March 2022 and brought it to the US in May 2023, says it uses “information such as IP addresses, device IDs, and account activity to determine whether a device signed in to your account is part of your Netflix Household” and doesn’t collect GPS data from devices.

According to the email sent to Hulu subscribers, the policy will apply immediately to people subscribing to Hulu from now on.

The updated language in Hulu’s subscriber agreement matches what’s written in the Disney+/ESPN+ subscriber agreement, which was also updated on January 25. Disney+’s password crackdown first started in November in Canada.

A Disney spokesperson confirmed to Ars Technica that Disney+ subscribers have until March 14 to comply. The rep also said that notifications were sent to Disney+’s US subscribers yesterday, although it’s possible that some subscribers didn’t receive an email alert, as is the case with a subscriber in my household.

The representative didn’t respond to a question asking how Disney+ will “analyze” user accounts to identify account sharing.

Push for profits

Disney CEO Bob Iger first hinted at a Disney streaming-password crackdown in August during an earnings call. He highlighted a “significant” amount of password sharing among Disney-owned streaming services and said Disney had “the technical capability to monitor much of this.” The executive hopes a password crackdown will help drive subscribers and push profits to Netflix-like status. Disney is aiming to make its overall streaming services business profitable by the end of 2024.

In November, it was reported that Disney+ had lost $11 billion since launching in November 2019. The streaming service has sought to grow revenue by increasing prices and encouraging users to join its subscription tier with commercials, which is said to bring streaming services higher average revenue per user (ARPU) than non-ad plans.

Hulu, which Disney will soon own completely, has been profitable in the past, and in Disney’s most recent financial quarter, it had a higher monthly ARPU than Disney+. Yet, Hulu has far fewer subscribers than Disney+ (48.5 million versus 150.2 million). Cracking down on Hulu password sharing is an obvious way for Disney to try to squeeze more money from the more financially successful streaming service.

Such moves run the risk of driving away users. However, Hulu, like Netflix, may be able to win over longtime users who have gotten accustomed to having easy access to Hulu, even if they weren’t paying for it. Disney+, meanwhile, is a newer service, so a change in policy may not feel as jarring to some.

Netflix, which allowed account sharing for years, has seen success with its password crackdown, saying in November that the efforts helped it add 8.8 million subscribers. Unlike the Disney-owned streaming services, though, Netflix allows people to add extra members to their non-ad subscription (in the US, Netflix charges $7.99 per person per month).

As Disney embarks on an uphill climb to make streaming successful this year, you can expect it to continue following the leader while also trying to compete with it. Around the same time as the password-sharing ban takes full effect, Disney should also unveil a combined Hulu-Disney+ app, a rare attempt at improving a streaming service that doesn’t center on pulling additional monthly dollars from customers.

Hulu, Disney+ password crackdown kills account sharing on March 14 Read More »