Author name: DJ Henderson


After breach, senators ask why AT&T stores call records on “AI Data Cloud”

A man with an umbrella walking past a building with an AT&T logo.

US senators want AT&T to explain why it stores massive amounts of call and text message records on a third-party analytics platform that bills itself as an “AI Data Cloud.”

AT&T revealed last week that “customer data was illegally downloaded from our workspace on a third-party cloud platform,” and that the breach “includes files containing AT&T records of calls and texts of nearly all of AT&T’s cellular customers.” The third-party platform is Snowflake, and AT&T is one of many Snowflake corporate customers that had data stolen. Ticketmaster is another notable company that had data stolen from its Snowflake account.

AT&T and Snowflake each got letters yesterday from US Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.), the chair and ranking member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. The senators asked AT&T CEO John Stankey to answer a series of questions, including this one:

Why had AT&T retained months of detailed records of customer communication for an extended amount of time and why had AT&T uploaded that sensitive information onto a third party analytics platform? What is AT&T policy, including timelines, concerning retaining and using such information?

AT&T’s disclosures to customers and to the Securities and Exchange Commission didn’t explain how Snowflake is used by AT&T. Snowflake’s website says the company’s cloud platform provides opportunities for collaborating and sharing data:

Powering the AI Data Cloud is Snowflake’s single platform. Its unique architecture connects businesses globally, at practically any scale to bring data and workloads together. Together with the Snowflake Marketplace which simplifies the sharing, collaborating, and monetizing of thousands of datasets, services, and entire data applications—this creates the active and growing AI Data Cloud.

AT&T a featured customer

There was already a public explanation for why AT&T uses Snowflake, but it’s written in marketing speak and isn’t likely to directly answer the senators’ questions. Sometime before the hacks, Snowflake posted a glowing case study on how AT&T lowered costs and gained “faster insights” by switching from internal systems to Snowflake.

Snowflake says it provides a telecom-focused AI Data Cloud service that helps firms like AT&T “improve customer experiences, maximize operational efficiency and increase profitability by reducing costs and monetizing new data products.” AT&T’s decision to move data to Snowflake apparently allowed it to abandon “complex on-premises systems, including Hadoop” that “were slowing down business.”

“The Snowflake Data Cloud has given us the power to harness and integrate data to create insights,” AT&T Chief Data Officer Andy Markus is quoted as saying in the promotional material. “With data at our fingertips, we are growing revenue, becoming more cost effective and, most importantly, improving the customer experience.”

Markus said the previous internal system made it hard to collaborate with other companies. “Prior to Snowflake, we had a very complex data environment on-premises,” Markus said. “That led to a more ineffective operating environment for our business partners, both from a speed and cost perspective.”

With Snowflake, AT&T is said to have “a powerful, easy-to-use data management system that efficiently processes hundreds of petabytes of data every day.” This makes it easier to share data.

“Using Hadoop for storage and processing, AT&T’s monolithic on-premises data warehouse hampered the team from collecting, storing, sharing and processing its vast stores of data,” the customer case study said. “By moving to the Snowflake Telecom Data Cloud, Markus and his team achieved their goal of democratizing data across the business.”

Snowflake boasted that because of its cloud platform, “this leading telecom provider uses data to advance innovation, create new revenue streams, optimize operations and, most importantly, better connect people to their world.”

AT&T said it uses “trusted” cloud providers

When contacted by Ars today, AT&T provided a statement in response to the senators’ questions about its use of Snowflake. “Like most companies that deal with large amounts of data, AT&T often uses specialized and trusted cloud services platforms for various functions. These platforms enable companies to work with large amounts of data in a centralized place. In this case, AT&T had put a copy of the data on the third-party platform for analysis related to our business,” AT&T told us.

AT&T added that it “analyzes historical customer data for uses that include network planning, capacity utilization, and developing new services and offers.”

AT&T did not provide specifics on how long it retains data. “We set our data retention periods depending on the type of personal information, how long it is needed to operate the business or provide our products and services, and whether it is subject to contractual or legal obligations. These obligations might be ongoing litigation, mandatory data retention laws, or government orders to preserve data for an investigation,” the company said today.

We also asked Snowflake for details on exactly how phone companies use its platform. A Snowflake spokesperson did not answer our question but told us that the company will respond directly to the senators.



Real, actual Markdown support is arriving in Google Docs, not a moment too soon

### _Finally!_ —

It’s a big day for typing in plain-text fashion, for the good of syntax.

In goes the sensible characters, out goes a document for which you almost always have to adjust the sharing permissions.

Aurich Lawson | Getty Images

The best time to truly implement the Markdown markup language into Google Docs was in the early 2010s, but yesterday was a pretty good time, too.

Google Docs was born from the conjoined features of a series of software company acquisitions (Writely, DocVerse, and QuickOffice), plus the remains of Google Wave, smooshed together into Drive by 2012. By that point, Markdown, a project of web writer John Gruber with input from data activist Aaron Swartz, had been solidified and gathering steam for about eight years. Then, for another decade or so, writing in Markdown and writing in Google Docs were two different things, joined together only through browser extensions or onerous import/export tools. An uncountable number of cloud-syncing, collaboration-friendly but Markdown-focused writing tools flourished in that chasm.

In early 2022, the first connecting plank was placed: Docs could “Automatically detect Markdown,” if you enabled it. This expanded the cursory support for numbered and unordered lists and checkboxes to the big items, like headlines, italics, bold, strikethrough, and links. You could write in Markdown in Docs, but you could not paste, nor could you import or export between Docs and Markdown styling.

Now, or at some point in the next 14 days, real, actual Markdown work can be done in Google Docs. Docs can convert Markdown text to its equivalent Docs formatting on paste or when imported as a file, and it can export to Markdown from the copy menu or as a file. Google’s blog post notes that this is “particularly useful for technical content writers as they can now convert Docs content to/from Markdown,” so as to use Google’s always-on syncing and collaboration in the interim stages.

As someone who doesn’t work as a technical content writer (at least in proper job title fashion), but does write a lot, allow me to say that this is also particularly useful for people who adopted Markdown as a kind of One True Style. It is hard to avoid being invited to collaborate on Google Docs, even if you primarily work elsewhere. It is even harder to remember all the different shortcuts for headlines, bullet points, and other text elements across various apps, web apps, content management systems, IDEs, and other writing platforms.

There’s no indication of which flavor of Markdown Google’s import and export functions will hew to, and Ars was unable to test the new function as of July 17. Markdown is intentionally not fully standardized by its author, leading to some kerfuffles and, eventually, an understanding that each version, like GitHub Flavored Markdown, has its own additions and changes.
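
For a sense of what the import half of such a conversion involves, here is a minimal sketch using Python’s third-party markdown package as a stand-in. To be clear, this is not what Google Docs uses under the hood; the package choice, sample text, and rendered output in the comments are illustrative assumptions only.

```python
# A minimal sketch of Markdown-to-rich-text conversion, using the third-party
# "markdown" package (pip install markdown) purely as an illustration; it is
# not what Google Docs is built on, and the output shown below is approximate.
import markdown

source = "# Release notes\n\nSome **bold** text, a [link](https://example.com), and ~~strikethrough~~."

print(markdown.markdown(source))
# <h1>Release notes</h1>
# <p>Some <strong>bold</strong> text, a <a href="https://example.com">link</a>, and ~~strikethrough~~.</p>
#
# Note that ~~strikethrough~~ passes through untouched: it is a GitHub Flavored
# Markdown extension rather than part of the original spec, exactly the kind of
# flavor difference that makes "which Markdown?" a real question.
```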

By allowing for import/export, but even better, “Copy as Markdown” and “Paste from Markdown,” Docs is now a place where I can be a Markdown grouch and still play reasonably nice with others. You should see “Enable Markdown” show up in your Docs’ Tools > Preferences menu within the next two weeks.



Five people infected as bird flu appears to go from cows to chickens to humans

Cows and chickens and humans, oh my! —

High temperatures made it hard for workers to use protective gear during culling.


The highly pathogenic avian influenza H5N1 virus that spilled from wild birds into US dairy cows late last year may have recently seeped from a dairy farm in Colorado to a nearby poultry farm, where it then infected five workers tasked with culling the infected chickens.

In a press briefing Tuesday, federal officials reported that four of the avian influenza cases have been confirmed by the Centers for Disease Control and Prevention, while the fifth remains a presumptive positive awaiting CDC confirmation.

All five people have shown mild illnesses, though they experienced variable symptoms. Some of the cases involved conjunctivitis, as was seen in other human cases linked to the H5N1 outbreak in dairy cows. Others in the cluster of five had respiratory and typical flu-like symptoms, including fever, chills, sore throat, runny nose, and cough. None of the five cases required hospitalization.

The virus infecting the five people is closely related to the virus infecting the chickens on the poultry farm, which, in turn, is closely related to virus seen in infected dairy herds and in other human cases that have been linked to the dairy outbreak. The affected poultry farm is in Colorado’s northern county of Weld, which has also reported about two dozen outbreaks of avian influenza in dairy herds.

Dairy to poultry hypothesis

In one fell swoop, Colorado’s poultry farm outbreak has more than doubled the number of human avian influenza cases linked to the dairy cow spillover, bringing the previous tally of four cases to nine. While officials have previously noted instances where it appeared that H5N1 on dairy farms had moved to nearby poultry farms, this appears to be the first time such spread has led to documented human infections.

The link between the poultry farm cases and neighboring dairy farms is still just a hypothesis, however, Nirav Shah, the principal deputy director at the CDC, emphasized to reporters Tuesday. “It is a hypothesis that needs and requires a full investigation. But that is a hypothesis at this point,” he said of the link between the dairy farms and the poultry farm. So far, there is no direct evidence of a specific source of the poultry farm’s infection, and the route of infection is also unclear.

Throughout the outbreak of H5N1 on dairy farms, officials have noted that the primary way the virus appears to spread to new farms is via the movement of cows, people, and machinery between those facilities. There remains no evidence of human-to-human transmission. But milk from infected cows has been found to be brimming with high levels of infectious virus, and milk-contaminated equipment is a prime suspect in the spread.

In the press briefing Tuesday, Eric Deeble, acting senior advisor for H5N1 response with the US Department of Agriculture, noted that poultry are very susceptible to avian influenza and are easily infected. “It does not take much to introduce this into a flock,” Deeble said. The USDA is now working on a “trace-back” investigation into how the Colorado poultry farm was infected.

Searing spread

As for how the farm workers specifically became infected with the virus, health officials pointed to high temperatures that prevented workers from donning protective gear. The poultry farm is a commercial egg layer operation with around 1.8 million birds. Given the presence of bird flu on the premises, all 1.8 million birds need to be culled, aka “depopulated.” This is being carried out using mobile carts with carbon dioxide gas chambers, a common culling method. Workers are tasked with placing the birds in the chambers, which only hold a few dozen birds at a time. In all, the method requires workers to have a high degree of contact with the infected birds, going from bird to bird and batch to batch with the carts.

Amid this grim task, temperatures in the area reached over 100° Fahrenheit, and massive industrial fans were turned on in the facility to try to cool things down. Between the heat and the fans, the approximately 160 people involved in the culling struggled to use personal protective equipment (PPE). The required PPE for the depopulation involves a full Tyvek suit, boots, gloves, goggles, and an N95 respirator.

“The difficulty with wearing all that gear in that kind of heat, you can imagine,” said Julie Gauthier, executive director for field operations at the USDA’s Animal and Plant Health Inspection Service (APHIS). The industrial fans blowing large amounts of air made it yet more difficult for workers to keep goggles and respirators on their faces, she said.

The CDC and the USDA are both involved in further investigations of the poultry farm outbreak. CDC’s Shah noted that the team the agency deployed to Colorado included an industrial hygienist, who can work on strategies to prevent further transmission.

To date, at least 161 herds in 13 states have tested positive for avian influenza since the dairy outbreak was confirmed in March. Since January 2022, when US birds first tested positive for the H5N1 virus, 99 million birds in the US have been affected in 48 states, which involved 1,165 individual outbreaks.



AMD brags about Ryzen 9000’s efficiency, extends AM5 support guarantee to 2027

still processing —

Ryzen 9000 will also have more overclocking headroom, for those interested.

AMD’s Ryzen 9000 launch lineup.

AMD

AMD has released more information about its next-generation Ryzen 9000 processors and their underlying Zen 5 CPU architecture this week ahead of their launch at the end of July. The company reiterated some of the high-level performance claims it made last month—low- to mid-double-digit performance increases over Zen 4 in both single- and multi-threaded tasks. But AMD also bragged about the chips’ power efficiency compared to Ryzen 7000, pointing out that they would reduce power usage despite increasing performance.

Prioritizing power efficiency

AMD said that it has lowered the default power limits for three of the four Ryzen 9000 processors—the Ryzen 5 9600X, the Ryzen 7 9700X, and the Ryzen 9 9900X—compared to the Ryzen 7000 versions of those same chips. Despite the lower default power limits, all three of those chips still boast double-digit performance improvements over their predecessors. AMD also says that Ryzen 9000 CPU temperatures have been reduced by up to 7° Celsius compared to Ryzen 7000 chips at the same settings.

  • Ryzen 9000’s low-double-digit performance gains are coming despite the fact that the company has lowered most of its chips’ default TDPs. These TDP settings determine how much power one of AMD’s CPUs can use (though not necessarily how much they will use).

  • Because the TDPs have been lowered, AMD claims that Ryzen 9000 chips will have a bit more overclocking headroom than Ryzen 7000.

It’s worth noting that we generally tested the original Ryzen 7000 CPUs at multiple power levels, and for most chips—most notably the 7600X and 7700X—we found that the increased TDP levels didn’t help performance all that much in the first place. The TDP lowering in the Ryzen 9000 may be enabled partly by architectural improvements or a newer manufacturing process, but AMD already had some headroom to lower those power usage numbers without affecting performance too much. TDP is also best considered as a power limit rather than the actual amount of power that a CPU will use for any given workload, even when fully maxed out.

Still, we appreciate AMD’s focus on power efficiency for the Ryzen 9000 series, especially because Intel’s high-end 13900K and 14900K have been plagued by crashes that seem to be related to high power use and improper motherboard configurations. Intel has yet to release a definitive statement about what the issue is, but it’s plausible (maybe even likely!) that it’s a side effect of these chips being pushed to their thermal and electrical limits.

Ryzen 9000 CPUs can still be pushed further by users who want to raise those power limits and try overclocking—AMD points out that the chips all have more headroom for Precision Boost Overdrive automated overclocking, precisely because the default power limits leave a little more performance on the table. But as long as the chips still perform well at their default settings, people who just want to build a computer without doing a ton of tinkering will be better served by chips that run cooler and use less power.

More time on the clock for socket AM5

  • AMD has committed to supporting the AM5 socket until “2027+,” two more years than the “2025+” it promised back in late 2022.

  • Ryzen 9000 will launch alongside several marginally updated chipsets, though existing AM5 boards will be able to use these chips after a BIOS update.

Another small but noteworthy change buried in AMD’s slide decks, and good news for anyone who has already invested in a Socket AM5 motherboard or has plans to do so in the near future: AMD has officially extended the socket’s guaranteed support timeline to at least 2027 and is leaving the door open to support past that point. That’s a two-year extension from the “2025+” timeline that the company laid out in late 2022.

Of course, “support” could mean a lot of different things. AMD is still officially supporting the AM4 socket with new CPU releases and continues to lean on AM4 as a budget platform as socket AM5 costs have remained stubbornly high. But these “new” releases have all been repackagings of various iterations of the late-2020-era Ryzen 5000 CPUs, rather than truly new products. Still, AMD’s formal commitment to socket AM5’s longevity makes it a bit easier to recommend for people who upgrade their CPUs regularly.

Ryzen 9000 chips will be able to pop into any current AM5 motherboard after a BIOS update. The company is also announcing a range of 800-series chipsets for new motherboards, though these generally only come with minor improvements compared to the 600-series chipsets they replace. The X870E and X870 are guaranteed to have USB 4 ports, and the X870 supports PCIe 5.0 speeds for the GPU slot, where the X670 only supported PCIe 4.0. The lower-end B850 chipset still supports PCIe 5.0 speeds for SSDs and PCIe 4.0 speeds for GPUs, while an even lower-end B840 chipset is restricted to PCIe 3.0 speeds for everything. The B840 also won’t support CPU overclocking, though it can still overclock RAM.

Listing image by AMD



Seismic data shows Mars is often pummeled by planet-shaking meteorites

Brace for impact —

Seismic information now allows us to make a planet-wide estimate of impact rates.

One of the craters identified seismically, then confirmed through orbital images.

Mars trembles with marsquakes, but not all of them are driven by phenomena that occur beneath the surface—many are the aftermath of meteorite strikes.

Meteorites crash down to Mars every day. After analyzing data from NASA’s InSight lander, an international team of researchers noticed that its seismometer, SEIS, detected six nearby seismic events. These were linked to the same acoustic atmospheric signal that meteorites generate when whizzing through the atmosphere of Mars. Further investigation identified all six as part of an entirely new class of quakes known as VF (very high frequency) events.

The collisions that generate VF marsquakes occur in fractions of a second, much less time than the few seconds it takes tectonic processes to cause quakes of similar size. This is some of the key seismological evidence that has helped us understand quakes caused by meteorite impacts on Mars, and it marks the first time seismic data has been used to determine how frequently impact craters form.

“Although a non-impact origin cannot be definitively excluded for each VF event, we show that the VF class as a whole is plausibly caused by meteorite impacts,” the researchers said in a study recently published in Nature Astronomy.

Seismic shift

Scientists had typically determined the approximate meteorite impact rate on Mars by comparing the frequency of craters on its surface to the expected rate of impacts calculated using counts of lunar craters that were left behind by meteorites. Models of the lunar cratering rate were then adjusted to fit Martian conditions.

Looking to the Moon as a basis for comparison was not ideal, as Mars is especially prone to being hit by meteorites. The red planet is not only a more massive body with a stronger gravitational pull, but it is also located near the asteroid belt.

Another issue is that lunar craters are often better preserved than Martian craters because there is no place in the Solar System dustier than Mars. Craters in orbital images are often partly obscured by dust, which makes them difficult to identify. Sandstorms can complicate matters by covering craters in more dust and debris (something that cannot occur on the Moon due to the absence of wind).

InSight deployed its SEIS instrument after it landed in the Elysium Planitia region of Mars. In addition to detecting tectonic activity, the seismometer can potentially determine the impact rate through seismic data. When meteorites strike Mars, they produce seismic waves just like tectonic marsquakes do, and the waves can be detected by seismometers when they travel through the mantle and crust. An immense quake picked up by SEIS was linked to a crater 150 meters (492 feet) wide. SEIS would later detect five more marsquakes that were all associated with an acoustic signal (detected by a different sensor on InSight) that is a telltale sign of a falling meteorite.

A huge impact

Something else stood out about the six impact-driven marsquakes detected with seismic data. Because of the velocity of meteorites (over 3,000 meters or 9,842 feet per second), these events happened faster than any other type of marsquake, even faster than quakes in the high frequency (HF) class. That’s how they earned their own classification: very high frequency, or VF, quakes. When the InSight team used the Mars Reconnaissance Orbiter’s (MRO) Context Camera (CTX) to image the locations of the events picked up by SEIS, there were new craters present in the images.

There are additional seismic events that haven’t been assigned to craters yet. They are thought to be small craters formed by meteorites about the size of basketballs, which are extremely difficult to see in orbital images from MRO.

The researchers were able to use SEIS data to estimate the diameters of craters based on distance from InSight (according to how long it took seismic waves to reach the spacecraft) and the magnitude of the VF marsquakes associated with them. They were also able to derive the frequency of quakes picked up by SEIS. Once a frequency estimate based on the data was applied to the entire surface area of Mars, they estimated that around 280 to 360 VF quakes occur each year.
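
As a rough illustration of that last extrapolation step, the scaling works something like the sketch below. The detection count and effective detection radius are invented for illustration, not figures from the paper, and the real analysis bins events by distance and magnitude rather than using a single flat rate.

```python
# Toy version of scaling a local detection rate up to a planet-wide impact
# rate. The inputs below are assumptions for illustration only.
import math

MARS_RADIUS_KM = 3389.5
mars_area = 4 * math.pi * MARS_RADIUS_KM ** 2        # ~1.44e8 km^2

detected_per_year = 2        # assumed: VF events per year detected near InSight
detection_radius_km = 500    # assumed: radius SEIS reliably covers for such events
local_area = math.pi * detection_radius_km ** 2

planet_wide = detected_per_year * mars_area / local_area
print(f"~{planet_wide:.0f} VF-scale impacts per year planet-wide")
# With these toy inputs the result lands around 370 per year, the same order
# of magnitude as the study's 280-360 estimate.
```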

“The case is strong that the unique VF marsquake class is consistent with impacts,” they said in the same study. “It is, therefore, worthwhile considering the implications of attributing all VF events to meteoroid impacts.”

Their detection has added to the estimated number of impact craters on Mars since many could not be seen from space before. What can VF impacts tell us? The impact rate on a planet or moon is important for determining the age of that object’s surface. Using impacts has helped us determine that the surface of Venus is constantly being renewed by volcanic activity, while most of the surface of Mars has not been covered in lava for billions of years.

Figuring out the rate of meteorite impacts can also help protect spacecraft and, someday, maybe Martian astronauts, from potential hazards. The study suggests that there are periods where impacts are more or less frequent, so it might be possible to predict when the sky is a bit more likely to be clear of falling space rocks—and when it isn’t. Meteorites are not much of a danger to Earth since most of them burn up in the atmosphere. Mars has a much thinner atmosphere, so more of them make it through, and there is no umbrella for a meteor shower.

Nature Astronomy, 2024. DOI: 10.1038/s41550-024-02301-z



Full dev build of Space Marine 2 leaks, and players are already leveling up

Weaknesses in the Imperium of Man —

Developers canceled a beta test—but may have gotten one anyway.

Heresy must be punished.

Focus Entertainment

How badly do you want to play the upcoming Warhammer 40,000: Space Marine 2 ahead of its September 9 launch? Enough to torrent a 75GB package from a Russian site? Enough to not only unpack and play it, but connect to a server and start building up your character?

Me neither, but that hasn’t stopped seemingly hundreds of people from doing just that. Publisher Focus Entertainment announced on July 9 that the third-person action game had “gone gold” (been released for manufacturing). The leaked build might date to February 23, 2024, as suggested by the site Insider Gaming, which had previously reported a June 20 date.

Footage from the leaked build, which has mostly been taken offline by Focus through copyright claims, suggests that it is a mostly complete version of the game, with some placeholder assets in menus. That footage also suggests that the game’s pirates are playing online and that their characters are retaining their levels and items. For now, at least.

Gameplay overview trailer for Warhammer 40,000: Space Marine 2.

In some ways, this shouldn’t really matter. When the game’s servers go officially live, developer Saber Interactive and Focus should be able to flag those accounts that have logged unauthorized time. Getting online to play will likely require authentication from the Steam, Xbox, or PlayStation platform one chooses. And there’s a chance that a closer look at the game, or even just news about its leak, might entice more people into buying and playing the game proper.


In other ways, though, it stinks. Spoilers from the game’s campaign and multiplayer offerings will filter out, and firms that should be focusing entirely on release and quality control will have to deal with the security and fairness aspects of such a leak. A planned open beta of the game had already been canceled in favor of the developer’s focus on launch readiness.

Game leaks of this scale aren’t as common as select images or isolated information, but they do happen. Grand Theft Auto VI had nearly an hour of gameplay footage leak in 2022. Discs of Starfield being posted for sale in August 2023, weeks ahead of its September release, resulted in felony charges for a Tennessee man. Videos spoiling much of The Last of Us Part 2 leaked online in 2020, thanks to someone “not affiliated” with its developer and publisher.

And the biggest and most unexpected hack and leak came from a small German town, where Axel Gembe stole and leaked early source code for Half-Life 2. He later tried to apologize to Valve founder Gabe Newell and leverage his break-in to land a job at Valve, which was, to say the least, unsuccessful.

Ars has contacted Focus Entertainment for comment and will update this post with new information.



The struggle to understand why earthquakes happen in America’s heartland

Top: A view of the downtown Memphis skyline, including the Hernando De Soto bridge which has been retrofitted for earthquakes. Memphis is located around 40 miles from a fault line in the quake-prone New Madrid system.

iStock via Getty Images

The first earthquake struck while the town was still asleep. Around 2:00 am on Dec. 16, 1811, New Madrid—a small frontier settlement of 400 people on land now located in Missouri—was jolted awake. Panicked townsfolk fled their homes as buildings collapsed and the smell of sulfur filled the air.

The episode didn’t last long. But the worst was yet to come. Nearly two months later, after dozens of aftershocks and another massive quake, the fault line running directly under the town ruptured. Thirty-one-year-old resident Eliza Bryan watched in horror as the Mississippi River receded and swept away boats full of people. In nearby fields, geysers of sand erupted, and a rumble filled the air.

In the end, the town had dropped at least 15 feet. Bryan and others spent a year and a half living in makeshift camps while they waited for the aftershocks to end. Four years later, the shocks had become less common. At last, the rattled townspeople began “to hope that ere long they will entirely cease,” Bryan wrote in a letter.

Whether Bryan’s hope will stand the test of time is an open question.

The US Geological Survey released a report in December 2023 detailing the risk of dangerous earthquakes around the country. As expected on the hazard map, deep red risk lines run through California and Alaska. But the map also sports a big bull’s eye in the middle of the country—right over New Madrid.

The USGS estimates that the region has a 25 to 40 percent chance of a magnitude 6.0 or higher earthquake in the next 50 years, and as much as a 10 percent chance of a repeat of the 1811-1812 sequence. While the risk is much lower compared to, say, California, experts say that when it comes to earthquake resistance, the New Madrid region suffers from inadequate building codes and infrastructure.

Caught in this seismic splash zone are millions of people living across five states—mostly in Tennessee and Missouri, as well as Kentucky, Illinois, and Arkansas—including two major cities, Memphis and St. Louis. Mississippi, Alabama, and Indiana have also been noted as places of concern.

In response to the potential for calamity, geologists have learned a lot about this odd earthquake hotspot over the last few decades. Yet one mystery has persisted: why earthquakes even happen here in the first place.

This is a problem, experts say. Without a clear mechanism for why New Madrid experiences earthquakes, scientists are still struggling to answer some of the most basic questions, like when—or even if—another large earthquake will strike the region. In Missouri today, earthquakes are “not as front of mind” as other natural disasters, said Jeff Briggs, earthquake program manager for the Missouri State Emergency Management Agency.

But when the next big shake comes, “it’s going to be the biggest natural disaster this state has ever experienced.”



Apple releases public betas of all next-gen OS updates, except for VisionOS

beta believe it —

Apple’s public betas are usually stable enough for daily use, but be careful.


Apple’s next-generation operating systems are taking their next step toward release today: Apple is issuing the first public beta builds of iOS 18, iPadOS 18, macOS 15 Sequoia, tvOS 18, and HomePod Software 18 today. Sign up for Apple’s public beta program with your Apple ID, and you’ll be able to select the public beta builds from Software Update in the Settings app.

We covered the highlights of most of these releases when they were announced during Apple’s Worldwide Developers Conference in June, including more home screen customization in iOS and iPadOS, window tiling and iPhone mirroring in macOS, RCS text messaging support across all of Apple’s platforms, and more. But Apple still isn’t ready to show off a preview of its Apple Intelligence AI features, including the text and image generation features and a revamped Siri. Many of these features are still slated for “later this summer” and will presumably be available in some form in the final releases this fall.

Most devices that can run iOS 17, iPadOS 17, and macOS 14 Sonoma will be able to update to the new versions, including owners of the last couple generations of Intel Macs. But a handful of older phones and tablets and the 2018 MacBook Air are being dropped by the new releases. The watchOS 11 update is also dropping the Series 4 and Series 5 models as well as the first-generation Apple Watch SE.

Apple is also not releasing a public beta build of VisionOS 2, the first major update to the Apple Vision Pro’s operating system. Users who want to try out new Vision Pro features ahead of time will still need to opt into the developer beta, at least for now.

Beta best practices

The first public betas are similar—if not identical—to the third developer beta builds that were released last week. Apple usually releases new developer betas of next-gen OS releases every two weeks, so we’d expect to see a fourth developer beta early next week and a second near-identical public beta build released shortly after.

Apple’s developer and public beta builds used to be more clearly delineated, with a $99-per-year developer account paywall put up between general users and the earliest, roughest preview builds. That changed last year when Apple made basic developer accounts (and beta software access) free for anyone who wanted to sign up.

Apple still issues separate developer/public beta builds, but these days it’s more of a statement about who the betas are ready for than an actual technical barrier. Developer betas are rougher and visibly unfinished, but developers likely have the extra patience and technical chops needed to deal with these issues; public betas are still unfinished and unstable, but you can at least expect most basic functionality to work fine.

Regardless of how stable these betas may or may not be, the standard warnings apply: Make a good backup of your device before updating in case you need to restore the older, more stable operating system, and don’t install beta software on mission-critical hardware that you absolutely need to work correctly in your day-to-day life. For iPhones and iPads that connect to iCloud, connecting the devices to a PC or Mac and performing a local backup (preferably an encrypted one) can be a more surefire way to make sure you keep a pre-upgrade backup around than relying on continuous iCloud backups.



Net neutrality rules temporarily stayed as judges weigh impact of SCOTUS ruling

Net neutrality delay —

Court delays FCC rules until August 5, asks sides for briefs on Brand X.

FCC Chairwoman Jessica Rosenworcel and FCC Commissioner Brendan Carr arrive to testify during a House committee hearing on March 31, 2022, in Washington, DC.

Getty Images | Kevin Dietsch

A federal court on Friday temporarily stayed enforcement of net neutrality regulations but has not decided on the merits of a telecom-industry request to block the rules on a longer-term basis.

The Federal Communications Commission’s revived net neutrality rules were scheduled to take effect on July 22. But the US Court of Appeals for the 6th Circuit needs more time to consider the industry motion to block the rules and wants the parties to file supplemental briefs. As a result, the FCC can’t enforce the rules until at least August 5.

“To provide sufficient opportunity to consider the merits of the motion to stay the FCC’s order, we conclude that an administrative stay is warranted. The FCC’s order is hereby temporarily stayed until August 5, 2024,” the court said on Friday.

The administrative stay is due in part to the 6th Circuit Court’s consideration of Supreme Court precedent. The Supreme Court’s decision last month in Loper Bright Enterprises v. Raimondo limited the regulatory authority of federal agencies by overturning the 40-year-old Chevron precedent. Chevron gave agencies leeway to interpret ambiguous laws as long as the agency’s conclusion was reasonable.

Briefs on Brand X

The telecom industry and FCC already filed briefs on the impact of Loper Bright. But the 6th Circuit wants supplemental briefs on a related topic.

Chevron deference was crucial in the 2005 Brand X ruling that has repeatedly played a role in cases over the FCC’s ability to regulate net neutrality. Brand X allowed the FCC to classify cable Internet as a lightly regulated information service. The precedent helped the FCC win court cases both when the Obama-era commission implemented net neutrality rules and when the Trump-era commission repealed those same rules.

On Friday, the 6th Circuit said the judges’ panel considering the present case “would be grateful for supplemental briefs by the parties with respect to the application of stare decisis and National Cable & Telecom. Ass’n v. Brand X Internet Servs., to this dispute, filed no later than July 19, 2024.” (Stare decisis is the “doctrine that courts will adhere to precedent in making their decisions.”)

The Supreme Court overturning Chevron doesn’t automatically nullify Brand X. The Supreme Court said in the Loper Bright ruling that “we do not call into question prior cases that relied on the Chevron framework. The holdings of those cases that specific agency actions are lawful—including the Clean Air Act holding of Chevron itself—are still subject to statutory stare decisis despite our change in interpretive methodology.”

The telecom industry and FCC briefs on Loper Bright both discussed Brand X, but the judges evidently want more on that topic. The 6th Circuit’s administrative stay was handed down by Chief Judge Jeffrey Sutton, Judge Eric Clay, and Judge Stephanie Dawkins Davis. Sutton was appointed by George W. Bush, while Clay is a Clinton appointee, and Davis was appointed by Biden.

FCC lost motion to move case

The administrative stay doesn’t necessarily signal anything about how the 6th Circuit judges will rule on the merits. But telcos did already win one ruling when the court rejected a motion to transfer the case.

Previous net neutrality cases were decided by the US Court of Appeals for the District of Columbia Circuit. This time, the 6th Circuit was randomly selected to hear the case in a multi-circuit lottery after telco lobby groups filed suit in seven circuits.

The FCC sought to transfer the current case to the DC Circuit, which ruled in the agency’s favor in the previous cases. The 6th Circuit denied the motion on June 28.

“When considering a motion to transfer a multi-circuit petition, we give considerable weight to our selection in the lottery. That lottery system would not mean much if a party disappointed by the luck of the draw could transfer the case to its preferred forum,” the court said.

Though the DC Circuit handled previous similar cases, the 6th Circuit said this is not merely a continuation of the earlier cases. The court also made a point of referring to the FCC repeatedly changing its position on whether broadband should be regulated as a common-carrier service.

“The DC Circuit has some familiarity with the legal classification of broadband through its consideration of prior FCC orders,” the 6th Circuit panel said. “But the FCC’s vacillating positions on the proper classification of broadband demonstrate that the prior orders do not represent the staggered implementation of a single undertaking. And, as the DC Circuit itself has explained, ‘general familiarity with the legal questions presented by a case is decidedly different from acquaintance with the proceedings that gave rise to the order in suit.'”



PC emulator comes to iOS, but Apple’s restrictions hamper performance

It works, technically —

UTM SE’s lack of JIT compilation means “SE stands for Slow Edition.”

Space Cadet Pinball has never been so portable… or so tiny!

One month after PC emulator UTM was rejected from the iOS App Store, the developers have announced that a new “UTM SE” version is now available for free on the App Store. But the app’s performance is severely hampered by Apple’s restrictions on so-called “just-in-time” (JIT) compilation, limiting the app’s suitability for effectively emulating many PC games.

Built on the generic command-line emulation layer QEMU, the open-source UTM boasts support for “30+ processors,” from x86 and PowerPC to RISC-V and ARM64. The App Store listing promises you can “run classic software and old-school games” through both a VGA graphics mode and text-based terminal.

Don’t expect a seamless, RetroArch-style path to playing Space Cadet Pinball on your iPhone, though. The UTM developers link to pre-configured settings downloads for versions of Windows going back to XP, alongside guides for getting those OSes up and running on iOS. But users will need to bring their own legitimate Windows installation ISO and go through the cumbersome process of installing the OS as well as a version of SPICE tools to help coordinate access through iOS (downloading pre-built, UTM-compatible Linux builds seems more straightforward).

Slow by design

Even after that, don’t expect high-level performance from this new emulator. That’s because UTM SE must abide by App Store restrictions prohibiting apps that “install executable code.” As such, the App Store version is a “JIT-less” build that uses a Tiny-Code Threaded Interpreter (TCTI) to interpret each original line of code being run rather than fully recompiling it at runtime for smoother performance.
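
To see why per-instruction interpretation carries a built-in penalty, consider the toy comparison below. It is a conceptual sketch only, not a model of how QEMU or TCTI actually work, and the "guest program" and timing comparison are illustrative assumptions.

```python
# Toy contrast between interpreting guest instructions one at a time (what a
# JIT-less build must do) and running code that was translated ahead of time.
# This is a conceptual sketch, not a model of QEMU/TCTI internals.
import time

N = 2_000_000
program = [("ADD", 1)] * N   # "guest program": add 1 to an accumulator N times

def interpret(program):
    # Every guest instruction pays fetch/decode/dispatch overhead in this loop.
    acc = 0
    for op, arg in program:
        if op == "ADD":
            acc += arg
        elif op == "SUB":
            acc -= arg
    return acc

def run_translated(n):
    # A JIT would translate the hot loop into host code once; the translated
    # equivalent of this particular program collapses to a single multiply.
    return 1 * n

start = time.perf_counter()
interpreted_result = interpret(program)
middle = time.perf_counter()
translated_result = run_translated(N)
end = time.perf_counter()

assert interpreted_result == translated_result
print(f"interpreted: {middle - start:.3f}s, translated: {end - middle:.6f}s")
```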

A video shows how the lack of JIT recompilation slows down GameCube emulation on DolphiniOS.

The lack of that JIT recompilation means the “SE [in UTM SE] stands for Slow Edition,” as moderator CZ pithily put it in the UTM Discord. “This is us telling you gaming on UTM SE is not happening.” At least one user who tested running Linux via UTM SE confirmed it is “dog slow” and “a gloopy experience.” Those who want full performance out of UTM can still install the regular, non-SE version of the app via sideloading or an alt store.

You may remember that the developers of GameCube/Wii emulator DolphiniOS cited the lack of JIT recompilation as the reason their app can’t run at a functional frame rate through the iOS App Store. However, similar restrictions haven’t stopped emulators like Delta from running classic gaming consoles up through the Nintendo DS at a playable frame rate, suggesting that UTM SE might be sufficient for older MS-DOS or Windows 95-era titles.



Housing Roundup #8: Your Local Area

In honor of San Francisco failing its housing target and becoming subject to SB 423, this special edition of the housing roundup deals with various state and local developments. It is presumed (although plans never survive contact with the enemy) that SB 423 greatly streamlines getting housing built, to the tune of several thousand additional units per year.

Oakland rents falling most of all large US cities after building more housing. No way.

Court says UCSF is exempt from local height and zoning restrictions because it’s an educational institution serving the public good. I notice this ruling confuses me, especially because the building proposed is a hospital. That seems like something where zoning should apply. Or else you could say all hospitals are public goods, and I would agree.

San Francisco mayor London Breed vows to veto any and all anti-housing legislation, says San Francisco must be a housing leader, a city of yes.

Well, that is going to take a lot more than vetoing new laws.

What seems to be her plan for accomplishing this noble goal?

Since the current situation seems more like this?

Hayden Clarkin: Given an 8-3 vote tonight by San Francisco’s leaders to downzone the wealthy waterfront neighborhoods to protect the views of millionaire homeowners, I figure I’d show a visual representation of the amount of housing the city approved in January. Yes, it’s in red, look closely.

This photograph makes it very clear that whatever the area in question is about, building housing is not it. That is a huge amount of land on which building high would be highly profitable and no one can build.

This next photograph is also illustrative.

Hayden Clarkin: Don’t have the eyes of a hawk? No problem.

Sachin Agarwal: Eight Supervisors voted to save Aaron Peskin’s view. Unbelievable. Every one of these people has got to go.

London Breed: Today is a setback in our work to get to yes on housing. But I will not let this be the first step in a dangerous course correction back towards being a city of no. We will not move backward.

My statement on the Board of Supervisors downzoning vote.

I appreciate that she gets it in principle, but saying boo is not a strategy. What you get in practice, then, is:

Gianmatteo Costanza: YIMBY action lost this lawsuit re 469 Stevenson parking lot? How disappointing.

Garry Tan: The worthless SF bureaucrats are hard to beat. “Historic Nordstrom parking lot” is preserved, instead of building 495 new units of housing in a transit corridor (23% affordable) …

Corrupt nonprofit TODCO, Aaron Peskin and Dean Preston and their NIMBY agenda won this time.

sp6r=underrated: Will @California_HCD do anything about this? #sanfrancisco permitted 1 housing unit in February, bringing the total to 7 for the year. And 1,143 under this housing element.

I seriously don’t know why @CAgovernor even bothers talking about housing if he’s not going to intervene in a city that thumbs its nose at statewide housing targets.

San Francisco needs 82,069 units of housing under the RHNA formula (which is too low but besides the point).#sanfrancisco has only permitted >5k houses in 1 year since 1980.

@GavinNewsom you worked here. You know SF will never build willingly.

Armand Domalewski: San Francisco permitted ONE UNIT OF HOUSING in February and we have someone launching a Mayoral campaign on Saturday centered on the premise that this is too much.

So London Breed’s plan is, despite this, to rezone the city for more housing.

In particular, her plan is to upzone some places from 3 stories to 8 stories, while reducing or avoiding super-tall 50-story buildings elsewhere, partly because with current interest rates those get expensive.

All right. How is she going to do that?

She is going to write a letter politely asking the San Francisco Planning Commission to rezone for more housing especially around transit stops, instead of doing their best not to.

London Breed (link includes text of letter): We need a future that includes housing for all San Franciscans. This past week, I directed Planning to revise our proposed citywide rezoning plan so we see actual construction spread across the entire City.

Rezoning is an iterative process. We have time before our January 2026 deadline to propose a rezoning that is far bolder than the City’s current draft, and results in actually building new homes.

But what’s most important is that San Francisco can, and should, adopt a rezoning plan that results in far more housing than the current draft. This is the only way to meet our CA state requirements, and this is what we committed to do when we passed the Housing Element.

My North Star on housing has always been grounded in being ambitious, data-driven, and embracing change. These are my San Francisco values. Let’s redo the math and re-up our commitment to housing.

I’m excited to see what the talented staff at the Planning Department returns in response to my letter, and this renewed call to recalculate our zoning plan based on the actual probability of where development will actually occur.

We have until January 2026 to pass a robust, citywide rezoning that will be the most significant change to San Francisco housing since parts of this city were downzoned in the 1970s. It’s time to lean into this historic moment and see this challenge as an opportunity.

The San Francisco Planning Commission is not interested in a plan to build more housing. They are interested in a plan to not build more housing. To build as much housing as Mark Zuckerberg paid attention to the Social Network’s legal proceedings.

How is it going in the meantime?

sp6r=underrated: #sanfrancisco housing permits through 15 months of the current RHNA cycle.

15 Permits this calendar year. 1,151 total.

I have no idea why SF’s nimbys feel so down. You guys are winning.

I don’t know why anyone brags about yimby Sacramento laws. They aren’t doing anything.

Perhaps one place to start on building more housing would be to not give money to organizations whose central purpose is to stop housing construction? As in TODCO, a SF ‘affordable housing nonprofit’ that works to block the construction of any new affordable housing while reducing upkeep on its existing buildings and providing no new affordable housing. Why play into such a racket?

It is not only housing. Remember that story about how so many women dreamed of opening a 24 hour coffee shop and bookstore with a cat?

It is legal in the areas in green.

But wait! What is that I hear?

Things are about to get feisty. Could be huge.

Scott Wiener (yep, same guy from SB 1047): Today, San Francisco goes from the slowest in CA to approve new homes to one of the fastest. Why?

Because my new expanded housing permit streamlining law, SB 423, takes effect in San Francisco. SF is the 1st city in CA it’s in effect.

It’s super hard to build new homes in SF, partly b/c we made getting permits a chaotic, politicized, long process. We had the longest housing approval time in CA (26 months on average).

Under SB 423, that timeline is now capped at 6 months.

Permit streamlining is already in use across California under SB 35, the 1st law I authored as Senator. But it’s mainly used for 100% affordable housing, which is a small fraction of total homes. SB 423 expands SB 35’s successful streamlining to the vast majority of new homes.

San Francisco needs streamlined housing approvals badly: Last year, the state housing agency found that 18 SF housing policies and practices violated state law.

But we’re not stopping here—if cities don’t meet their housing goals, more will start triggering SB 423 next year 😈

Grow SF: San Francisco needs to build 82,000 units of housing in the next eight years. Since our Board of Supervisors can’t get the job done, the state has taken over control. It’s time to build.

SF Chronicle: S.F. being subject to SB 423 means that most proposed housing projects will not require approval from the Planning Commission and therefore won’t be able to be appealed to the Board of Supervisors.

Using the ‘ask Claude’ principle, I get a prediction of several thousand additional units of housing built per year, accelerating to close to 5,000 new units per year in the long term (e.g. 5+ years out). Presumably the city will do its best to prevent anything from actually being built.

Los Angeles city planning department proposal would revise zoning codes to make room for up to 250,000 new homes, also expand the adaptive reuse program to move from only applying to things built before 1974 to applying to anything built more than 15 years ago (so 2009), or 5 years with a conditional use permit. They have about 180,000 homes worth of empty offices.

Meanwhile Los Angeles is dealing with the consequences of ED1, an emergency declaration that the city needs more affordable housing. It turned out that no, they did not actually want lots of new affordable housing. I am not sure exactly why not, but there are limits, they are declaring them.

Jake: We’re doing the entitlements for a 39 unit ED1 project in LA, and here is where we are at:

  1. Case was submitted about 2 months ago

  2. We received a hold letter from the city

  3. We addressed every change/correction with the architect

  4. We resubmitted to the city

  5. They deemed the case complete

  6. They confirmed they are working on the letter of determination, which they will be issuing shortly

  7. We then receive a letter of non compliance

  8. We say WTF

So we get on a call with the city planner to find out what is going on, and it turns out their interpretation of state density bonus law has changed, and they will no longer be granting more than 5 incentives and 1 waiver for these projects.

Their stance must have changed in the last week. It’s a strong stance though. They made it very clear that this is the position of the city.

My advice to those working on these types of projects is to be proactive, and get in touch with your city planner to discuss what the best option is moving forward.

Toby Muresianu: Unbelievable but true: If you want to build exactly the 100% low-income Affordable housing the city declared an emergency need, years into a crisis, on land you own, they can’t simply tell you how much you’re allowed to build & may change their mind months after they do.

Mishka: LA deputy mayor of housing told NIMBY crowd @ UCLA’s land use law conference a few months ago that they were going to water down ED1 because it was getting too much housing production done, too soon. Citing a story of a dryer vent ‘blowing directly out onto a single family home.’

Los Angeles wrote an “emergency declaration” to address the housing crisis that somewhat accidentally *actually unleashed a torrent of applications to build* and now the city is furiously backpedaling.

There’s lots of room to disagree about specifics + genuine uncertainty about which policies generate lots of new units.

But the fundamental question for politicians to ask themselves is do they, in fact, want to make housing less scarce? If not it would be better to say so.

That’s right. The problem was that there was too much purely affordable housing being built. Can’t have that.

Here is a dashboard of currently proposed ED1 projects. Joe Cohen is hard at work pointing out exactly how illegal are various attempts to deny the permits.

Wall Street Journal notices Austin’s rents are down 7%, frames this as bad news. Then we have this, saying rents declined 12.5% in Austin in December, describing this great news as a ‘nosedive.’

Similarly:

Hayden Donnell: Auckland’s housing market has been struck by a disastrous plague of affordability, to the point that some buyers might even be *pauses to retch violently* getting bargains.

Not content with their several other efforts I have noticed recently, the Minnesota Democrats propose requiring electric vehicle parking and charging stations for all new homes. Why not use ‘electric’ as a reason to massively increase parking requirements?

I suppose you need that parking spot once you have driven away Uber and Lyft.

Jared Polis signs a new bill and embraces strong YIMBY rhetoric, calls for More Housing Now. The bill he signed prohibits discriminatory residential occupancy limits, one of the lowest hanging of fruits. Let people live in houses. So of course the top two comments are:

Monthly Earth Day – Crypto Whales NFT: This is ridiculous…..we can all read between the lines on the purpose behind this! 😡

Ariesangel1329: You want more housing yet you’re not deporting the EXTRA 50,000 illegal aliens that taxpayers are now paying for which includes housing. So in reality this will now allow 30 people in a 2 bedroom. The single-family neighborhoods are now gone. NWO plans are happening in CO.

It’s rough out there. Here is an op-ed on the subject from the Denver Post Editorial Board, arguing that ADUs and the end of occupancy limits are excellent steps in the right direction.

This was only one of six major land-use bills signed by Polis this cycle. In addition to ADUs, we get increased density around bus and train stops, we eliminate many parking minimums, a new ‘right of first refusal’ for the government to buy properties to turn them into affordable housing (seems like an invitation to incinerate cash but not that destructive I guess), lifting occupancy restrictions (underrated), and requiring housing need assessments. Not bad.

But then Polis goes and makes a much bigger problem elsewhere? What is this?

Merrill Stillwell: Big shift in Colorado with a light form of rent control. This does not have the formal process in other States but it does open up lease renewals to legal scrutiny (including for unreasonable rent increases).

Sam Dangremond: Happy 4/20 to my fellow Colorado housing providers – and guess what… you’re now subject to a type of rent control!

Yesterday, @jaredpolis signed into law HB24-1098, the “For Cause Evictions” bill sponsored by (among others) Pueblo’s @NickForCO.

This law makes it so that existing tenants cannot be “termed out” on a lease, and must be allowed to continue to rent a property UNLESS certain conditions apply.

One of those conditions is if the tenant refuses to sign a new lease “with reasonable terms.”

So, how much rent increase is still “reasonable”? Who knows! We’ll be spending lots of lawyer fees to find out.

The law even specifically says that we can’t try to get around being forced to continue to rent to a tenant by engaging in “retaliatory rent increases.”

But what’s this we see over here…? This law of course explicitly prohibits discriminatory and retaliatory rent increases, in accordance with existing law… but they also somehow snuck in a ban on “unconscionable” rent increases as well!

How much rent increase is “unconscionable”? Who knows!

Morgan: This is awesome! Now landlords can’t refuse to re-rent to you just so that they can jack up the prices for more profit.

Real Estate Ranger: I would prefer rent control over this lol.

Sam Dangremond: I had the same thought. An outright rent control regime at least gives owners certainty about what we can or can’t do.

Moses Kagan: Caused me to walk away from a sale process we were spending a bunch of time on.

They are very stupid.

Honestly don’t know how you underwrite a value-add MF deal in CO, at least until there are some court decisions about what constitutes “reasonable.” I am particularly concerned that a court will look at the incremental return on capex for a given unit, rather than property-wide returns.

I don’t get why Polis did not know better. This is a disaster.

The good news is it could have been worse. According to the summary, you can choose to live in the property or have a family member do so, or put the property up for sale.

Still, ‘sign a lease on reasonable terms’ is not something you ever want to be writing into a law. You need to define what are reasonable terms, at least to the extent of creating a generous safe harbor. Otherwise, this is rent control.

One obvious response to this is that a lot of properties might change hands a lot more. If the only way to evict a tenant is to sell the property, and the tenant is paying below-market rent and you can’t raise it? Well, guess what.

The lack of permission for a 24 hour bookstore is not unique to San Francisco.

Bernoulli Defect: It’s literally illegal to open a 24 hour coffeeshop (with a cat) in the UK without a council issued license.

These regularly get denied for spurious reasons and are a major reason London’s nightlife is below par.

Iron Economist (May 17): Great article by @jburnmurdoch in the FT today. People really don’t get the scale of the UK problem. If we want to have outcomes like the good places we really do need to expand the total stock by 25-35%.

Simo: London – a city of 9 million people with the housing stock of a city of 6.5 million. In that context, the crazy numbers you see for rent start to make sense.

You would need to push more than that to get to normal levels, because there is tons of latent demand to live in London.
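As a rough back-of-the-envelope check on Simo’s figures (treating needed stock as simply proportional to population, which ignores household size and latent demand):

$$\frac{9\ \text{million people}}{6.5\ \text{million people's worth of stock}} - 1 \approx 0.38,$$

i.e. roughly a 38% expansion for London just to house its current population, already above the 25-35% national range, before counting anyone who would move there if they could afford it.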

Following up on British growth deterioration, Tyler Cowen notes Claude 3 estimates 12%-15% of British GDP is land rent, and he estimates NIMBY issues account for about 15% of their GDP shortfall. I continue to think the counterfactual cost is vastly higher, if we compare to actually building to demand. Lowering the cost of housing would radically alter competitiveness and supercharge everything else, if it were fully fixed. That does not tell us what would be available from practical forward-looking marginal changes. My guess is still quite a lot.

Alec Stapp: Guess the floor area threshold at which French law requires a licensed architect to establish the plans for a new home.

A reasonable extrapolation is that 75% of square footage above 170 m² (about 1,830 sq ft) was not built due to this requirement. That is one hell of a cost, well in excess of what the architect could plausibly cost on their own, especially if you presume that having one should be a benefit. So this must somehow be a much bigger deal than that, presumably around regulatory uncertainty, delays, and the fact that architects are absolute pains in the ass.

The Chinese real estate boom was easy to see, but no one wanted to stop it, says The Wall Street Journal’s Rebecca Feng. Well, one person named Xi did not want to stop it, until Xi suddenly did want to stop it. Then it stopped.

There is a difference between noticing absurdity and wanting it to stop or to bet against it, and being able to pull any of that off. The story here is a lot of people assuming Xi would not want to stop it, or at least being sufficiently worried that Xi would choose not to stop it. Yes, the investments clearly made no economic sense, and were in some sense absurd.

But we all know the market can stay crazy longer than you can stay solvent. For the Chinese government doubly so. These were not easy short bets to make.

What this doubles down on is that when there is what we will later call a historic real estate bubble, to the extent that ‘bubble’ is a thing, it is going to be rather obvious that a bubble is present. China recently, America in the 2000s, Japan before that, and so on. These were not subtle situations. These were very obvious, absurd situations. You might not be able to profitably bet on a reckoning, but if you buy in near the top that is on you.

Supply of rental homes up 240% and real rents down 34% from when Milei took office to May 20. That is what happens when you dismantle rent control in a country with extreme inflation, meaning that the resulting rents could easily get absurd in all directions.
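To see why the ‘real’ qualifier is doing the work here, a quick illustration with made-up numbers (not the actual Argentine figures): under high inflation, nominal rents can rise sharply while real rents fall.

$$\text{real change} = \frac{1+\text{nominal change}}{1+\text{inflation}} - 1 = \frac{1+1.00}{1+2.00} - 1 \approx -33\%.$$

A 100% nominal increase against 200% inflation is a one-third real decline, which is why quoting nominal peso rents in isolation would tell you very little.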

Taiwan has one of the most severe housing crises; the post goes into detail on how that happened, which is of course a lot of restrictions on building housing.

Sacramento adopts a new general pro-housing plan; the mayor claims it makes it the most pro-housing city in the country. He correctly points out this is highly progressive, in the sense that it advances things progressives care about, although it also helps with things conservatives care about.

Salt Lake City exploring reducing minimum lot sizes, from 5,000 square feet to 1,400.

Here’s a building in Washington, DC that’s zoned for that.

Arizona legislature does it.

Welcoming Neighbors Network: The hits keep on coming in Arizona! HB 2721, which legalizes up to fiveplexes on all residential lots in larger cities, just passed the AZ House of Representatives with a big bipartisan vote of 36 to 18.

There was talk that the governor would not sign, but Katie Hobbs came through, which now makes four pro-housing bills in Arizona.

She did veto another law, HB 2570:

Dennis Welch: Governor Hobbs vetoes the bipartisan “Arizona Starter Homes Act” (HB2570) that would have limited the zoning authority of most of the state’s cities and towns.

Daryl Fairweather: This logic makes no sense to me. Why is the Department of Defense telling the Governor of Arizona to block zoning for starter homes?

Housing Roundup #8: Your Local Area Read More »

peer-review-is-essential-for-science-unfortunately,-it’s-broken.

Peer review is essential for science. Unfortunately, it’s broken.

Aurich Lawson | Getty Images

Rescuing Science: Restoring Trust in an Age of Doubt was the most difficult book I’ve ever written. I’m a cosmologist—I study the origins, structure, and evolution of the Universe. I love science. I live and breathe science. If science were a breakfast cereal, I’d eat it every morning. And at the height of the COVID-19 pandemic, I watched in alarm as public trust in science disintegrated.

But I don’t know how to change people’s minds. I don’t know how to convince someone to trust science again. So as I started writing my book, I flipped the question around: is there anything we can do to make the institution of science more worthy of trust?

The short answer is yes. The long answer takes an entire book. In the book, I explore several different sources of mistrust—the disincentives scientists face when they try to communicate with the public, the lack of long-term careers, the complicity of scientists when their work is politicized, and much more—and offer proactive steps we can take to address these issues and rebuild trust.

The section below is taken from a chapter discussing the relentless pressure to publish that scientists face, and the corresponding explosion in fraud that this pressure creates. Fraud can take many forms, from the “hard fraud” of outright fabrication of data, to many kinds of “soft fraud” that include plagiarism, manipulation of data, and careful selection of methods to achieve a desired result. The more that fraud thrives, the more that the public loses trust in science. Addressing this requires a fundamental shift in the incentive and reward structures that scientists work in. A difficult task to be sure, but not an impossible one—and one that I firmly believe will be worth the effort.

Modern science is hard, complex, and built from many layers and many years of hard work. And modern science, almost everywhere, is based on computation. Save for a few (and I mean very few) die-hard theorists who insist on writing things down with pen and paper, there is almost an absolute guarantee that with any paper in any field of science that you could possibly read, a computer was involved in some step of the process.

Whether it’s studying bird droppings or the collisions of galaxies, modern-day science owes its very existence—and continued persistence—to the computer. From the laptop sitting on an unkempt desk to a giant machine that fills up a room, “S. Transistor” should be the coauthor on basically all three million journal articles published every year.

The sheer complexity of modern science, and its reliance on customized software, renders one of the frontline defenses against soft and hard fraud useless. That defense is peer review.

The practice of peer review was developed in a different era, when the arguments and analysis that led to a paper’s conclusion could be succinctly summarized within the paper itself. Want to know how the author arrived at that conclusion? The derivation would be right there. It was relatively easy to judge the “wrongness” of an article because you could follow the document from beginning to end, from start to finish, and have all the information you needed to evaluate it right there at your fingertips.

That’s now largely impossible with the modern scientific enterprise so reliant on computers.

To make matters worse, many of the software codes used in science are not publicly available. I’ll say this again because it’s kind of wild to even contemplate: there are millions of papers published every year that rely on computer software to make the results happen, and that software is not available for other scientists to scrutinize to see if it’s legit or not. We simply have to trust it, but the word “trust” is very near the bottom of the scientist’s priority list.

Why don’t scientists make their code available? It boils down to the same reason that scientists don’t do many things that would improve the process of science: there’s no incentive. In this case, you don’t get any h-index points for releasing your code on a website. You only get them for publishing papers.

This infinitely agitates me when I peer-review papers. How am I supposed to judge the correctness of an article if I can’t see the entire process? What’s the point of searching for fraud when the computer code that’s sitting behind the published result can be shaped and molded to give any result you want, and nobody will be the wiser?

I’m not even talking about intentional computer-based fraud here; this is even a problem for detecting basic mistakes. If you make a mistake in a paper, a referee or an editor can spot it. And science is better off for it. If you make a mistake in your code… who checks it? As long as the results look correct, you’ll go ahead and publish it and the peer reviewer will go ahead and accept it. And science is worse off for it.
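As a concrete illustration of what “checking the code” could even look like (this is my own sketch, not anything from the book; the file name, column name, and published value are all invented), releasing a minimal, runnable script alongside the paper would let a referee rerun the analysis and see whether it reproduces the reported number:

```python
# Illustrative sketch: a minimal released analysis script a reviewer could rerun.
# The data file "measurements.csv", the "flux" column, and the value 4.217
# are hypothetical placeholders, not from any real paper.
import csv
import statistics

def mean_flux(path: str) -> float:
    """Return the mean of the 'flux' column in a CSV of measurements."""
    with open(path, newline="") as f:
        return statistics.mean(float(row["flux"]) for row in csv.DictReader(f))

if __name__ == "__main__":
    result = mean_flux("measurements.csv")  # hypothetical data shipped with the paper
    print(f"Mean flux: {result:.3f}")
    # If this assertion fails, the quoted value does not follow from the
    # released data and code -- exactly the kind of mistake no referee can
    # currently catch when the code is never shared.
    assert abs(result - 4.217) < 5e-4, "Does not reproduce the published value"
```

Nothing about this is sophisticated; the point is that without the script and the data, a reviewer cannot even attempt the check.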

Science is getting more complex over time and is becoming increasingly reliant on software code to keep the engine going. This makes fraud of both the hard and soft varieties easier to accomplish. From mistakes that you pass over because you’re going too fast, to using sophisticated tools that you barely understand but use to get the result that you wanted, to just totally faking it, science is becoming increasingly wrong.

Peer review is essential for science. Unfortunately, it’s broken. Read More »