Author name: Shannon Garcia


Ants learned to farm fungi during a mass extinction

Timing is everything

Tracing the lineages of agricultural ants back to their most recent common ancestor revealed that this ancestor probably lived through the end-Cretaceous mass extinction, the one that killed off the dinosaurs. The researchers argue that the timing was almost certainly no coincidence. Current models suggest that the impact that set off the mass extinction threw so much dust into the atmosphere that photosynthesis shut down for nearly two years, leaving minimal plant life. By contrast, the huge amount of dead material would have allowed fungi to flourish. So it’s not surprising that ants began adapting to use what was available to them.

That explains the huge cluster of species that cooperate with fungi. However, most of the species that engage in organized farming don’t appear until roughly 35 million years after the mass extinction, at the end of the Eocene (about 33 million years before the present). The researchers suggest that the climate changes accompanying the transition to the Oligocene included a drying out of the tropical Americas, where the fungus-farming ants had evolved. This would have cut down on the availability of fungi in the wild, potentially selecting for species that could propagate fungi on their own.

This also corresponds to the origins of the yeast strains used by farming ants, as well as the most specialized agricultural fungal species. But it doesn’t account for the origin of coral fungus farmers, which seems to have occurred roughly 10 million years later.

The work gives us a much clearer picture of the origin of agriculture in ants and some reasonable hypotheses regarding the selective pressures that might have led to its evolution. In the long term, however, the biggest advance here may be the resources generated during this study. Ultimately, we’d like to understand the genetic basis for the changes in the ants’ behavior, as well as how the fungi have adapted to better provide for their farmers. To do that, we’ll need to compare the genomes of agricultural species with their free-living relatives. The DNA gathered for this study will ultimately be needed to pursue those questions.

Science, 2024. DOI: 10.1126/science.adn7179  (About DOIs).



Elon Musk claims victory after judge blocks Calif. deepfake law

“Almost any digitally altered content, when left up to an arbitrary individual on the Internet, could be considered harmful,” Mendez said, even something seemingly benign like AI-generated estimates of voter turnouts shared online.

Additionally, the Supreme Court has held that “even deliberate lies (said with ‘actual malice’) about the government are constitutionally protected” because the right to criticize the government is at the heart of the First Amendment.

“These same principles safeguarding the people’s right to criticize government and government officials apply even in the new technological age when media may be digitally altered: civil penalties for criticisms on the government like those sanctioned by AB 2839 have no place in our system of governance,” Mendez said.

According to Mendez, X posts like Kohls’ parody videos are the “political cartoons of today” and California’s attempt to “bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment” is not justified by even “a well-founded fear of a digitally manipulated media landscape.” If officials find deepfakes are harmful to election prospects, there is already recourse through privacy torts, copyright infringement, or defamation laws, Mendez suggested.

Kosseff told Ars that there could be more narrow ways that government officials looking to protect election integrity could regulate deepfakes online. The Supreme Court has suggested that deepfakes spreading disinformation on the mechanics of voting could possibly be regulated, Kosseff said.

Mendez got it “exactly right” by concluding that the best remedy for election-related deepfakes is more speech, Kosseff said. As Mendez described it, a vague law like AB 2839 seemed to only “uphold the State’s attempt to suffocate” speech.

Parody is vital to democratic debate, judge says

The only part of AB 2839 that survives strict scrutiny, Mendez noted, is a section describing audio disclosures in a “clearly spoken manner and in a pitch that can be easily heard by the average listener, at the beginning of the audio, at the end of the audio, and, if the audio is greater than two minutes in length, interspersed within the audio at intervals of not greater than two minutes each.”



Bazzite is the next best thing to SteamOS while we wait on Valve

I was on vacation last week, the kind of vacation in which entire days had no particular plan. I had brought the ROG Ally X with me, and, with the review done and Windows still annoying me, I looked around at the DIY scene, wondering if things had changed since my last foray into DIY Steam Deck cloning.

Things had changed for the better. I tried out Bazzite, and after dealing with the typical Linux installation tasks—activating the BIOS shortcut, turning off Secure Boot, partitioning—I had the Steam Deck-like experience I had sought on this more powerful handheld. Since I installed Bazzite, I have not had to mess with drivers, hook up to a monitor and keyboard for desktop mode, or do anything other than play games.

Until Valve officially makes SteamOS available for the ROG Ally and (maybe) other handhelds, Bazzite is definitely worth a look for anyone who thinks their handheld could do better.

A laptop and handheld running Bazzite, with an SD card pulled out of the handheld.

Bazzite says that you can swap an SD card full of games between any two systems running Bazzite. This kind of taunting possibility is very effective on people like me. Credit: Bazzite

More game platforms, more customization, same Steam-y feel

There are a few specific features for the ROG Ally X tossed into Bazzite, and the Linux desktop is Fedora, not Arch. Beyond that, it is like SteamOS but better, especially if you want to incorporate non-Steam games. Bazzite bakes in apps like Lutris, Heroic, and Junk Store, which Steam Deck owners often turn to for loading in games from Epic, GOG, itch.io, and other stores, as well as games with awkward Windows-only launchers.

You don’t even need to ditch Windows, really. If you’re using a handheld like the ROG Ally X, with its 1TB of storage, you can dual-boot Bazzite and Windows with some crafty partition shrinking. By all means, check that your game saves are backed up first, but you can, with some guide-reading, venture into Bazzite without abandoning the games for which you need Windows.

Perhaps most useful to the type of person who owns a gaming handheld and will also install Linux on it, Bazzite gives you powerful performance customization at the click of a button. Tap the ROG Ally’s M1 button on the back, and you can mess with Thermal Design Power (TDP), set a custom fan curve, change the charge limit, tweak CPU and GPU parameters, or even choose a scheduler. I most appreciated this for the truly low-power indie games I played, as I could set the ROG Ally below its standard 13 W “Silent” profile down to a custom 7 W without heading deep into Asus’ Armoury Crate.



Attackers exploit critical Zimbra vulnerability using cc’d email addresses

Attackers are actively exploiting a critical vulnerability in mail servers sold by Zimbra in an attempt to remotely execute malicious commands that install a backdoor, researchers warn.

The vulnerability, tracked as CVE-2024-45519, resides in the Zimbra email and collaboration server used by medium and large organizations. When an admin manually changes default settings to enable the postjournal service, attackers can execute commands by sending maliciously formed emails to an address hosted on the server. Zimbra recently released a patch for the vulnerability. All Zimbra users should install it or, at a minimum, ensure that postjournal is disabled.

Easy, yes, but reliable?

On Tuesday, security researcher Ivan Kwiatkowski first reported the in-the-wild attacks, which he described as “mass exploitation.” He said the malicious emails were sent by the IP address 79.124.49[.]86 and, when successful, attempted to run a file hosted there using the tool known as curl. Researchers from security firm Proofpoint took to social media later that day to confirm the report.

On Wednesday, security researchers provided additional details that suggested the damage from ongoing exploitation was likely to be contained. As already noted, they said, a default setting must be changed, likely lowering the number of servers that are vulnerable.

Security researcher Ron Bowes went on to report that the “payload doesn’t actually do anything—it downloads a file (to stdout) but doesn’t do anything with it.” He said that in the span of about an hour earlier Wednesday, a honeypot server he operated to observe ongoing threats received roughly 500 requests. He also reported that the payload isn’t delivered through emails directly, but rather through a direct connection to the malicious server over SMTP, short for the Simple Mail Transfer Protocol.

“That’s all we’ve seen (so far), it doesn’t really seem like a serious attack,” Bowes wrote. “I’ll keep an eye on it, and see if they try anything else!”

In an email sent Wednesday afternoon, Proofpoint researcher Greg Lesnewich seemed to largely concur that the attacks weren’t likely to lead to mass infections that could install ransomware or espionage malware. The researcher provided the following details:

  • While the exploitation attempts we have observed were indiscriminate in targeting, we haven’t seen a large volume of exploitation attempts
  • Based on what we have researched and observed, exploitation of this vulnerability is very easy, but we do not have any information about how reliable the exploitation is
  • Exploitation has remained about the same since we first spotted it on Sept. 28th
  • There is a PoC available, and the exploit attempts appear opportunistic
  • Exploitation is geographically diverse and appears indiscriminate
  • The fact that the attacker is using the same server to send the exploit emails and host second-stage payloads indicates the actor does not have a distributed set of infrastructure to send exploit emails and handle infections after successful exploitation. We would expect the email server and payload servers to be different entities in a more mature operation.
  • Defenders protecting  Zimbra appliances should look out for odd CC or To addresses that look malformed or contain suspicious strings, as well as logs from the Zimbra server indicating outbound connections to remote IP addresses.

Proofpoint has explained that some of the malicious emails used multiple email addresses that, when placed in the CC field, attempted to install a webshell-based backdoor on vulnerable Zimbra servers. The full CC list was wrapped as a single string and encoded with base64. When the addresses were combined and decoded back into plaintext, they created a webshell at the path /jetty/webapps/zimbraAdmin/public/jsp/zimbraConfig.jsp.
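To make the mechanism concrete, here is a minimal, hypothetical sketch of how a defender might flag messages whose CC addresses concatenate into a base64 blob, in the spirit of Proofpoint’s guidance above. The exact encoding layout of the real exploit isn’t spelled out here, so the parsing logic and the indicator strings are illustrative assumptions, not a tested detection rule.

```python
# Hypothetical illustration only: flag messages whose CC local parts
# concatenate into a base64 blob that decodes to webshell-like content.
# The header layout and indicator strings below are assumptions, not a
# description of the actual exploit or of Proofpoint's detections.
import base64
import binascii
from email import message_from_string

SUSPICIOUS_MARKERS = (b"<%", b".jsp", b"Runtime", b"exec(")  # assumed indicators


def decoded_cc_payload(raw_message: str) -> bytes | None:
    """Join the local parts of all CC addresses and try to base64-decode them."""
    msg = message_from_string(raw_message)
    cc_header = msg.get("Cc", "")
    local_parts = []
    for addr in cc_header.split(","):
        addr = addr.strip().strip("<>")
        if "@" in addr:
            local_parts.append(addr.split("@", 1)[0])
    blob = "".join(local_parts)
    if not blob:
        return None
    try:
        return base64.b64decode(blob, validate=True)
    except (binascii.Error, ValueError):
        return None


def looks_suspicious(raw_message: str) -> bool:
    """Return True if the decoded CC payload contains webshell-like markers."""
    decoded = decoded_cc_payload(raw_message)
    return decoded is not None and any(m in decoded for m in SUSPICIOUS_MARKERS)
```

In practice, a heuristic like this would sit alongside the server-side checks Proofpoint recommends, such as watching Zimbra logs for outbound connections to unfamiliar remote IP addresses.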



Despite stricter regulations, Europe has issues with tattoo ink ingredients

Swierk et al. use various methods, including Raman spectroscopy, nuclear magnetic resonance spectroscopy, and electron microscopy, to analyze a broad range of commonly used tattoo inks. This enables them to identify specific pigments and other ingredients in the various inks.

Earlier this year, Swierk’s team identified 45 out of 54 inks (83 percent) with major labeling discrepancies in the US. Allergic reactions to the pigments, especially red inks, have already been documented. For instance, a 2020 study found a connection between contact dermatitis and how tattoos degrade over time. But additives can also have adverse effects. More than half of the tested inks contained unlisted polyethylene glycol—repeated exposure could cause organ damage—and 15 of the inks contained a potential allergen called propylene glycol.

Meanwhile, across the pond…

That’s a major reason why the European Commission has recently begun to crack down on harmful chemicals in tattoo ink, including banning two widely used blue and green pigments (Pigment Blue 15 and Pigment Green 7), claiming they are often of low purity and can contain hazardous substances. (US regulations are less strict than those adopted by the EU.) Swierk’s team has now expanded its chemical analysis to include 10 different tattoo inks from five different manufacturers supplying the European market.

According to Swierk et al., nine of those 10 inks did not meet EU regulations; five simply failed to list all the components, but four contained prohibited ingredients. The other main finding was that Raman spectroscopy is not very reliable for figuring out which of three common structures of Pigment Blue 15 has been used. (Only one has been banned.) Different instruments failed to reliably distinguish between the three forms, so the authors concluded that the current ban on Pigment Blue 15 is simply unenforceable.

“There are regulations on the book that are not being complied with, at least in part because enforcement is lagging,” said Swierk. “Our work cannot determine whether the issues with inaccurate tattoo ink labeling is intentional or unintentional, but at a minimum, it highlights the need for manufacturers to adopt better manufacturing standards. At the same time, the regulations that are on the books need to be enforced and if they cannot be enforced, like we argue in the case of Pigment Blue 15, they need to be reevaluated.”

Analyst, 2024. DOI: 10.1039/D4AN00793J  (About DOIs).



Amazon illegally refused to bargain with drivers’ union, NLRB alleges

The National Labor Relations Board (NLRB) has filed charges against Amazon, alleging that the e-commerce giant has illegally refused to bargain with a union representing drivers who are frustrated by what they claim are low wages and dangerous working conditions.

Back in August, drivers celebrated what they considered a major win when the NLRB found that Amazon was a joint employer of sub-contracted drivers, cheering “We are Amazon workers!” At that time, Amazon seemed to be downplaying the designation, telling Ars that the union was trying to “misrepresent” a merit determination that the NLRB confirmed was only “the first step in the NLRB’s General Counsel litigating the allegations after investigating an unfair labor practice charge.”

But this week, the NLRB took the next step, filing charges soon after Amazon began facing intensifying worker backlash, not just from drivers but also from disgruntled office and fulfillment workers. According to Reuters, the NLRB accused Amazon of “a series of illegal tactics to discourage union activities” organized by drivers in a Palmdale, California, facility.

Amazon has found itself in increasingly hot water ever since the Palmdale drivers joined the International Brotherhood of Teamsters union in 2021. The NLRB’s complaint called out Amazon for terminating its contract with the unionized drivers without ever engaging in bargaining.

Amazon could potentially have avoided the NLRB charges by settling with the drivers, who claimed that rather than negotiate, the company had intimidated employees with security guards and illegally retaliated against workers for unionizing.

Although Amazon recently invested $2.1 billion—its “biggest investment yet”—to improve driver safety and increase drivers’ wages, Amazon apparently did not do enough to settle drivers’ complaints.

The NLRB said in a press release sent to Ars that the complaint specifically alleged that “Amazon failed and refused to bargain” with Teamsters “and that it did not afford the union the opportunity to bargain over the effects of terminating” the Palmdale drivers’ contract, “increasing inspections, reducing and terminating routes, and terminating employees in the bargaining unit.” Additionally, “the complaint further alleged that Amazon made unlawful threats and promises, held captive audience meetings, delayed employee start times and increased vehicle inspections to discourage union activities, and failed and refused to furnish information to the union.”


The Impact of GenAI on Data Loss Prevention

Data is essential for any organization. This isn’t a new concept, and it’s not one that should be a surprise, but it is a statement that bears repeating.

Why? Back in 2016, the European Union introduced the General Data Protection Regulation (GDPR). This was, for many, the first time that data regulation became an issue, enforcing standards around the way we look after data and making organizations take their responsibility as data collectors seriously. GDPR, and a slew of regulations that followed, drove a massive increase in demand to understand, classify, govern, and secure data. This made data security tools the hot ticket in town.

But, as with most things, the concern over the huge fines a GDPR breach could bring subsided, or at least stopped being part of every tech conversation. This isn’t to say we stopped applying the principles these regulations introduced; we had indeed gotten better. The topic simply stopped being interesting.

Enter Generative AI

Cycle forward to 2024, and there is a new impetus to look at data and data loss prevention (DLP). This time, it’s not because of new regulations but because of everyone’s new favorite tech toy, generative AI. ChatGPT opened a whole new range of possibilities for organizations, but it also raised new concerns about how we share data with these tools and what those tools do with that data. We are seeing this manifest itself already in messaging from vendors around getting AI ready and building AI guardrails to make sure AI training models only use the data they should.

What does this mean for organizations and their data security approaches? All of the existing data-loss risks still exist; they have simply been extended by the threats presented by AI. Many current regulations focus on personal data, but when it comes to AI, we also have to consider other categories, like commercially sensitive information, intellectual property, and code. Before sharing data, we have to consider how it will be used by AI models. And when training AI models, we have to consider the data we’re training them with. We have already seen cases where bad or out-of-date information was used to train a model, and the resulting poorly trained AI led organizations into huge commercial missteps.

How, then, do organizations ensure these new tools can be used effectively while still remaining vigilant against traditional data loss risks?

The DLP Approach

The first thing to note is that a DLP approach is not just about technology; it also involves people and processes. This remains true as we navigate these new AI-powered data security challenges. Before focusing on technology, we must create a culture of awareness, where every employee understands the value of data and their role in protecting it. It’s about having clear policies and procedures that guide data usage and handling. An organization and its employees need to understand risk and how the use of the wrong data in an AI engine can lead to unintended data loss or expensive and embarrassing commercial errors.

Of course, technology also plays a significant part because with the amount of data and complexity of the threat, people and process alone are not enough. Technology is necessary to protect data from being inadvertently shared with public AI models and to help control the data that flows into them for training purposes. For example, if you are using Microsoft Copilot, how do you control what data it uses to train itself?
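As a concrete, deliberately simplified illustration of the kind of technical control involved, the sketch below redacts obviously sensitive strings from a prompt before it leaves the organization for an external AI service. The pattern names, the regexes, and the masking format are assumptions chosen for illustration; real DLP products are far more sophisticated and typically combine classification, policy engines, and logging.

```python
# Minimal sketch of an outbound DLP check, assuming a simple regex-based
# policy. Pattern names, patterns, and the masking format are illustrative
# assumptions, not the behavior of any particular DLP product.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}


def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Mask anything matching a known-sensitive pattern and report what was found."""
    findings = []
    redacted = prompt
    for label, pattern in PATTERNS.items():
        if pattern.search(redacted):
            findings.append(label)
            redacted = pattern.sub(f"[REDACTED-{label.upper()}]", redacted)
    return redacted, findings


# Example: the email address is masked before the prompt is sent anywhere.
safe_prompt, hits = redact_prompt("Summarize the complaint from jane.doe@example.com")
print(safe_prompt)  # "Summarize the complaint from [REDACTED-EMAIL]"
print(hits)         # ["email"]
```

A filter like this only addresses outbound prompts; controlling what data a platform-integrated assistant is allowed to index or learn from also depends on policy and configuration on the platform side.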

The Target Remains the Same

These new challenges add to the risk, but we must not forget that data remains the main target for cybercriminals. It’s the reason we see phishing attempts, ransomware, and extortion. Cybercriminals realize that data has value, and it’s important we do too.

So, whether you are looking at new threats to data security posed by AI, or taking a moment to reevaluate your data security position, DLP tools remain incredibly valuable.

Next Steps

If you are considering DLP, then check out GigaOm’s latest research. Having the right tools in place enables an organization to strike the delicate balance between data utility and data security, ensuring that data serves as a catalyst for growth rather than a source of vulnerability.

To learn more, take a look at GigaOm’s DLP Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.



Switch emulator Ryujinx shuts down development after “contact by Nintendo”

Ryujinx in Peace —

GitHub removal comes months after a Nintendo lawsuit took down the Yuzu emulator.

These copyrighted Switch games shown on Ryujinx’s former GitHub page probably didn’t curry any favor with Nintendo.

Popular open source Nintendo Switch emulator Ryujinx has been removed from GitHub, and the team behind it has reportedly ceased development of the project after apparent discussions with Nintendo.

Ryujinx developer riperiperi writes on the project’s Discord server and social media that fellow developer gdkchan was “contacted by Nintendo and offered an agreement to stop working on the project, remove the organization and all related assets he’s in control of.” While the final outcome of that negotiation is not yet public, riperiperi reports that “the organization has been removed” (presumably from GitHub) and thus “I think it’s safe to say what the outcome is.”

While the Ryujinx website is still up as of this writing, the download page and other links to GitHub-hosted information from that website no longer function. The developers behind the project have not posted a regular progress report update since January after posting similar updates almost every month throughout 2023. Before today, the Ryujinx social media account last posted an announcement in March.

Followers of the Switch emulation scene may remember that March was also when the makers of the Yuzu emulator paid $2.4 million to settle a lawsuit with Nintendo over a project that Nintendo alleged was “facilitating piracy at a colossal scale.”

What is left?

Switch emulator Suyu, which emerged as a “legal gray area” Yuzu fork shortly after that Yuzu takedown, is still available on its own self-hosted servers as of this writing (though the project’s last stable release is now six months old). Nintendo previously targeted Suyu’s GitLab hosting through a DMCA takedown and later took down the project’s official Discord server with a similar request. Another prominent Yuzu fork, Sudachi, was removed from GitHub in July via DMCA request.

In the wake of those legal efforts against other Switch emulator developers, the Ryujinx developers posted an automated message on their Discord server in response to any questions about Ryujinx’s ultimate fate. “Nothing is happening to Ryujinx,” the message read. “We know nothing more than you do. No dooming.”

A video of an in-development Ryujinx feature allowing local wired multiplayer between an emulator and official hardware.

Riperiperi reports that development will now stop on “a working Android port” of the emulator, which was not yet ready for release, as well as a tech demo iOS version that would likely have remained a “novelty” due to Apple’s just-in-time compilation restrictions. Developers were also working on updates that would have allowed local wired multiplayer gameplay connections between Ryujinx and real Switch hardware.

“While I won’t be remaining in the switch scene either, I still believe in emulation as a whole, and hope that other developers aren’t dissuaded by this,” riperiperi writes on the project’s Discord. “The future of game preservation does depend on individuals, and maybe one day it’ll be properly recognized.”

According to the developers, “as of May 2024, Ryujinx [had] been tested on approximately 4,300 titles; over 4,100 boot[ed] past menus and into gameplay, with roughly 3,550 of those being considered playable.”



“Extreme” Broadcom-proposed price hike would up VMware costs 1,050%, AT&T says

Legal dispute continues —

Broadcom “preventing some vendors from selling products to us,” AT&T alleges.

The logo of American cloud computing and virtualization technology company VMware is seen at the Mobile World Congress (MWC), the telecom industry's biggest annual gathering, in Barcelona on March 2, 2023.

Broadcom quoted AT&T a 1,050 percent price hike for VMware offerings, AT&T has claimed in legal documents.

AT&T sued Broadcom on August 29, accusing Broadcom of unlawfully denying it the second of three one-year renewals for support services that AT&T thinks it’s entitled to. AT&T cites a contract signed before Broadcom bought VMware. The telecommunications firm says it has 75,000 virtual machines (VMs) across approximately 8,600 servers running on VMware. Broadcom, which has stopped selling VMware perpetual licenses, has said that AT&T missed its opportunity to renew support and that the contract between VMware and AT&T has an “End of Availability” provision allowing VMware to retire products and services.

Legal filings from September 27, spotted by The Register today, show an email [PDF] that AT&T EVP and GM of wireline transformation and global supply chain Susan A. Johnson apparently sent to Broadcom CEO Hock Tan pointing to “an impasse” over VMware.

Johnson argued in the email that AT&T should have the right to renew support through September 2026 thanks to a previously signed five-year deal:

This proposed annual increase of +1,050% in one year is extreme and certainly not how we expect strategic partners to engage in doing business with AT&T.

A 1,050 percent price hike is the largest that Ars Technica has heard of being proposed by Broadcom. At this time, it’s unknown if AT&T’s claims are accurate. Broadcom hasn’t publicly commented on the allegations.

Many VMware customers have pointed to VMware becoming more expensive under Broadcom, though. Broadcom’s changes to selling VMware have reportedly included bundling products into only about two SKUs and higher CPU core requirements. In March, customers reportedly complained about price increases of up to 600 percent, per The Register. And in February, ServeTheHome said small cloud service providers reported prices increasing tenfold.

AT&T’s contract with VMware may be one of the firm’s bigger accounts. A 1,050 percent price hike would be another level, however, even for a company the size of AT&T. Per Johnson’s email, AT&T and Broadcom have had a “strategic relationship” for over a decade.

The email reads:

… AT&T has decided to pursue a legal strategy along with a disciplined plan to invest to migrate away, all of which will quickly become public. I truly wish we had another option. Unfortunately, this decision will impact the future of our overall relationship and how we manage spend in other Broadcom areas.

AT&T on potentially migrating off VMware

In her email, Johnson points to migration costs as impacting how much AT&T is willing to pay for VMware.

According to the message, projected costs for moving AT&T off of VMware are $40 million to $50 million. AT&T is said to use VMware-based VMs for customer services operations and for operations management efficiency. Per AT&T’s email, migration “has a very quick payback” and “strong” internal rate of return, “especially given the high licensing costs proposed.”

On September 20, Broadcom requested that AT&T’s request to block Broadcom from discontinuing VMware support be denied. In legal documents [PDF], Broadcom said that AT&T is planning to ditch VMware and that AT&T could have spent “the last several months or even years” making the transition.

In an affidavit filed on September 27 [PDF], Johnson stated that her email to Tan does not suggest that migration “would be easy, quick, or inexpensive” and that “none of those would be accurate statements.”

“My point was that although it is not easy, cheap, or quick to migrate off VMware, Defendants’ high fees will incentivize us to migrate to another solution,” the affidavit reads.

Johnson also claimed that AT&T started exploring options for getting off VMware in December but thought that it had time to make decisions, since it believed it could opt to renew support for its licenses until September 2026.

In another legal filing from September 27 [PDF], Gordon Mansfield, president of global technology planning at AT&T Services, says:

AT&T currently estimates it will take a period of years to transition all of its servers currently operating with the VMware software away from VMware. Moreover, Defendants have not made it easy to do so since we understand that they are preventing some vendors from selling certain products to us.

The filing didn’t get into further detail about how exactly Broadcom could be blocking product sales to AT&T. Broadcom hasn’t publicly responded to Mansfield’s claim.

Regarding AT&T’s lawsuit, Broadcom has previously told Ars Technica that it “strongly disagrees with the allegations and is confident we will prevail in the legal process.”

Since Broadcom’s VMware acquisition, most customers are expected to have at least considered ditching VMware. However, moving can be challenging and costly, as some IT environments are heavily dependent on VMware. Ensuring that workloads keep running as expected during the transition period has also complicated potential migrations.

While AT&T and Broadcom’s legal dispute continues, Broadcom has agreed to continue providing AT&T with VMware support until October 9. A preliminary injunction hearing is scheduled for October 15.



Newsom Vetoes SB 1047

It’s over, until such a future time as either we are so back, or it is over for humanity.

Gavin Newsom has vetoed SB 1047.

Quoted text is him, comments are mine.

To the Members of the California State Senate: I am returning Senate Bill 1047 without my signature.

This bill would require developers of large artificial intelligence (AI) models, and those providing the computing power to train such models, to put certain safeguards and policies in place to prevent catastrophic harm. The bill would also establish the Board of Frontier Models – a state entity – to oversee the development of these models.

It is worth pointing out here that mostly the ‘certain safeguards and policies’ was ‘have a policy at all, tell us what it is and then follow it.’ But there were some specific things that were required, so Newsom is indeed technically correct here.

California is home to 32 of the world’s 50 leading AI companies, pioneers in one of the most significant technological advances in modern history. We lead in this space because of our research and education institutions, our diverse and motivated workforce, and our free-spirited cultivation of intellectual freedom. As stewards and innovators of the future, I take seriously the responsibility to regulate this industry.

Cue the laugh track. No, that’s not why California leads, but sure, whatever.

This year, the Legislature sent me several thoughtful proposals to regulate AI companies in response to current, rapidly evolving risks – including threats to our democratic process, the spread of misinformation and deepfakes, risks to online privacy, threats to critical infrastructure, and disruptions in the workforce. These bills, and actions by my Administration, are guided by principles of accountability, fairness, and transparency of AI systems and deployment of AI technology in California.

He signed a bunch of other AI bills. It is quite the rhetorical move to characterize those bills as ‘thoughtful’ in the context of SB 1047, which (like or hate its consequences) was by far the most thoughtful bill, was centrally a transparency bill, and was clearly an accountability bill. What you call ‘fair’ is up to you I guess.

SB 1047 magnified the conversation about threats that could emerge from the deployment of AI. Key to the debate is whether the threshold for regulation should be based on the cost and number of computations needed to develop an AI model, or whether we should evaluate the system’s actual risks regardless of these factors. This global discussion is occurring as the capabilities of AI continue to scale at an impressive pace. At the same time, the strategies and solutions for addressing the risk of catastrophic harm are rapidly evolving.

Yes. This is indeed the key question. Do you target the future more capable frontier models that enable catastrophic and existential harm and require they be developed safely? Or do you let such systems be developed unsafely, and then put restrictions on what you tell people you can do with such systems, with no way to enforce that on users let alone on the systems themselves? I’ve explained over and over why it must be the first one, and focusing on the second is the path of madness that is bad for everyone. Yet here we are.

By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 – at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.

Bold mine. Read that again. The problem, according to Newsom, with SB 1047 was that it did not put enough restrictions on smaller AI models, and this could lead to a ‘false sense of security.’ He claims he is vetoing the bill because it does not go far enough.

Do you believe any of that? I don’t. Would a lower threshold (or no threshold!) on size have made this bill more likely to be signed? Of course not. A more comprehensive bill would have been more likely to be vetoed, not less likely.

Centrally the bill was vetoed, not because it was insufficiently comprehensive, but rather because of one or more of the following:

  1. Newsom was worried about the impact of the bill on industry and innovation.

  2. Industry successfully lobbied to have the bill killed, for various reasons.

  3. Newsom did what he thought helped his presidential ambitions.

You can say Newsom genuinely thought the bill would do harm, whether or not you think this was the result of lies told by various sources. Sure. It’s possible.

You can say Newsom was the subject of heavy lobbying, which he was, and did a political calculation and did what he thought was best for Gavin Newsom. Sure.

I do not buy for a second that he thought the bill was ‘insufficiently comprehensive.’

If it somehow turns out I am wrong about that, I am going to be rather shocked, as for rather different reasons will be everyone who is celebrating that this bill went down. It would represent far more fundamental confusions than I attribute to Newsom.

Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance. While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.

No, the bill does not restrict ‘basic functions.’ It does not restrict functions at all. The bill only addresses whether the model is safe to release in general. Once that happens, you’re in the clear. Whereas if you regulate by function, then yes, you will put a regulatory burden on even the most basic functions; that’s how that works.

More importantly, restricting on the basis of ‘function’ does not work. That is not how the threat model works. If you have a sufficiently generally capable model it can be rapidly put to any given ‘function.’ If it is made available to the public, it will be used for whatever it can be used for, and you have very little control over that even under ideal conditions. If you open the weights, you have zero control, telling rival nations, hackers, terrorists or other non-state actors they aren’t allowed to do something doesn’t matter. You lack the ability to enforce such restrictions against future models smarter than ourselves, should they arise and become autonomous, as many inevitably would make them. I have been over this many times.

Let me be clear – I agree with the author – we cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable. I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.

Bold mine. The main thing SB 1047 would have done was to say ‘if you spend $100 million on your model, you have to create, publish and abide by some chosen set of safety protocols.’ So it’s hard to reconcile this statement with thinking SB 1047 is bad.

Newsom clearly wants California to act without the federal government. He wants to act to create ‘proactive guardrails,’ rather than waiting to respond to harm.

The only problem is that he’s buying into an approach that fundamentally won’t work.

This also helps explain his signing other (far less impactful) AI bills.

To those who say there’s no problem here to solve, or that California does not have a role in regulating potential national security implications of this technology, I disagree. A California-only approach may well be warranted – especially absent federal action by Congress – but it must be based on empirical evidence and science. The U.S. AI Safety Institute, under the National Institute of Science and Technology, is developing guidance on national security risks, informed by evidence-based approaches, to guard against demonstrable risks to public safety. Under an Executive Order I issued in September 2023, agencies within my Administration are performing risk analyses of the potential threats and vulnerabilities to California’s critical infrastructure using AI. These are just a few examples of the many endeavors underway, led by experts, to inform policymakers on AI risk management practices that are rooted in science and fact. And endeavors like these have led to the introduction of over a dozen bills regulating specific, known risks posed by AI, that I have signed in the last 30 days.

Again, he’s clearly going to be signing a bunch of bills, one way or another. It’s not going to be this one, so it’s going to be something else. Be careful what you wish for.

I am committed to working with the Legislature, federal partners, technology experts, ethicists, and academia, to find the appropriate path forward, including legislation and regulation. Given the stakes – protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good – we must get this right.

For these reasons, I cannot sign this bill.

Sincerely,

Gavin Newsom

His central point is not Obvious Nonsense. His central point at least gets to be Wrong: He is saying AI regulation should be based on not putting restrictions on frontier model development, and instead it should focus on restricting particular uses.

But again, if you care about catastrophic risks: That. Would. Not. Work.

He doesn’t understand, decided to act as if he doesn’t understand, or both.

The Obvious Nonsense part is the idea that we shouldn’t require those training big models to publish their safety and security protocols – the primary thing SB 1047 does – because this doesn’t impact small models and thus is insufficiently effective and might give a ‘false sense of security.’

This is the same person who warned he was primarily worried about the ‘chilling effect’ of SB 1047 on the little guy.

Now he says that the restrictions don’t apply to the little guy, so he can’t sign the bill?

He wants to restrict uses, but doesn’t want to find out what models are capable of?

What the hell?

‘Falls short.’ ‘Isn’t comprehensive.’ The bill wasn’t strong enough, says Newsom, so he decided nothing was a better option, weeks after warning about that ‘chilling effect.’

If I took his words to have meaning, I would then notice I was confused.

Sounds like we should have had our safety requirements also apply to the less expensive models made by ‘little tech’ then, especially since those people were lying to try and stop the bill anyway? Our mistake.

Well, his, actually. Or it would be, if he cared about not dying. So could go either way.

Ben Fritz and Preetika Rana: The Democrat decided to reject the measure because it applies only to the biggest and most expensive AI models and doesn’t take into account whether they are deployed in high-risk situations, he said in his veto message.

Smaller models sometimes handle critical decision-making involving sensitive data, such as electrical grids and medical records, while bigger models at times handle low-risk activities such as customer service.

Kelsey Piper: Is there one single person in the state of California who believes that this is Newsom’s real reason for the veto – SB 1047 isn’t comprehensive enough!

There are reasonable arguments both for and against the bill but this isn’t one of them; there are very good reasons to treat the most expensive models differently including low barriers to entry for startups and small businesses.

Newsom vetoed because he’s in the pocket of lobbyists who pressed him aggressively for a veto; he has no principles and no roadmap for artificial intelligence or anything else, and if there were a more comprehensive bill he’d veto that one too. Come on.

Michael Cohen: Newsom: “By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security. Smaller, specialized models may emerge as equally or even more dangerous”.

I’d like to hear *anyone* claim he’s not bullshitting here. He could have easily contacted the author’s office at any point in the process to say, “your bill doesn’t go far enough”. Or just start with rules on bigger models and then amend the bill to be more expansive later.

So Newsom has his reasons for vetoing the bill, and for some reason, he didn’t think it would reflect well on him to share them with us.

I thought that I’d seen one person claim that Newsom wasn’t bullshitting with his explanation. Then that person pointed out to me that I was misunderstanding their position – that they were only claiming that Newsom meant it when he said he wanted to regulate AI more in the future.

The idea that he’d have signed the bill if it was more ‘comprehensive’?

That’s rather Obvious Nonsense, more commonly known as bullshit.

When powerful people align with other power or do something for reasons that would not sound great if said out loud, and tell you no, they don’t give you a real explanation.

Instead, they make up a reason for it that they hadn’t mentioned to you earlier.

Often they’ll do what Newsom does here, by turning a concern they previously harped on onto its head. Here, power expresses concern you’ll hurt the little guy, so you exempt the little guy? Power says response is not sufficiently comprehensive. Veto. Everything else that took place, all the things you thought mattered? They’re suddenly irrelevant, except insofar as they didn’t offer a superior excuse.

To answer the question about whether there is one person who is willing to say they think Newsom’s words are genuine, the answer is yes. That person was Dean Ball, for at least some of the words. I did not see any others.

So what does Newsom say his plan is now?

It sure looks like he wants use-based regulation. Oh no.

The governor announced that he is working with leading AI researchers including Fei-Fei Li, a Stanford University professor who has worked at Google and recently launched a startup called World Labs, to develop new legislation he is willing to support.

Jam tomorrow, I suppose.

Newsom’s announcement: Governor Newsom announced that the “godmother of AI,” Dr. Fei-Fei Li, as well as Tino Cuéllar, member of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research, and Jennifer Tour Chayes, Dean of the College of Computing, Data Science, and Society at UC Berkeley, will help lead California’s effort to develop responsible guardrails for the deployment of GenAI. He also ordered state agencies to expand their assessment of the risks from potential catastrophic events.

Given he’s centrally consulting Dr. Fei-Fei Li, together with his aim of targeting particular uses, it sounds like a16z did get to Newsom in the end, he has been subject to regulatory capture (what a nice term for it!), and we have several indications here he does indeed intend to pursue the worst possible path of targeting use cases of AI rather than the models themselves.

For a relatively smart version of the argument that you should target use cases, here is Timothy Lee, who does indeed realize the risk the use-based regulations will be far more onerous, although he neglects to consider the reasons it flat out won’t work, citing ‘safety is not a model property,’ the logic of which I’ll address later but to which the response is essentially ‘not with that attitude, and you’re not going to find it anywhere else in any way you’d find remotely acceptable.’

In other ways, such calls make no sense. If you’re proposing, as he suggests here, ‘require safety in your model if you restrict who can use it, but if you let anyone use and modify the model in any way and use it for anything with no ability to undo or restrict that, then we should allow that, nothing unsafe about that’ then any reasonable person must recognize that proposal as Looney Tunes. I mean, what?

A lot of the usual suspects are saying similar things, renewing their calls for going down exactly that path of targeting wrong mundane use, likely motivated in large part by ‘that means if we open the weights then nothing that happens as a result of it would be our responsibility or our fault’ and in large part by ‘that’ll show them.’

Do they have any idea what they are setting up to happen? Where that would inevitably go, and where it can’t go, even on their own terms? Tim has a glimmer, as he mentions at the end of his post. Dean Ball knows and has now chosen to warn about it.

Alas, most have no idea.

This is shaping up to be one of the biggest regulatory own goals in history.

Such folks often think they are being clever or are ‘winning,’ because if we focus on ‘scientifically proven’ harms then we won’t have to worry about existential risk concerns. The safety people will be big mad. That means things are good. No, seriously, you see claims like ‘well the people who advocate for safety are sad SB 1047 failed, which means we should be happy.’ Full zero sum thinking.

Let’s pause for a second to notice that this is insane. The goal is to be at the production possibilities frontier between (innovation, or utility, or progress) and preventing catastrophic and existential harms.

Yes, we can disagree about how best to do that, whether a given policy will be net good, or how much we should value one against the other. That’s fine.

But if you say ‘those who care about safety think today made us less safe, And That’s Wonderful,’ then it seems like you are kind of either an insane person or a nihilistic vindictive f, perhaps both?

That’s like saying that you know the White Sox must have a great offense, because look at their horrible pitching. And saying that if you want the White Sox to score more runs next year, you should want to ensure they have even worse pitching.

(And that’s the charitable interpretation, where the actual motivation isn’t largely rage, hatred, vindictiveness and spite, or an active desire for AIs to control the future.)

Instead, I would implore you, to notice that Newsom made it very clear that the regulations are coming, and to actually ask: If use-based regulations to reduce various mundane and catastrophic risks do come, and are our central strategy – even if you think those risks are fake or not worth caring about – what will that look like? Are you going to be happy about it?

If the ‘little guy’ or fans of innovation think this would go well for them, I would respond: You have not met real world use-based risk-reduction regulations, or you have forgotten what that looks like.

It looks like the EU AI Act. It looks like the EU. Does that help make this clear?

There would be a long and ever growing list of particular things you are not allowed to permit an AI to do, and that you would be required to ensure your AI did do. It will be your responsibility, as a ‘deployer’ of an AI model, to ensure that these things do and do not happen accordingly, whether or not they make any sense in a given context.

This laundry list will make increasingly little sense. It will be ever expanding. It will be ill defined. It will focus on mundane harms, including many things that California is Deeply Concerned about that you don’t care about even a little, but that California thinks are dangers of the highest order. The demands will often not be for ‘reasonable care’ and instead be absolute, and they will often be vague, with lots of room to expand them over time.

You think open models are going to get a free pass from all this, when anyone releasing one is very clearly ‘making it available’ for use in California? What do you think will happen once people see the open models being used to blatantly violate all these use restrictions being laid down?

All the things such people were all warning about with SB 1047, both real and hallucinated, in all directions? Yeah, basically all of that, and more.

Kudos again to Dean Ball in particular for understanding the danger here. He is even more apprehensive than I am about this path, and writes this excellent section explaining what this kind of regime would look like.

Dean Ball: This is not a hypothetical. This is the reality for contractors in the State of California today—one of Governor Newsom’s “use-based” regulations (in this case downstream of an Executive Order he issued that requires would-be government contractors to document all their uses of generative AI).

I fear this is the direction that Western policymakers are sleepwalking toward if we do not make concerted effort. Every sensible person, I think, understands that this is no way to run a civilized economy.

Or do they?

I certainly agree this is no way to run a civilized economy. I also know that one of the few big civilized economies, the EU, is running in exactly this way across the board. If all reasonable people knew this, that would rule out a large percentage of SB 1047 opponents, as well as Gavin Newsom, as potentially sensible people.

It would be one thing if that approach greatly reduced existential risk at great economic cost, and there was no third option available. Then we’d have to talk price and make a tough decision.

Instead, it does the damage without the benefits. What does such an approach do about actual existential risks from AI, by any method other than ‘be so damaging that the entire AI industry is crippled’?

It does essentially nothing, plausibly making us actively less safe. The thing that is dangerous is not any particular ‘use’ of the models. It is creating or causing there to exist AI entities that are highly capable and especially ones that are smarter than ourselves. This approach lets that happen without any supervision or precautions, indeed encourages it.

That is not going to cut it. Once the models exist, they are going to get deployed in the ways that are harmful, and do the harmful things, with or without humans intending for that to happen (and many humans do want it to happen). You can’t take the model back once that happens. You can’t take the damage back once that happens. You can’t un-exfiltrate the model, or get it back under control, or get the future back under control. Danger and ability to cause catastrophic events are absolutely model properties. The only ones who can possibly hope to prevent this from happening without massive intrusions into everything are the developers of the model.

If you let people create, and especially if you allow them to open the weights of, AI models that would be catastrophically dangerous if deployed in the wrong ways, while telling people ‘if you use it the wrong way we will punish you’?

Are you telling me that has a snowball’s chance in hell of having them not deploy the AI in the wrong ways? Especially once the wrong way is as simple as ‘give it a maximizing instruction and access to the internet?’ When they’re as or more persuasive than we are? When on the order of 10% of software engineers would welcome a loss of human control over the future?

Whereas everyone who wants to do anything actually useful with AI, the same as people who want to do most any other useful thing in California, would now face an increasing set of regulatory restrictions and requirements that cripple the ability to collect mundane utility.

All you are doing is disrupting using AI to actually accomplish things. You’re asking for the EU AI Act. And then that, presumably, by default and in some form, is what you are going to get if we go down this path. Notice the other bills Newsom signed (see section below) and how they start to impose various requirements on anyone who wants to use AI, or even wants to do various tech things.

Ashwinee Panda: It’s remarkably prescient that Newsom’s veto calls out the bill for focusing on large models; indeed many of the capabilities that can cause havoc will start appearing in small models as more people start replicating the distillation process that frontier labs have been using.

Teortaxes: Or rather: SB1047 will come back stronger than everyone asking for a veto wanted.

It won’t come back ‘stronger,’ in this scenario. It will come back ‘wrong.’

Also note that one of SB 1047’s features was that only relative capabilities were targeted (even more so before the limited duty exception was forcibly removed by opponents), whereas a regime where ‘small models are dangerous too’ is central to thinking will hold models and any attempt to ‘deploy’ them to absolute standards of capability by default, rather than asking whether they cause or materially enable something that couldn’t have otherwise been done, or asking whether your actions were reasonable.

Note how other bills didn’t say ‘take reasonable care to do or prevent X,’ they mostly said ‘do or prevent X’ full stop and often imposed ludicrous standards.

Nancy Pelosi is very much not on the same page as Gavin Newsom.

Nancy Pelosi: AI springs from California. Thank you, @CAgovernor Newsom, for recognizing the opportunity and responsibility we all share to enable small entrepreneurs and academia – not big tech – to dominate.

Arthur Conmy: Newsom: we also need to regulate small models and companies. Pelosi: thanks for not regulating small models and companies.

Miles Brundage: Lol – Newsom’s letter says it is *bad* that there’s a carveout for small models (which was intended as a proxy for small companies). Regardless of your views on the bill, CA Democrats do not seem to be trying particularly hard to coordinate + show there was some principle here.

Pelosi did not stop to actually parse Newsom’s statement. But that cannot surprise us, since she also did not stop to parse SB 1047, a bill that would not have impacted ‘small entrepreneurs’ or ‘academia’ in any way at all.

Whereas Newsom specifically called out the need to check ‘deployers’ of even small models for wrong use cases, which would be an existential threat to both groups.

Samuel Hammond: Instead of focusing on frontier models where the risk is greatest, Newsom wants a bill that covers *all* AI models, big and small.

Opponents of SB1047 will regret not accepting the narrow approach when they had the chance. This is what “safety isn’t a model property” gets you.

Having shot down the bill tailored to whistleblowers and catastrophic risk, California’s next attempt will no doubt be the SAG-AFTRA bill from hell.

Dean Ball: SB 1047 co-sponsor threatens next bill, this time with “new allies,” by which she means, basically, the people who are going to shut down American ports next week. (@hamandcheese isn’t wrong that worse bills are possible—we just need to be smarter, friendlier, and less cynical).

If that’s the way things go, as is reasonably likely, then you are going to wish, so badly, that you had instead helped steer us towards a compute-based and model-based regime that outright didn’t apply to you, that was actually well thought out and debated and refined in detail, back when you had the chance and the politics made that possible.

Then there’s the question of what happens if a catastrophic event did occur. In which case, things plausibly spin out of control rather quickly. Draconian restrictions could result. It is very much in the AI industry’s interest for such events to not happen.

That’s all independent of the central issue of actual existential risks, which this veto makes more likely.

I am saying, even if you don’t think the existential risks are that big a deal, that you should be very worried about Newsom’s statement, and where all of this is heading.

So if you are pushing the rhetoric of use-based regulation, I urge you to reconsider. And to try and steer things towards a regulatory focus on the model layer and compute thresholds, or the development of other new ideas that can serve similar purposes, ‘while you have the chance.’

None of this means Newsom couldn’t come around in the future.

There are scenarios where this could work out well next year. Here are some of them:

  1. Newsom’s political incentives could change, or we could make them change.

  2. In particular, the rising salience of AI, or particular AI incidents, could make it no longer worthwhile to care so much about certain particular interests.

  3. Also in particular, GPT-5 or another 5-level model could change everything.

  4. The people influencing Newsom could change their minds, especially when they see what the alternative regulatory regime starts shaping up to look like, and start regretting not being more careful what they wished for.

  5. Newsom could be genuinely misled or confused about how all of this works, including the wisdom of targeting the use layer versus the model layer, and then later come to understand it as he learns more and the situation changes.

  6. Newsom currently doesn’t seem to buy existential risk arguments. He might change his mind about that.

  7. Newsom could genuinely want a highly comprehensive bill, and work in good faith to get one and understand the issues for next session.

  8. There might have been other unique factors in play with this bill. Perhaps (for example, and as some have speculated) there were big political forces that quietly didn’t want to give Wiener a big win here. We can’t know.

Newsom clearly wants California to ‘lead’ on AI regulation, and pass various proactive bills in advance of anything going wrong. He is going to back and sign some bills, and those bills will be more impactful than the ones he signed this session. The question is, will they be good bills, sir?

Here is Scott Wiener’s statement on the veto. He’s not going anywhere.

Scott Wiener: This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet. The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing.

While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public.

This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way.

This veto is a missed opportunity for California to once again lead on innovative tech regulation – just as we did around data privacy and net neutrality – and we are all less safe as a result.

At the same time, the debate around SB 1047 has dramatically advanced the issue of AI safety on the international stage. Major AI labs were forced to get specific on the protections they can provide to the public through policy and oversight.

Leaders from across civil society, from Hollywood to women’s groups to youth activists, found their voice to advocate for commonsense, proactive technology safeguards to protect society from foreseeable risks. The work of this incredible coalition will continue to bear fruit as the international community contemplates the best ways to protect the public from the risks presented by AI.

California will continue to lead in that conversation – we are not going anywhere.

Here’s Dan Hendrycks.

Dan Hendrycks: Governor Gavin Newsom’s veto of SB 1047 is disappointing. This bill presented a reasonable path for protecting Californians and safeguarding the AI ecosystem, while encouraging innovation.

But I am not discouraged. The bill encouraged collaboration between industry, academics and lawmakers, and has begun moving the conversation about AI safety into the mainstream, where it belongs. AI developers are now more aware that they already have to exercise reasonable care lest they be found liable.

SB 1047 galvanized a wide-reaching bipartisan coalition of supporters, making clear that a regulatory approach that drives AI safety and innovation is not only possible, but lies on the immediate horizon.

Discourse and tactics around the bill from some in the industry have been disheartening. It is disgraceful that many opponents of SB 1047 trafficked in misinformation to undermine this bill, rather than engaging in a factual debate. SB 1047 has revealed that some industry calls for responsible AI are nothing more than PR aircover for their business and investment strategies. This is a key lesson as we continue to advocate for AI safety measures.

Dean Ball pointed out the detail that Newsom vetoed at a time designed to cause maximum distraction.

Dean Ball: Veto was obvious to everyone paying attention [after the comments about a chilling effect] (prediction markets were low-iq throughout, maybe not enough trading), and newsom probably timed it to be during the 49ers game (maximal public inattention).

Samuel Hammond: Why so cynical.

Dean Ball: Because politicians’ tactical behavior is different from our own strategic behavior.

Samuel Hammond: So it’s cynicism when you take Newsom’s call for a bill that applies to the entire industry at face value, but not when you armchair theorize that Newsom tactically vetoed SB1047 on a game night to keep the rubes distracted. 🤔

Why would Newsom want to make his veto as quiet as possible, especially if he wanted to dispel any possible ‘chilling effect’?

Because the bill was very popular, and he didn’t want people to know he vetoed it.

There were various people who chimed in to support SB 1047 in the last few days. I did not stop to note them. Nor did I note the latest disingenuous arguments trotted out by bill opponents. It’s moot now and it brings me great joy to now ignore all that.

We should note, however: For the record, yes, the bill was very popular. AIPI collaborated with Dean Ball to craft a more clearly neutral question wording, including randomizing argument order.

AIPI: Key findings remain consistent with our past polls:

– 62% support vs. 25% oppose SB1047

– 54% agree more with bill proponents vs. 28% with opponents

– Bipartisan support: 68% Democrats, 58% independents, 53% Republicans favor the bill.

Most striking is that these results closely mirror the previous AIPI poll results, which we now know were not substantially distorted by question wording. They previously found +39 support, 59%-20%. The new result is +37 support, 62%-25%, well within the margin of error versus the old results from AIPI.

The objection that this is a low-salience issue where voters haven’t thought about it and don’t much care is still highly valid. And you could reasonably claim, as Ball says explicitly, that voter preferences shouldn’t determine whether the bill is good or not.

We should look to do more of this adversarial collaborative polling in the future. We should also remember it when estimating the ‘house effect’ and ‘pollster rating’ of AIPI on such issues, and when we inevitably once again see claims that their wordings are horribly biased even when they seem clearly reasonable.

Also, this from Anthropic’s Jack Clark seems worth noting:

Jack Clark: While the final version of SB 1047 was not perfect, it was a promising first step towards mitigating potentially severe and far reaching risks associated with AI development.

We think the core of the bill – mandating developers produce meaningful security and safety policies about their most powerful AI systems, and ensuring some way of checking they’re following their own policies – is a prerequisite for building a large and thriving AI industry.

To get an AI industry that builds products everyone can depend on will require lots of people to work together to figure out the right rules of the road for AI systems – it is welcome news that Governor Newsom shares this view.

Anthropic will talk to people in industry, academia, government, and safety to find a consensus next year and do our part to ensure whatever policy we arrive at appropriately balances supporting innovation with averting catastrophic risks.

Jack Clark is engaging in diplomacy and acting like Newsom was doing something principled and means what he says in good faith. That is indeed the right move for Jack Clark in this spot.

I’m not Jack Clark.

Gavin Newsom did not do us the favor of vetoing during market hours. So we cannot point to the exact moment of the veto and measure the impact on various stocks, such as Nvidia, Google, Meta and Microsoft.

That would have been the best way to test the impact of SB 1047 on the AI industry. If SB 1047 were such a threat, those stocks would have gone up on news of the veto. If they didn’t go up, that would mean the veto wasn’t impactful.

There is the claim that the veto was obvious given Newsom’s previous comments, and thus priced in.

There are two obvious responses.

  1. There was a Polymarket (and Manifold) prediction market on the result, and they very much did not think the outcome was certain. Why did such folks not take the Free Money?

  2. When Gavin Newsom made those previous comments on September 17, SB 1047’s chances declined from 46% to 20% at Polymarket. Did the markets move? You absolutely could not tell, looking at stock price charts, that this was the day it happened. There were no substantial price movements at all.

Then, when the market opened on Monday the 30th, after the veto, again there was no major price movement. This is complicated by potential impact from Spruce Pine, and potential damage to our supply chains for quartz for semiconductors there, but it seems safe to say that nothing major happened here.

The combined market reaction, in particular the performance of Nvidia, is incompatible with SB 1047 having a substantial impact on the general ecosystem. You can in theory claim that Google and Microsoft benefit from a bill that exclusively puts restrictions on a handful of big companies. And you can claim Meta’s investors would actually be happy to have Zuckerberg think better of what they think is his open model folly. But any big drop in AI innovation and progress would hurt Nvidia.
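
For anyone who wants to eyeball this themselves, here is a minimal sketch of the event check described above, assuming the third-party yfinance package; the tickers and dates are the ones discussed in the text, and the code is my illustration rather than anything from the original analysis.

```python
# Rough event-study check: did AI-exposed stocks move on the key dates?
# Purely illustrative; assumes the third-party yfinance package is installed.
import yfinance as yf

tickers = ["NVDA", "GOOGL", "MSFT", "META"]

# Daily closing prices spanning September 17 (the Polymarket drop) and
# Monday, September 30, 2024 (the first trading day after the veto).
closes = yf.download(tickers, start="2024-09-13", end="2024-10-02")["Close"]
returns = closes.pct_change().dropna()

# One-day returns on the two dates of interest.
print(returns.loc["2024-09-17"])
print(returns.loc["2024-09-30"])
```

If the veto (or the earlier signal that a veto was likely) had materially changed expectations for the AI industry, you would expect visible jumps on those days rather than noise.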

If you think that this was not the right market test, what else would be a good test instead? What market provides a better indication?

The one that most caught my eye was his previous decision to sign AB 2013, requiring training data transparency. Starting on January 1, 2026, before making a new AI system or modification of an existing AI system publicly available for Californians to use, the developer or service shall post documentation regarding the data used to train the system. The bill is short, so here’s the part detailing what you have to post (a hypothetical sketch of what such a posting might look like follows the excerpt):

(a) A high-level summary of the datasets used in the development of the generative artificial intelligence system or service, including, but not limited to:

  1. The sources or owners of the datasets.

  2. A description of how the datasets further the intended purpose of the artificial intelligence system or service.

  3. The number of data points included in the datasets, which may be in general ranges, and with estimated figures for dynamic datasets.

  4. A description of the types of data points within the datasets. For purposes of this paragraph, the following definitions apply: (A) As applied to datasets that include labels, “types of data points” means the types of labels used. (B) As applied to datasets without labeling, “types of data points” refers to the general characteristics.

  5. Whether the datasets include any data protected by copyright, trademark, or patent, or whether the datasets are entirely in the public domain.

  6. Whether the datasets were purchased or licensed by the developer.

  7. Whether the datasets include personal information, as defined in subdivision (v) of Section 1798.140.

  8. Whether the datasets include aggregate consumer information, as defined in subdivision (b) of Section 1798.140.

  9. Whether there was any cleaning, processing, or other modification to the datasets by the developer, including the intended purpose of those efforts in relation to the artificial intelligence system or service.

  10. The time period during which the data in the datasets were collected, including a notice if the data collection is ongoing.

  11. The dates the datasets were first used during the development of the artificial intelligence system or service.

  12. Whether the generative artificial intelligence system or service used or continuously uses synthetic data generation in its development. A developer may include a description of the functional need or desired purpose of the synthetic data in relation to the intended purpose of the system or service.

(b) A developer shall not be required to post documentation regarding the data used to train a generative artificial intelligence system or service for any of the following:

  1. A generative artificial intelligence system or service whose sole purpose is to help ensure security and integrity. For purposes of this paragraph, “security and integrity” has the same meaning as defined in subdivision (ac) of Section 1798.140, except as applied to any developer or user and not limited to businesses, as defined in subdivision (d) of that section.

  2. A generative artificial intelligence system or service whose sole purpose is the operation of aircraft in the national airspace.

  3. A generative artificial intelligence system or service developed for national security, military, or defense purposes that is made available only to a federal entity.
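
For concreteness, here is a minimal, purely hypothetical sketch (as a plain Python data structure) of the kind of summary a developer might post to cover the enumerated items. Every field name and value below is my own invention for illustration; the bill mandates what must be disclosed, not any particular format.

```python
# Hypothetical AB 2013-style training data summary for an imaginary model.
# All field names and values are illustrative, not taken from the bill or
# from any real disclosure.
training_data_summary = {
    "dataset_sources_or_owners": ["public web crawl", "licensed news archive"],      # item 1
    "how_datasets_further_purpose": "broad text corpus for next-token prediction",    # item 2
    "number_of_data_points": "roughly 1-2 billion documents (dynamic; estimated)",    # item 3
    "types_of_data_points": "unlabeled natural-language text and source code",        # item 4
    "includes_copyrighted_material": True,                                             # item 5
    "purchased_or_licensed": "partially licensed, remainder scraped",                  # item 6
    "includes_personal_information": True,                                             # item 7
    "includes_aggregate_consumer_information": False,                                  # item 8
    "cleaning_or_processing": "deduplication and quality filtering before training",  # item 9
    "collection_period": "2016-2024, collection ongoing",                              # item 10
    "dates_first_used_in_development": "2024-03",                                      # item 11
    "synthetic_data_used": True,                                                       # item 12
}
```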

This is not the most valuable transparency we could get. In particular, you get the information on system release, not on system training, so once it is posted the damage will typically be largely done from an existential risk perspective.

However, this is potentially a huge problem.

In particular: You have to post ‘the sources or owners of the data sets’ and whether you had permission from the owners to use those data sets.

Right now, the AI companies use data sources they don’t have the rights to, and count on the ambiguity involved to protect them. If they have to admit (for example) ‘I scraped all of YouTube and I didn’t have permission’ then that makes it a lot easier to cause trouble in response. It also makes it a lot harder, in several senses, to justify not making such trouble, as failure to enforce copyright endangers that copyright, which is (AIUI, IANAL, etc.) why owners often feel compelled to sue when violations are a little too obvious and prominent, even if they are fine with a particular use.

The rest of it seems mostly harmless, for example I presume everyone is going to answer #2 with something only slightly less of a middle finger than ‘to help the system more accurately predict the next token’ and #9 with ‘Yes we cleaned the data, so that bad data wouldn’t corrupt the system.’

What is a ‘substantial modification’ of a system? If you fine-tune a system, does that count? My assumption would mostly be yes, and you mostly just mumble ‘synthetic data’ as per #12?

Everyone’s favorite regulatory question is, ‘what about open source’? The bill does not mention open source or open models at all, instead laying down rules everyone must follow if they want to make a model available in California. Putting something on the open internet for download makes it available in California. So any open model will need to be able to track and publish all this information, and anyone who modifies the system will have to do so as well, although they will have the original model’s published information to use as a baseline.

What else we got? We get a few bills that regularize definitions, I suppose. Sure.

Otherwise, mostly a grab bag of ‘tell us this is AI’ and various concerns about deepfakes and replicas.

  • AB 1008 by Assemblymember Rebecca Bauer-Kahan (D-Orinda) – Clarifies that personal information under the California Consumer Privacy Act (CCPA) can exist in various formats, including information stored by AI systems. (previously signed)

  • AB 1831 by Assemblymember Marc Berman (D-Menlo Park) – Expands the scope of existing child pornography statutes to include matter that is digitally altered or generated by the use of AI.

  • AB 1836 by Assemblymember Rebecca Bauer-Kahan (D-Orinda) – Prohibits a person from producing, distributing, or making available the digital replica of a deceased personality’s voice or likeness in an expressive audiovisual work or sound recording without prior consent, except as provided. (previously signed)

  • AB 2013 by Assemblymember Jacqui Irwin (D-Thousand Oaks) –  Requires AI developers to post information on the data used to train the AI system or service on their websites. (previously signed)

I covered that one above.

  • AB 2355 by Assemblymember Wendy Carrillo (D-Los Angeles) – Requires committees that create, publish, or distribute a political advertisement that contains any image, audio, or video that is generated or substantially altered using AI to include a disclosure in the advertisement disclosing that the content has been so altered. (previously signed)

  • AB 2602 by Assemblymember Ash Kalra (D-San Jose) – Provides that an agreement for the performance of personal or professional services which contains a provision allowing for the use of a digital replica of an individual’s voice or likeness is unenforceable if it does not include a reasonably specific description of the intended uses of the replica and the individual is not represented by legal counsel or by a labor union, as specified. (previously signed)

  • AB 2655 by Assemblymember Marc Berman (D-Menlo Park) – Requires large online platforms with at least one million California users to remove materially deceptive and digitally modified or created content related to elections, or to label that content, during specified periods before and after an election, if the content is reported to the platform. Provides for injunctive relief. (previously signed)

  • AB 2839 by Assemblymember Gail Pellerin (D-Santa Cruz) – Expands the timeframe in which a committee or other entity is prohibited from knowingly distributing an advertisement or other election material containing deceptive AI-generated or manipulated content from 60 days to 120 days, amongst other things. (previously signed)

  • AB 2876 by Assemblymember Marc Berman (D-Menlo Park) – Requires the Instructional Quality Commission (IQC) to consider including AI literacy in the mathematics, science, and history-social science curriculum frameworks and instructional materials.

  • AB 2885 by Assemblymember Rebecca Bauer-Kahan (D-Orinda) – Establishes a uniform definition for AI, or artificial intelligence, in California law. (previously signed)

  • AB 3030 by Assemblymember Lisa Calderon (D-Whittier) – Requires specified health care providers to disclose the use of GenAI when it is used to generate communications to a patient pertaining to patient clinical information. (previously signed)

  • SB 896 by Senator Bill Dodd (D-Napa) – Requires CDT to update report for the Governor as called for in Executive Order N-12-23, related to the procurement and use of GenAI by the state; requires OES to perform a risk analysis of potential threats posed by the use of GenAI to California’s critical infrastructure (w/high-level summary to Legislature); and requires that the use of GenAI for state communications be disclosed.

  • SB 926 by Senator Aisha Wahab (D-Silicon Valley) – Creates a new crime for a person to intentionally create and distribute any sexually explicit image of another identifiable person that was created in a manner that would cause a reasonable person to believe the image is an authentic image of the person depicted, under circumstances in which the person distributing the image knows or should know that distribution of the image will cause serious emotional distress, and the person depicted suffers that distress. (previously signed)

  • SB 942 by Senator Josh Becker (D-Menlo Park) – Requires the developers of covered GenAI systems to both include provenance disclosures in the original content their systems produce and make tools available to identify GenAI content produced by their systems. (previously signed)

  • SB 981 by Senator Aisha Wahab (D-Silicon Valley) – Requires social media platforms to establish a mechanism for reporting and removing “sexually explicit digital identity theft.” (previously signed)

  • SB 1120 by Senator Josh Becker (D-Menlo Park) – Establishes requirements on health plans and insurers applicable to their use of AI for utilization review and utilization management decisions, including that the use of AI, an algorithm, or other software must be based upon a patient’s medical or other clinical history and individual clinical circumstances as presented by the requesting provider, and must not supplant health care provider decision making. (previously signed)

  • SB 1288 by Senator Josh Becker (D-Menlo Park) – Requires the Superintendent of Public Instruction (SPI) to convene a working group for the purpose of exploring how artificial intelligence (AI) and other forms of similarly advanced technology are currently being used in education. (previously signed)

  • SB 1381 by Senator Aisha Wahab (D-Silicon Valley) – Expands the scope of existing child pornography statutes to include matter that is digitally altered or generated by the use of AI.

Wait till next year, as they say. This is far from over.

This raises the importance of maintaining the Biden Executive Order on AI. This at least gives us a minimal level of transparency into what is going on. If it were indeed repealed, as Trump has promised to do on day one, we would be relying for even a minimum of transparency only on voluntary commitments from top AI labs – commitments that Meta and other bad actors are unlikely to make and honor.

The ‘good’ news is that Gavin Newsom is clearly down for regulating AI.

The bad news is that he wants to do it in the wrong way, by imposing various requirements on those who deploy and use AI. That doesn’t protect us against the threats that matter most. Instead, it can only protect us against the mostly mundane harms that we can address over time as the situation changes.

And the cost of such an approach, in terms of innovation and mundane utility, risks being extremely high – exactly the ‘little guys’ and academics who were entirely exempt from SB 1047 would likely now be hit the hardest.

If we cannot do compute governance, and we cannot do model-level governance, then I do not see an alternative solution. I only see bad options, a choice between an EU-style regime and doing essentially nothing.

The stage is now potentially set for the worst possible outcomes.

There will be great temptation for AI notkilleveryoneism advocates to throw their lot in with the AI ethics and mundane harm crowds.

Rob Wiblin: Having failed to get up a narrow bill focused on frontier models, should AI x-risk folks join a popular front for an Omnibus AI Bill that includes SB1047 but adds regulations to tackle union concerns, actor concerns, disinformation, AI ethics, current safety, etc?

Dean Ball: The AI safety movement could easily transition from being a quirky, heterodox, “extremely online” movement to being just another generic left-wing cause. It could even work.

But I hope they do not. As I have written consistently, I believe that the AI safety movement, on the whole, is a long-term friend of anyone who wants to see positive technological transformation in the coming decades. Though they have their concerns about AI, in general this is a group that is pro-science, techno-optimist, anti-stagnation, and skeptical of massive state interventions in the economy (if I may be forgiven for speaking broadly about a diverse intellectual community).

It is legitimate to have serious concerns about the trajectory of AI: the goal is to make heretofore inanimate matter think. We should not take this endeavor lightly. We should contemplate potential future trajectories rather than focusing exclusively on what we can see with our eyes—even if that does not mean regulating the future preemptively. We should not assume that the AI transformation “goes well” by default. We should, however, question whether and to what extent the government’s involvement helps or hurts in making things “go well.”

I hope that we can work together, as a broadly techno-optimist community, toward some sort of consensus.

I am 110% with Dean Ball here.

Especially: The safety community that exists today, that is concerned with existential risks, really is mostly techno-optimists. This is a unique opportunity, while everyone on all sides is a techno-optimist, and also rather libertarian, to work together to find solutions that work. That window, where the techno-optimist non-safety community has a dancing partner that can and wants to actually dance with them, is going to close.

From the safety side’s perspective, when deciding who to work with going forward, one can make common cause with those whose concerns differ from one’s own. If others want stronger precautions against deepfakes, voice clones, copyright infringement, or other mundane AI harms than I think is ideal, or want those requests to be more central, there has to be room for compromise when doing politics, provided you also get what you need. One cannot always insist on a perfect bill.

What we must not do is exactly what so many people lied and said SB 1047 was doing – which is to back a destructive bill exactly because it is destructive. We need to continue to recognize that imposing costs is a cost, doing damage is damaging, destruction is to be avoided. Some costs may be necessary along the way, but the plan cannot be to destroy the village in order to save it.

Even if we successfully work together to have those who truly care about safety insist upon only backing sensible approaches, events may quickly be out of our hands. There are a lot more generic liberals, or generic conservatives, than there are heterodox deeply wonky people who care deeply about us all not dying and the path to accomplishing that.

There is the potential for those other crowds to end up writing such bills entirely without the existential risk mitigations and have that be how all of this works, especially if opposition forces continue to do their best to poison the well about the safety causes that matter and those who advocate to deal with them.

Alternatively, one could dream that now that Newsom’s concerns have been made clear, those concerned about existential risks might decide to come back with a much stronger bill that indeed does target everyone. That is what Newsom explicitly said he wants, maybe you call his bluff, maybe it turns out he isn’t fully bluffing. Maybe he is capable of recognizing a policy that would work, or those who would support such a policy. There are doubtless ways to use the tools and approaches Newsom is calling for to make us safer, but it isn’t going to be pretty, and those who opposed SB 1047 are really, really not going to like them.

Meanwhile, the public, in the USA and in California, really does not like AI, is broadly supportive of regulation, and that is not going to change.

Also it’s California, so there’s some chance this happens, seriously please don’t do it, nothing is so bad that you have to resort to a ballot proposition, choose life:

Daniel Eth: I’ll just leave this here (polling from AIPI a few days ago, follow up question on how people would vote in the next tweet):

Thus I reiterate the warning: SB 1047 was probably the most well-written, most well-considered and most light touch bill that we were ever going to get. Those who opposed it, and are now embracing the use-case regulatory path as an alternative thinking it will be better for industry and innovation, are going to regret that. If we don’t get back on the compute and frontier model based path, it’s going to get ugly.

There is still time to steer things back in a good direction. In theory, we might even be able to come back with a superior version of the model-based approach, if we all can work together to solve this problem before something far worse fills the void.

But we’ll need to work together, and we’ll need to move fast.

Newsom Vetoes SB 1047 Read More »

ceo-of-“health-care-terrorists”-sues-senators-after-contempt-of-congress-charges

CEO of “health care terrorists” sues senators after contempt of Congress charges

“not the way this works” —

Suing an entire Senate panel seen as a “Hail Mary play” unlikely to succeed.

The empty chair of Steward Health Care System Chief Executive Officer Dr. Ralph de la Torre, who did not show up for the US Senate Committee on Health, Education, Labor, & Pensions hearing, “Examining the Bankruptcy of Steward Health Care: How Management Decisions Have Impacted Patient Care.”

The infamous CEO of a failed hospital system is suing an entire Senate committee after being held in contempt of Congress, with civil and criminal charges unanimously approved by the full Senate last week.

In a federal lawsuit filed Monday, Steward CEO Ralph de la Torre claimed the senators “bulldozed over [his] constitutional rights” as they tried to “pillory and crucify him as a loathsome criminal” in a “televised circus.”

The Senate committee—the Committee on Health, Education, Labor, and Pensions (HELP), led by Bernie Sanders (I-Vt.)—issued a rare subpoena to de la Torre in July, compelling him to testify before the lawmakers. They sought to question the CEO on the deterioration of his hospital system, which previously included more than 30 hospitals across eight states. Steward filed for bankruptcy in May.

Imperiled patients

The committee alleges that de la Torre and Steward executives reaped millions in personal profits by hollowing out the health care facilities, even selling the land out from under them. The mismanagement left them so financially burdened that one doctor in a Steward-owned hospital in Louisiana said they were forced to perform “third-world medicine.” A lawmaker in that state who investigated the conditions at the hospital described Steward executives as “health care terrorists.”

Further, the financial strain on the hospitals is alleged to have led to the preventable deaths of 15 patients and put more than 2,000 other patients in “immediate peril.” As hospitals cut services, closed wards, or shuttered entirely, hundreds of health care workers were laid off, and communities were left without access to care. Nurses who remained in faltering facilities testified of harrowing conditions, including running out of basic supplies like beds. In one Massachusetts hospital, nurses were forced to place the remains of newborns in cardboard shipping boxes because Steward failed to pay a vendor for bereavement boxes.

Meanwhile, records indicate de la Torre and his companies were paid at least $250 million in recent years and he bought a 190-foot yacht for $40 million. Steward also owned two private jets collectively worth $95 million.

While de la Torre initially agreed to testify before the committee at the September 12 hearing, the wealthy CEO backed out the week beforehand. He claimed that a federal court order linked to the bankruptcy case prevented him from speaking on the matter; additionally, he invoked his Fifth Amendment right to avoid self-incrimination.

The HELP committee rejected de la Torre’s arguments, saying there were still relevant topics he could safely discuss without violating the order and that his Fifth Amendment rights did not permit him to refuse to appear before Congress when summoned by a subpoena. Still, the CEO was a no-show, and the Senate moved forward with the contempt charges.

“Not the way this works”

In the lawsuit filed today, de la Torre argues that the senators are attempting to punish him for invoking his Constitutional rights and that the hearing “was simply a device for the Committee to attack [him] and try to publicly humiliate and condemn him.”

The suit describes de la Torre as having a “distinguished career, bedecked by numerous accomplishments,” while accusing the senators of painting him as “a villain and scapegoat[ing] him for the company’s problems, even those caused by systemic deficiencies in Massachusetts’ health care system.” If he had appeared at the Congressional hearing, he would not have been able to defend himself from the personal attacks without being forced to abandon his Constitutional rights, the suit argues.

“Indeed, the Committee made it abundantly clear that they would put Dr. de la Torre’s invocation [of the Fifth Amendment] itself at the heart of their televised circus and paint him as guilty for the sin of remaining silent in the face of these assaults on his character and integrity,” the suit reads.

De la Torre seeks to have the federal court quash the Senate committee’s subpoena, enjoin both contempt charges, and declare that the Senate committee violated his Fifth Amendment rights.

Outside lawyers are skeptical that will occur. The lawsuit is a “Hail Mary play,” according to Stan M. Brand, an attorney who represented former Trump White House official Peter Navarro in a contempt of Congress case. De la Torre’s case “has very little chance of succeeding—I would say no chance of succeeding,” Brand told the Boston Globe.

“Every time that someone has tried to sue the House or Senate directly to challenge a congressional subpoena, the courts have said, ‘That’s not the way this works,’” Brand said.

CEO of “health care terrorists” sues senators after contempt of Congress charges Read More »

uber-beats-crash-victims’-attempt-to-try-case-in-court-instead-of-arbitration

Uber beats crash victims’ attempt to try case in court instead of arbitration


A married couple can’t sue Uber over severe injuries they suffered in a 2022 car accident because of a mandatory arbitration provision in the ride-sharing company’s terms of use, according to a ruling issued by the New Jersey Superior Court appellate division.

In November 2023, a lower court denied Uber’s motion to compel arbitration and dismiss the complaint filed by plaintiffs Georgia and John McGinty. But the lower-court ruling was reversed on September 20 in a unanimous decision by three appellate court judges.

Georgia McGinty had agreed to Uber’s arbitration clause long before the accident. But the couple challenged the terms in part because they say their minor daughter, then 12, was the one who clicked the most recent terms agreement when the girl ordered food through Uber Eats. Those newer terms were also allegedly less specific about users waiving the right to a jury trial.

The September 20 ruling says:

Uber’s digital records show that on January 8, 2022, Georgia logged into her Uber account using her password, checked the box next to the statement “I have reviewed and agree to the Terms of Use,” and pressed “Confirm.” In their motion opposition, plaintiffs asserted that it was not Georgia but rather their minor daughter who checked that box and clicked the “Confirm” button—even though it required attesting to Uber that she was at least eighteen years old. Plaintiffs claim that their daughter, while using Georgia’s phone and with Georgia’s permission, confirmed her agreement to the December [2021] Terms before ordering food for plaintiffs to be delivered to them through Uber Eats.

The December Terms to which Georgia agreed—either by herself or through her daughter using her Uber account—contain an arbitration provision. That agreement provides disputes that may arise between Georgia and Uber, including disputes concerning auto accidents or personal injuries, will be resolved through binding arbitration “and not in a court of law.” The agreement also provides that any disputes over arbitrability would be delegated to the arbitrator.

“We hold that the arbitration provision contained in the agreement under review, which Georgia or her minor daughter, while using her cell phone agreed to, is valid and enforceable,” judges wrote.

Lower court said Uber terms were too vague

The case came to the appellate court on appeal from the Superior Court of New Jersey, Law Division, Middlesex County. The lower court found that Uber’s updated terms “fail[ed] to clearly and unambiguously inform plaintiff of her waiver of the right to pursue her claims in a judicial forum,” making it unclear that “arbitration is a substitute for the right to seek relief in our court system.”

While an earlier version of Uber’s terms contained an express jury waiver provision, the newer version did not. The lower court held that the newer agreement “lacks any specificity on what the resolution would look like or what the alternative to such resolution might be.”

Uber argued that even if the newer terms are invalid, the earlier terms would still require arbitration of the dispute, and that Georgia McGinty can’t escape her agreement with Uber by claiming that her daughter agreed to the newer terms on her behalf.

Despite the newer agreement not using the word “jury,” the appellate court said that legal precedent “does not require specific jury trial language to accomplish a waiver of rights.” Judges said the Uber provision requiring disputes to be handled in arbitration “and not in a court of law… clearly and unambiguously evidences a waiver of plaintiffs’ right to pursue any claims against Uber in a court of law and obligates plaintiffs to resolve their claims through binding arbitration.”

“While ‘jury’ is no longer explicitly used in the updated December Terms, magic words are not required for enforceability and the clause clearly intimates that disputes are resolved through arbitration,” the court said.

The question of whether the couple’s daughter was capable of agreeing to the terms must be decided by an arbitrator, according to the ruling:

Georgia certified that her daughter was “capable,” would frequently order food, and she and John were preoccupied with packing, which supports the inference that the daughter acted knowingly on Georgia’s behalf. In summary, the Arbitration Agreement is valid and delegates the threshold question of the scope of the arbitration to the arbitrator. Therefore, Georgia’s reliance on her daughter’s minority to raise an infancy defense shall be determined by the arbitrator.

Uber beats crash victims’ attempt to try case in court instead of arbitration Read More »