Author name: Shannon Garcia

Boston Dynamics’ Atlas tries out inventory work, gets better at lifting

also better at stumbling —

Atlas learns to pick up a 30-lb car strut and carefully manipulate it.

Image captions (all images: Boston Dynamics):

  • Boston Dynamics’ Atlas research robot.

  • Atlas’ new spindly, double-jointed fingers are capable but a bit creepy.

  • Atlas’ old hands were rudimentary clamps, and look at all the damage they did to this plank of wood. It was just crushing things.

  • More finger movement.

  • From the robot’s point of view. The video overlays the real world with 3D models of Atlas’ hands and the object.

  • Atlas has to first balance the strut on a shelf, then it can slide it into place.

  • You can see all the work that goes into this lift: recognize the object, wrap one hand around it, pull it out enough to balance it on the edge of the container, wrap your other hand around it, then torque your upper body to rotate the strut into position.

The world’s most advanced humanoid robot, Boston Dynamics’ Atlas, is back, and it’s moving some medium-weight car parts. While the robot has mastered a lot of bipedal tricks like walking, running, jumping, and even backflips, it’s still in the early days of picking stuff up. When we last saw the robot, it had sprouted a set of rudimentary hand clamps and was using those to carry heavy objects like a toolbox, barbells, and a plank of wood. The new focus seems to be on “kinetically challenging” work—these things are heavy enough to mess with the robot’s balance, so picking them up, carrying them, and putting them down requires all sorts of additional calculations and planning so the robot doesn’t fall over.

In the latest video, we’re on to what looks like “phase 2” of picking stuff up—being more precise about it. The old clamp hands had a single pivot at the palm and seemed to just apply the maximum grip strength to anything the robot picked up. The most delicate thing Atlas picked up in the last video was a wooden plank, and it was absolutely destroying the wood. Atlas’ new hands look a lot more gentle than The Clamps, with each sporting a set of three fingers with two joints. All the fingers share one big pivot point at the palm of the hand, and there’s a knuckle joint halfway up the finger. The fingers are all very long and have 360 degrees of motion, so they can flex in both directions, which is probably effective but very creepy. Put two fingers on one side of an item and the “thumb” on the other, and Atlas can wrap its hands around objects instead of just crushing them.

Sadly, all we’re getting is this blurry one-minute video with no explanation as to what’s going on.

Atlas is picking up a set of car struts—an object with extremely complicated topography that weighs around 30 pounds—so there’s a lot to calculate. Atlas does a heavy two-handed lift of a strut from a vertical position on a pallet, walks it over to a shelf, and carefully slides it into place. This is all in Boston Dynamics’ lab, but it’s close to repetitive factory or shipping work. Everything here seems designed to give the robot a manipulation challenge. The complicated shape of the strut means there are a million ways you could grip it incorrectly. The strut box has tall metal poles around it, so the robot needs to not bang the strut into the obstacle. The shelf is a tight fit, so the strut has to be placed on the edge of the shelf and slid into place, all while making sure the strut’s many protrusions won’t crash into the shelf.

One limitation here is that at least some of the smarts in the video are pre-calculated—at one point, we see what looks like Atlas’ vision processing, and it has a perfect 3D scan of the car strut ready to go. So this is either attempt number 5,000, and it has already seen the strut from all angles, or Atlas was pre-programmed with topographical data for this exact model car strut. Either way, for all the lifts in the video, Atlas is saved from trying to figure out the shape of the object in real time. Atlas has a lidar sensor on its face and can generate a point cloud of what it’s looking at, so it just needs to line up the pre-baked model with the point cloud, and it has perfect knowledge of the strut topography. A harder level of difficulty would be picking up an object Atlas has never seen before, but you’ve got to break down the challenges into smaller parts and start somewhere.
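
For readers curious what “lining up the pre-baked model with the point cloud” typically involves, below is a minimal, purely illustrative sketch of iterative closest point (ICP) registration, the textbook approach to aligning a known 3D model with a live scan. This is an assumption for illustration only, not Boston Dynamics’ actual pipeline; the point arrays, iteration count, and the use of NumPy/SciPy are all made up for the example.

    # Hypothetical sketch: align a pre-scanned model to a lidar point cloud with ICP.
    # Not Boston Dynamics' code; arrays and parameters are illustrative placeholders.
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(model, scan, iters=50):
        """Estimate rotation R and translation t mapping `model` (Nx3) onto `scan` (Mx3)."""
        R, t = np.eye(3), np.zeros(3)
        tree = cKDTree(scan)  # nearest-neighbor lookups into the lidar cloud
        for _ in range(iters):
            moved = model @ R.T + t
            _, idx = tree.query(moved)   # pair each model point with its closest scan point
            matched = scan[idx]
            # Best-fit rigid transform for the current pairing (Kabsch / SVD).
            mu_m, mu_s = moved.mean(axis=0), matched.mean(axis=0)
            H = (moved - mu_m).T @ (matched - mu_s)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t_step = mu_s - R_step @ mu_m
            # Fold the incremental transform into the running estimate.
            R, t = R_step @ R, R_step @ t + t_step
        return R, t

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        model = rng.normal(size=(500, 3))      # stand-in for the pre-baked strut model
        angle = np.deg2rad(10)
        true_R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                           [np.sin(angle),  np.cos(angle), 0.0],
                           [0.0, 0.0, 1.0]])
        scan = model @ true_R.T + np.array([0.05, -0.02, 0.03])  # stand-in lidar cloud
        R, t = icp(model, scan)
        aligned = model @ R.T + t
        print("mean residual:", np.linalg.norm(aligned - scan, axis=1).mean())

The point of the sketch is only that once the model and scan are roughly overlapping, a few rounds of nearest-neighbor matching plus a least-squares fit pin down the object’s pose, which is why having a pre-baked 3D model makes the manipulation problem so much easier.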

When Atlas picks up a strut, it has to walk around a pallet, and as always, the robot shines when it comes to bipedal movement. The simpler way to move around the pallet would be a set of straight-line walking paths with pivots in between. Atlas’ path-planning is way more complicated, though, and involves more advanced side-step moves, leaning into turns, and just dynamically stumbling around the pallet any way it can. This version of Atlas moves less like a robot and more like a drunk person, which is a big compliment. At one point, it even stumbles and recovers, drawing an excited reaction from onlookers in the background.


Data broker allegedly selling de-anonymized info to face FTC lawsuit after all

The Federal Trade Commission has succeeded in keeping alive its first federal court case against a geolocation data broker that’s allegedly unfairly selling large quantities of data in violation of the FTC Act.

On Saturday, US District Judge Lynn Winmill denied Kochava’s motion to dismiss an amended FTC complaint, which he said plausibly argued that “Kochava’s data sales invade consumers’ privacy and expose them to risks of secondary harms by third parties.”

Winmill’s ruling reversed a dismissal of the FTC’s initial complaint, which the court previously said failed to adequately allege that Kochava’s data sales cause or are likely to cause a “substantial” injury to consumers.

The FTC has accused Kochava of selling “a substantial amount of data obtained from millions of mobile devices across the world”—allegedly combining precise geolocation data with a “staggering amount of sensitive and identifying information” without users’ knowledge or informed consent. This data, the FTC alleged, “is not anonymized and is linked or easily linkable to individual consumers” without mining “other sources of data.”

Kochava’s data sales allegedly allow its customers—whom the FTC noted often pay tens of thousands of dollars monthly—to target specific individuals by combining Kochava data sets. Using just Kochava data, marketers can create “highly granular” portraits of ad targets such as “a woman who visits a particular building, the woman’s name, email address, and home address, and whether the woman is African-American, a parent (and if so, how many children), or has an app identifying symptoms of cancer on her phone.” Just one of Kochava’s databases “contains ‘comprehensive profiles of individual consumers,’ with up to ‘300 data points’ for ‘over 300 million unique individuals,'” the FTC reported.

This harms consumers, the FTC alleged, in “two distinct ways”—by invading their privacy and by causing “an increased risk of suffering secondary harms, such as stigma, discrimination, physical violence, and emotional distress.”

In its amended complaint, the FTC overcame deficiencies in its initial complaint by citing specific examples of consumers already known to have been harmed by brokers sharing sensitive data without their consent. That included a Catholic priest who resigned after he was outed by a group using precise mobile geolocation data to track his personal use of Grindr and his movements to “LGBTQ+-associated locations.” The FTC also pointed to invasive practices by journalists using precise mobile geolocation data to identify and track military and law enforcement officers over time, as well as data brokers tracking “abortion-minded women” who visited reproductive health clinics to target them with ads about abortion and alternatives to abortion.

“Kochava’s practices intrude into the most private areas of consumers’ lives and cause or are likely to cause substantial injury to consumers,” the FTC’s amended complaint said.

The FTC is seeking a permanent injunction to stop Kochava from allegedly selling sensitive data without user consent.

Kochava characterizes the examples of consumer harms in the FTC’s amended complaint as “anecdotes” disconnected from its own activities. The data broker was seemingly so confident that Winmill would agree to dismiss the FTC’s amended complaint that the company sought sanctions against the FTC for what it construed as a “baseless” filing. According to Kochava, many of the FTC’s allegations were “knowingly false.”

Ultimately, the court found no evidence that the FTC’s complaints were baseless. Instead of dismissing the case and ordering the FTC to pay sanctions, Winmill wrote in his order that Kochava’s motion to dismiss “misses the point” of the FTC’s filing, which was to allege that Kochava’s data sales are “likely” to cause alleged harms. Because the FTC had “significantly” expanded factual allegations, the agency “easily” satisfied the plausibility standard to allege substantial harms were likely, Winmill said.

Kochava CEO and founder Charles Manning said in a statement provided to Ars that Kochava “expected” Winmill’s ruling and is “confident” that Kochava “will prevail on the merits.”

“This case is really about the FTC attempting to make an end-run around Congress to create data privacy law,” Manning said. “The FTC’s salacious hypotheticals in its amended complaint are mere scare tactics. Kochava has always operated consistently and proactively in compliance with all rules and laws, including those specific to privacy.”

In a press release announcing the FTC lawsuit in 2022, the director of the FTC’s Bureau of Consumer Protection, Samuel Levine, said that the FTC was determined to halt Kochava’s allegedly harmful data sales.

“Where consumers seek out health care, receive counseling, or celebrate their faith is private information that shouldn’t be sold to the highest bidder,” Levine said. “The FTC is taking Kochava to court to protect people’s privacy and halt the sale of their sensitive geolocation information.”


Virgin Galactic and the FAA are investigating a dropped pin on last spaceflight

Rapid Unscheduled Dropped Pin —

“The FAA is overseeing the Virgin Galactic-led mishap investigation.”

White Knight Two carries the first SpaceShipTwo during a glide test.

Virgin Galactic

Virgin Galactic reported an anomaly on its most recent flight, Galactic 06, which took place 12 days ago from a spaceport in New Mexico.

In a statement released Monday, the company said it discovered a dropped pin during a post-flight review of the mission, which carried two pilots and four passengers to an altitude of 55.1 miles (88.7 km).

This alignment pin, according to Virgin Galactic, helps ensure the VSS Unity spaceship is aligned to its carrier aircraft when mating the vehicles on the ground during pre-flight procedures. The company said the alignment pin and a shear pin fitting assembly performed as designed during the mated portion of the flight, and only the alignment pin detached after the spaceship was released from the mothership.

“During mated flight, as the vehicles climb towards release altitude, the alignment pin helps transfer drag and other forces from the spaceship to the shear pin fitting assembly and into the pylon and center wing of the mothership,” the statement said. “The shear pin fitting assembly remained both attached and intact on the mothership with no damage. While both parts play a role during mated flight, they do not support the spaceship’s weight, nor do they have an active function once the spaceship is released.”

At no time, the company said, did the detached pin pose a safety threat to the spacecraft or the carrier aircraft. Additionally, as the flight occurred in restricted air space, the dropped pin did not threaten people or property on the ground.

The FAA gets involved

Virgin Galactic said it reported the anomaly to the Federal Aviation Administration (FAA) on January 31.

On Tuesday, the FAA confirmed that no damage to public property or injuries resulted from the mishap. “The FAA is overseeing the Virgin Galactic-led mishap investigation to ensure the company complies with its FAA-approved mishap investigation plan and other regulatory requirements,” the federal agency said in a statement.

Before VSS Unity can return to flight, the FAA must approve Virgin Galactic’s final report, including corrective actions to prevent a similar problem in the future.

The problem comes as Virgin Galactic plans to wind down its flight campaign with the VSS Unity and intends to move to its next-generation version of the spacecraft. These so-called “Delta-class” spaceships remain in development and are likely a couple of years away from making commercial flights.

VSS Unity has completed 11 spaceflights to date, reaching an impressive monthly cadence last year. However, the company is planning only one more mission before retiring the vehicle. This decision came as something of a surprise because the company’s president told Ars last August that the Unity airframe was capable of 500 to 1,000 flights.

This Galactic 07 mission, whose passengers and flight crew have yet to be announced, was scheduled for the second quarter of 2024. In its statement this week, Virgin Galactic said it remained committed to flying that mission.


On the Debate Between Jezos and Leahy

Previously: Based Beff Jezos and the Accelerationists

Based Beff Jezos, the founder of effective accelerationism, delivered on his previous pledge, and did indeed debate what is to be done to navigate into the future with a highly Worthy Opponent in Connor Leahy.

The moderator almost entirely stayed out of it, and intervened well when he did, so this was a highly fair arena. It’s Jezos versus Leahy. Let’s get ready to rumble!

I wanted to be sure I got the arguments right and fully stated my responses and refutations, so I took extensive notes including timestamps. On theme for this debate, this is a situation where you either do that while listening, or once you have already listened you are in practice never going to go back.

That does not mean you have to read all those notes and arguments. It is certainly an option, I found it interesting and worthwhile to study everything, if only sometimes on the level of an anthropologist, and to be sure I had gone the extra mile and had not missed anything.

There is however another option. Before I give my detailed notes, I will attempt a summary of the important takeaways from the debate, and attempt to build a model of what Jezos and Leahy were claiming and advocating. You can then check the transcript and notes for more details as desired.

Or you can Read the Whole Thing. If you do, I recommend skipping over the summary until after you have read the details, to see if your overall impressions match my own.

We were introduced in this debate to a character one could call Actually Based Beff Jezos (Analytical? Academic? Antihero? Apprehensive? Aligned?), or the Good Jezos, or the Motte Jezos, or the Reasonable Jezos.

Sometimes in this debate he is the one talking. Sometimes he is not.

I like Actually Based Beff Jezos. We still very much have some issues, I think he is still importantly wrong about some very central things, but this would be someone I could be happy to work with or seek truth alongside in various ways. Actually Based Beff Jezos talks price.

Actually Based Beff Jezos starts from a bunch of positions that he takes farther than I would, but where I am much closer to his position than I am to the mainstream position on these questions:

  1. He is a softspoken physicist and libertarian.

  2. He has a deeply justified skepticism of government and institutions.

  3. He is correctly wary of regulations that in practice will never get reexamined.

  4. He is correctly wary of regulatory capture and its inevitability over time.

  5. He understands the long term benefits of sustained economic growth.

  6. He understands the long term benefits of free markets.

  7. He understands the dangers of the Socialist Calculation Debate.

  8. He sees the real almost limitless upside potential for development of AI.

  9. He sees the real catastrophic downside if AI was used to enable tyranny.

  10. He calls upon us to fix our institutions so we have a functioning civilization. He calls for more competition between and renewal of institutions.

  11. He by default favors actual real competition over appearing to be nice.

He also acknowledges these (and other) things that I believe to be true:

  1. We should choose actions based on what method produces the best outcomes.

  2. Humans and civilization are good and we should care about them.

  3. People are allowed to have different terminal values.

  4. Some technologies are net negative in their consequences to the point where it is good to restrict or ban them.

  5. The optimal amount of regulation and hierarchical organization and power in general is importantly not zero.

  6. He favors anti-monopoly legislation, other regulations too if they favor growth.

  7. Decentralized systems are often impossible to steer and out of our control.

  8. A fully decentralized system is not going to happen.

  9. AI is moving super fast and we do not know where it is headed.

  10. That those doing maximum growth at any cost will wipe out those not doing it.

  11. YIMBY. You love to see it.

  12. Covid-19 was probably a lab leak. What conclusions one draws can vary.

  13. The odds are against us and the situation is grim.

  14. We need a balance between order and disorder, not max entropy.

  15. e/acc is effectively treating Physics as their (primary or only) God.

This is not a complete list. But it is clear that in terms of our models of how the world works, we are actually Not So Different, and Connor observes this as well. All three of our models are remarkably similar, although with very impactful and important differences that change key decisions.

Actually Based Beff Jezos (ABBJ) also poses some excellent questions to Connor, many of which lack good answers.

We were also introduced in this debate to a character one could call Biased (Bailey? Borderline? Balanced? Biased? Blunt? Brash? Baiting? Bro?) Based Beff Jezos, or the Neutral Jezos, or the Civil Jezos.

I like this person a lot less than Actually Based Beff Jezos, especially given his tendency to conflate himself with ABBJ. I wish he would pick a side and fully own or disown each of his positions, would more often talk price, not get too carried away with the physics metaphors and so on.

He is still, however, someone who I would welcome into a civil discussion. It would be great if this was the Based Beff Jezos that was driving the e/acc movement, and more people in such circles acted like this. I could work with that. Alas.

A term used often in the discussion by both participants was ‘bait and switch,’ as opposed to the old motte and bailey.

This was happening a lot. Jezos would claim something very reasonable one minute, then switch to making a far stronger unreasonable (or at least, I believe, false) claim the next, and often switch back and forth several times.

In several cases he says close parallels of ‘all we are saying is’ or ‘all we are asking for is’ when this is very clearly not the case if you expand your search even a few minutes.

Then there is a combination of assertions from BBBJ, and also some assertions that still might be thought to come from ABBJ, where the whole operation goes off the rails.

This list is highly incomplete, but here are the key places where I feel BBJ went off the rails (or at least was saying that which is not) in this debate with his assertions, and which version of him was saying what at the time, apologies for the overlap but I want to be sure I hammer these home properly:

  1. Both BBJs fundamentally seem to be failing to make the is/ought distinction. This is the naturalistic fallacy. In this case, the nature in question is physics itself.

  2. Thus, many times, he conflates ‘the laws of physics’ or ‘what physics wants or rewards’ or similar concepts to what we should value and work towards.

  3. Translating into my terminology because I think it is better here: BBBJ does not believe there is ever any way to fight against Moloch, to cooperate in game theory problems and find a superior equilibrium. If a strategy is available that would if allowed to be used triumph over time, there is no fighting it without losing. That humans and their organizations will always be maximally greedy.

  4. BBBJ asserts that you must only want what ‘physics wants’ in this sense, similarly to how a Christian thinks you must let Jesus into your heart, whereas ABBJ thinks you get to choose your terminal values.

  5. BBBJ actually seems to be completely fine with scenarios that wipe out humanity. ABBJ is not fine with that, although he is remarkably willing to take that risk, and is totally fine with human loss of control over the future. Seeks it, even.

  6. ABBJ says we should look to maximize the growth of free energy over time. BBBJ says this absolutely, with no other considerations, no matter the consequences; ABBJ will admit that there are exceptions in theory but will deny that one can exist in practice, and find ways to say that anything he supports is also pro-growth. I believe this is based on two mistakes. One, Jezos is confusing is/ought, and two, he is confusing the metric with the thing he wants to measure.

  7. When confronted with examples where the tails come apart and maximizing free energy would very clearly not maximize the things Jezos actually cares about, Jezos says such scenarios are unrealistic and would not come about, while maintaining ambiguity over whether he is fully confusing is/ought and actually thinks the free energy is a terminal value or not.

  8. Both ABBJ and BBBJ often take physics concepts and laws, and try to apply them directly to social dynamics and other decisions as if they were still working as strict laws, where instead I would say they can only offer intuition pumps and some evidence.

  9. ABBJ wants to reform our institutions, BBBJ says this is hopeless.

  10. BBBJ asserts that any institution that humans control, or any control mechanism at all, will over time fall to malintent, and therefore humans cannot be allowed any control over our future, only full decentralization and uncontrolled competition is possible. ABBJ says we need hierarchical control systems that balance order and disorder.

  11. BBBJ’s fear of regulatory capture and regulatory ramp-up and inertia is absolute and you never do it for any reason, whereas ABBJ says ‘all we want is to take careful consideration before acting’ and also to ‘wait for a stable situation’ that he does not believe will ever arise.

  12. BBBJ believes that regulations and grants of authority are fully one way doors and the primary way that humans lose control over the future. Both BBJs treat those in power as always becoming over time alien beings that do not value what we value even more than the way I think of future AIs as not valuing what we value, and just as capable of asserting permanent control if AI is involved.

  13. ABBJ believes that it is possible for humans to augment our intelligence sufficiently to ‘make us more of a player at the big boy table’ after ASI is on the scene, that we would still meaningfully intellectually contribute and matter. He believes that human economic activity will shrink as a portion of overall activity once ASI is available to all and roaming free to compete, but that our absolute portion will not shrink, and that balance of power and ‘AI mercenaries’ and other adversarial dynamics will keep us safe. BBBJ is asserting that these scenarios will work out and be great, whereas ABBJ is asserting that the alternatives are even worse and that the upside is so high we must roll the dice.

  14. Jezos asserts that the push for regulations and safety are driven by the interests of a few Big Tech companies (presumably OpenAI/Anthropic/Google) looking for regulatory capture as their primary or sole motivation, that these are the people writing the laws. The safety movement in general, and EA, are in cahoots, not genuine, acting in bad faith. Connor did not challenge this. I believe this to be mostly false.

  15. Jezos also said the recent incident at OpenAI was a ‘decapitation attempt’ by safety advocates, which he might or might not believe himself, but which we know was simply not the case, as I have written. Again Connor did not challenge.

A good summary of key points that I felt were being claimed might be:

  1. No is/ought distinction, or at most a highly confused one.

  2. Competition and maximum growth are inevitable, you can’t fight it.

  3. Maximizing growth and free energy is The Way to maximize utility.

  4. Regulations are essentially never taken back, always captured.

  5. Humans having control over the future at all inevitably means tyranny.

  6. AIs having control over the future, or humans losing control over it, on the other hand, could work out fine.

  7. AIs will never render humans economically uncompetitive and incapable of survival, even under this intense competition.

  8. Many (most? all?) important opponents of this agenda are in bad faith.

Again, none of these lists are complete even based only on the debate. While there were a lot of things that were said several times, there really is a lot going on here.

We must also take note of the third face, the one we see on Twitter, the Combative (Combative? Condescending? Careless? Cruel? Core? Copious? Callous? Crazy? Certifiable?), or the Evil Jezos, the Warring Jezos, the Alter Ego.

That guy is, to put it exceedingly generously, a lying, trolling, raging a.

That person did not show up to the debate. He does, however, continuously show up on Twitter.

His thesis is something like:

  1. All hail the thermodynamic God, growth, free energy. The humans likely die.

  2. Accelerate. No, more than that. No, more than that.

  3. Any restriction of any kind, anything holding back technology of any kind, evil.

  4. In particular, the government should never lift a finger to interfere in any way.

  5. Our vibe is the superior vibe. We are therefore good, if you oppose us you are evil.

  6. Technology cannot be stopped and also you evil bastards might stop it.

  7. Accuse those opposed to you of anything and everything including corruption.

  8. Be as rude and condescending and vile as possible. Meme hard. It helps.

  9. Claim credit for everything and always say that you are winning. We always win.

  10. If you agree with all this directionally, put e/acc in your bio. Do as I do.

Using this strategy, he has, as I noted previously, assembled a motley crew of malcontents willing to indeed put the label in their bio and lend their support, with a broad coalition of reasons for doing so. From my previous post:

E/acc has successfully raised its voice to such high decibel levels by combining several mutually exclusive positions into one label in the name of hating on the supposed other side:

  1. Those like Beff Jezos, who think human extinction is an acceptable outcome.

  2. Those who think that technology always works out for the best, that superintelligence will therefore be good for humans.

  3. Those who do not actually believe in the reality of a future AGI or ASI, so all we are doing is building cool tools that provide mundane utility, let’s do that.

  4. Related to previous: Those who think that the wrong human having power over other humans is the thing we need to worry about.

    1. More specifically: Those who think that any alternative to ultimately building AGI/ASI means a tyranny or dystopia, or is impossible, so they’d rather build as fast as possible and hope for the best.

    2. Or: Those who think that even any attempt to steer or slow such building, or sometimes even any regulatory restrictions on building AI at all, would constitute a tyranny or dystopia so bad that any alternative path is instead better.

    3. Or: Those who simply don’t think smarter-than-human, more-capable-than-human intelligences would be the ones holding the power; the humans would stay in control, so what matters is which humans that is.

  5. Those who think that the alternative is stagnation and decline, so even some chance of success justifies going fast.

  6. Those who think AGI or ASI is not close, so let’s worry about that later.

  7. Those who want to, within their cultural context, side with power.

  8. Those who like being an edge lord on Twitter.

  9. Those who personally want to live forever, and see this as their shot.

  10. Those deciding based on vibes and priors, that tech is good, regulation bad.

The degree of reasonableness varies greatly between these positions.

I believe that the majority of those adopting the e/acc label are taking one of the more reasonable positions. If this is you, I would urge you, rather than embracing the e/acc label, to instead, if desired, state your particular reasonable position and your reasons for it, without conflating it with other contradictory and less reasonable positions.

Or, if you actually do believe, like Beff Jezos, in a completely different set of values from mine? Then Please Speak Directly Into This Microphone.

So which Based Beff Jezos is real in which senses? We cannot know that, nor can we know that about his followers. We do know how they behave in public.

The good news, again, is that this character did not show up for the debate. Which resulted in a less eventful and dramatic discussion, but a more fruitful one.

Connor Leahy takes a consistent, straightforward (and, in relative terms, extreme) position on the issues of technological development and AGI.

  1. He believes that if we continue business as usual, with no regulation, slowdown or drastic improvement in our safety efforts, we are highly doomed. In his model it would take a century of ASI safety research to feel highly confident in safety.

  2. He believes this is bad, yo, so we should not do this.

  3. He is willing to pay a large price in mundane utility not created, and in top down control if necessary, under these circumstances.

  4. He also believes that AGI and ASI will become possible very soon, his timelines are very short, his thresholds for when systems could be existentially dangerous are very low.

  5. He believes that we should work to improve our institutions and civilization. He agrees with broadly libertarian instincts in general but wants to proceed in spite of that, because we have no alternatives.

  6. His primary ask is liability for AI developers and deployers, but he would consider a hard compute limit for frontier models as good or better. If able to choose, he wants to set it lower than I would set it, at between 10^23 and 10^25, whereas I am glad they set the reporting threshold (and it is only a reporting threshold) at 10^26.

  7. He often emphasizes that we should not have a system where whether AGI goes well or not depends on whether the CEO of a tech company is nice or not, and that currently we have exactly such a system, and that most of the actors within the current system are apathetic and compliant where they need to step up if we are to do what is needed.

  8. He does not think AGI is the only technology for which much of this applies.

  9. He does not believe in mincing words or holding back. At all.

That should set the stage. The debate is focused around what Jezos thinks rather than what Leahy thinks, which I found to be the more useful approach.

They do extensive highlights before starting, so skip to about 8: 30 to start.

  1. Connor opens positively, then cuts right to the issue of to what extent e/acc people mean what they say. What is a metaphor or vibe? What is meant literally? Is there an intentional conflation of the two? My take is that they usually do not make a distinction here, and do not typically know themselves how seriously or literally they mean any given statement, it indeed flips as is useful.

  2. (10: 00) Connor’s first question: Is there any technology at all we should ban entirely? Jezos asks whether it has ever happened, claiming it isn’t enforceable. My example of it happening would be genetic engineering and cloning via the Asilomar Conference. Connor says, okay, suppose it could be enforced. Jezos then responds that even when a technology is itself bad like nuclear fission, it can lead to positive things like nuclear fusion, and we have already closed off so many paths (so banning technology is both impossible and already happening?). He says we would have benefited from far less regulation of nuclear fission.

    1. I agree we should be very pro-nuclear power, but this seems confused to me. The reason we should have less regulation of nuclear fission is that it has a very important, very positive use case, which is nuclear (fission) power. We need more of that. Whereas I do not think that allowing open access to uranium or letting any person or nation who wants to build a nuke do so would be a wise policy, and if anything we underinvested in preventing nuclear proliferation.

    2. I do think the point about fission leading to fusion, and not wanting to cut off the tech tree, is a good one. Cutting off tech means likely delaying or potentially preventing future tech. However we do not want to assume the conclusion that the expected future technologies that result will be good. Fusion is good if it is used in power plants, bad if it is used in bombs, it is a physics question whether we can get one without the other, for which I am optimistic. In most cases I do think this is a strong anti-restriction argument, but that is because in most cases advancing technology is in expectation good.

    3. Jezos asks, what is a ban, who enforces it, where, which countries? A fine question. Connor puts a pin in the enforcement question.

  3. (13: 00) Jezos says that yes, some technologies can have negative impact. I would not normally note this, but given who said it, I am doing so. He says his thesis is that technology begets technology, and technological advance is generally good, so we should encourage technologies even when they themselves have short term negative impacts, aiming for growth and trusting the market will work out. He calls this more nuanced than trying to deploy nuance via legislation.

    1. Later at (28: 30) he will confirm that he thinks that we should enact some regulations on technology some of the time.

    2. He confirms at (30: 00) it was good to ban leaded gasoline, and also notes that this ban imposed positive selective pressure on the space of technology development, that sometimes banning a tech can (I would say predictably in at least this exact case) encourage good tech while discouraging bad tech, and says it is good to first gather the evidence of harm via lawsuits and then to crystalize this into legislation. My understanding is that in that case there were lawsuits from workers but not lawsuits about the much greater harm to the public.

    3. At (31: 50) he seems happy with the process that led to the ban on leaded gasoline. Whereas this seems like a case where we clearly moved too slowly, and the thing to do afterwards is ensure we do not make that mistake again. This is no small thing. We are talking about impacts like half of America losing 5 IQ points, a large rise in crime rates and so on. A process that requires decades of such damage before it is ready to find and fix the problem is not going to be adequate for responding to the dangers of AI even in relatively friendly futures.

    4. It is still true that we often impose bans and restrictions for safety reasons that do not make sense and backfire, and that waiting too long to ban something is the exception rather than the rule; Connor says leaded gasoline, and Jezos could say the FDA and how we changed its policies in response to Thalidomide. The right question is, how do we improve our accuracy and decision making, how do we evaluate risk versus reward in different situations? And what would be the right decision in a particular case.

  4. (19: 00) Connor reiterates the original question, can you imagine a technology that should be banned, or is this impossible? Jezos says yes, there are pure negative technologies that we would ideally want to ban, but again how do we enforce that? How do we avoid today’s system and its tendency to add but not remove rules and regulations over time? He wants to rely on the market and on legal warfare via lawsuits and liability. Again, in general I think this is right, but it relies on the externalities being properly captured under the law, the law being a realistic means of such enforcement, and the plausible damages being bounded enough that one can be held liable and actually pay out. It does suggest a way forward in the form of combining stricter liability and a Hanson-style policy of mandatory insurance.

  5. (22: 00) Connor notes that he sees lawsuits as regulation, whereas Jezos is framing them as within the market. Connor notes that regulations set the terms for when one can win a lawsuit. Jezos talks about ‘peer to peer enforcement’ and Connor rightfully asks him what he is even talking about. Jezos seems to then say he is not a fan of the state monopoly on violence, and he is wary of top-down power asymmetry intellectually via AI.

  6. (23: 30) He says we are ‘in a weird period now where there is the window of opportunity for there to be sort of AI assisted tyranny to be installed. And to me that’s one of the core existential risks to progress.’ He worries that such control would break democracy and lead to manufactured consent (while, I would note, advocating for the most unpopular agenda we know of, it polls at a margin of -51).

  7. (26: 00) He presents e/acc as a counter force to attempts at centralization and at imposing top down control. You, he says, want to maximize safety, he wants to maximize freedom, reality will fall in the middle. Standard libertarian position. I really, really wish we saw such folks making a broader push for this in all the places where I believe that approach is correct. Then he says ‘we have a better data driven prior of this happening’ than some sort of AI takeover. He fears a ‘dark age’ without ‘freedom of compute,’ freedom of access to AI or freedom of information.

  8. (30: 55) Jezos says that in AI things are moving super fast and we don’t know where things are going, Connor nods. Jezos says it is too early to set things in stone, and notes again correctly that we don’t walk such rules back.

    1. That seems like a lot of the question. What actions set something in stone and which ones offer flexibility? Are there moves that make it impossible to then move to regulate later, because the cat is already functionally out of the bag or you cannot in time lay the groundwork?

    2. I would argue that with compute monitoring, with chips, with open model weights for larger models, and with developing safety protocols, to varying degrees, you cannot while on this superfast exponential go from zero to sixty in three point five. The fight is not over, as I see it, whether to impose meaningful permanent rules now. It is over whether we should get into position where we could impose regulatory rules in the future. The argument for no, which is a real argument, is that if we have that capability then we will use it in ways we shouldn’t, an interesting mirror of the worry that if AI capabilities are created then they too will operate or be used in ways they shouldn’t be (or, to be exact, in ways we’d prefer they not be used).

    3. In particular Jezos mentions a compute cap. I agree that we should be extremely wary of putting in a hard compute cap, and indeed I disagree with Connor’s minority view that we should impose not only a hard cap but a cap below the size of GPT-4. Whereas I think the Executive Order is wise, imposing a much higher reporting threshold of 10^26, and having it be a threshold rather than a cap. Again, to me it is in most discussions a question of whether the slope of regulation is so slippery that we cannot go anywhere near it, while noting there are a few including Connor who would indeed go much farther here.

  9. (32: 30) Consensus on the need for sunsetting laws, the idea that every law and regulation is rechecked every 10 or 20 years. If you cannot affirm that the law or regulation is good, it goes away. I too strongly agree this would be great, if we can find a way that it does not turn into either ‘the house shall now vote on renewing all existing laws’ or ‘if we can’t reach consensus we are going to legalize murder at midnight.’ I do think it can be done, where there is a class of permanent laws, and then others are forced to have sunset clauses in a way that defends against auto-renewal. In general all three of us broadly agree that we should be widely skeptical of passing new laws given the governments and systems that we have.

  10. (34: 03) “I so I don’t think we disagree. I think we’re on the same page.” “We just have different models of the world.” “Exactly.” Love it.

  11. (34: 30) Connor lays out the idea that the world used to be ergodic, where if you made even very large mistakes you could recover from them over time and be fine, and that at some point we exit that, and a large enough mistake would be the end. Instead of having to learn how to handle nuclear material without dying so you can develop nuclear technology, there will be techs where it is everyone who dies if you screw up in a similar fashion. Jezos responds this is already possible with nukes, and affirms at (36: 45) that there are paths with payout negative infinity (or what I would describe as permanent universe payout zero).

    1. Jezos says we do not build the technology of the world-shattering nuke because it does not have utility, so instead we only build smaller nukes, and that 10%-20% of the population would survive so (as is sometimes pointed out) this is not strictly an existential risk, but a sufficiently large nuke with a big red button would be bad.

    2. Except, what would happen if the planet buster or other potential technology that posed an existential risk did have utility, or we thought it did? The AI that poses an existential risk is also going to look like it could offer very large positive utility if things go well, and thus people are going to try and build it.

      1. Also, one can nitpick that while no one built the single planet-buster nuke, there is definitely utility in having that as a threat, and some people and nations would use it to hold the world hostage or stave off action, and some people just want to see the world burn, so there are plenty of people who would build it if they could.

      2. And one can also nitpick that the nuclear arsenals of the USA and USSR during the Cold War were rather darn similar to this, sufficient as Churchill said ‘to make the rubble bounce,’ and with war plans that were widely expected to result in the end of the world, and the use of them was rather on hair triggers. See the book The Doomsday Machine. I mean, yes, 10%-20% survival was projected overall, but I do not think this gave that many of those involved much comfort. I mean, I’m going to go ahead and say that the big red buttons we did have were not great.

  12. (38: 45) Connor poses the hypothetical of doing physics experiments and discovering you are in a false vacuum where a small trigger could potentially destroy chemistry, radiate outward and effectively destroy the universe. Jezos (after noting for clarity that he does not believe this is a real thing) responds that if this turned out to be the case, that our world was this fragile, that we could inform authorities and form a world government or what not but even if we did that, we would already be dead on some time horizon.

    1. I think this is actually a great answer. If the world and the way its physics work is sufficiently unfortunate, then we are doomed no matter what we do. So when considering what to do, we assume we are not in those worlds.

    2. As Eliezer points out, if you make too many impactful ‘hopeful assumptions’ using this kind of reasoning, then you stop trying to solve the actual problem, and your work is useless. So you need to be careful.

    3. I do think in practice we need to say that if the situation turns out to be sufficiently unfortunate in its physics and particulars, then what we do almost never saves those worlds, and we should be ‘willing to die’ in those scenarios to give ourselves a fighting chance in others.

    4. Thus, for example, I think that we should set the compute threshold higher than Connor wants, even for reporting, and if 10^25 turns out to be enough compute to create a model that can kill us and it kills us, then it kills us; we had no practical path to avoiding that risk without making other scenarios and things so much worse that it wasn’t worth it. Of course, if in the future we then learn that we are indeed in that world, we should adjust and try anyway.

  13. (42: 00) Jezos points out that if there were dangerous physics that we do not understand, we would need to study it in order to understand and control it, drawing the parallel back to intelligence. Ignoring it is not The Way. I agree it is not The Way, you want to work towards understanding, but certainly there are experiments you might want to avoid, and forestall others from doing, if the risk was too high, until such time as you had more information.

  14. (44: 00) Jezos says that there’s a lot of upside to AI, and it matters. Yes. He says that we shouldn’t let tail risks stop us. Connor says yes, if it was only tail risks he would agree, he doesn’t think it is a tail risk. Indeed, it is a question of price.

  15. (45: 00) Connor asks why the AI that wants to help the humans would win in a fight against a hostile AI. Jezos responds why doesn’t this happen with people and countries, Connor says great question. Jezos says because there are benefits to cooperation. I would agree, but the point is that this is a particular fact about the situation, not a law of nature.

  16. (46: 00) Whereas Jezos says e/acc is the theory that things will adapt in the way that is best for growth (and has elsewhere said that we should assume this will go well, the ultimate form of Whig History perhaps). That sounds to me neither true nor comforting were it to be true. But Jezos then says that the future will select the entities that are most inclined towards growth, in that sense this seems reasonable, that which grows will grow.

    1. At (46: 45) Jezos pulls out the ‘corporations are superintelligences’ statement, sigh, Connor shows remarkable if incomplete restraint on his face.

  17. (47: 00) The Jezos vision of the future is that there are some AIs that are aligned with humans, some that are partly or not at all aligned, they engage in trade, and this keeps us relatively aligned.

    1. No. That is not how any of this works, by his own argument, even if we assume that we successfully align some AIs before we lose control over the future, which is very much not a given. If you posit that there are a wide variety of AIs in such scenarios, then being aligned to either a particular human or to humans in general is an uncompetitive burden, and those AIs lose out over time under free competition.

    2. The thing we want, and the thing e/acc or this kind of competition for maximal efficiency represents at the limit with ASIs involved, are not compatible at baseline, unless we decide we happen to find value in whatever thing wins that competition, as Robin Hanson would argue that we should, largely to make a virtue of necessity.

  18. (47: 25) Jezos says there will be ways to augment human intelligence to ‘make us more of a player at the big boys table.’

    1. Again, no. Sorry. This is one of those claims that simply does not work, the hope of the hybrid AI-human chess team. There is no reason to think that a human will, after a while, be capable of meaningfully contributing anything, that we would be able to earn a spot at that big boy table. We are, in this metaphor, neither big nor a real boy. Anything we can do, AI can do better, for sufficiently advanced AI. This is people writing science fiction.

  19. (48: 00) Connor points out the central contradiction, that human happiness or preferences are one goal, maximal growth is another very different goal, and the systems described are maximizing under this model for the second one. Jezos pulls out the Europe vs. America comparison, I wish America was far closer to ‘all-in on growth’ the way he describes us. He notes that we have localities and can test things locally.

    1. We would both argue that America’s growth model has reached the point where, even with Europe’s focus on short term happiness, one could for pure happiness purposes reasonably prefer America and the profits from growth instead, and that we should expect this to amplify over time. In the AI-Fizzle scenario, I would expect America to become a relatively better place to live versus Europe as the years go by.

  20. (48: 30) Connor responds with the argument that over time in this scenario, Americans are less happy but they eat Europe, and we lose our A/B testing ability.

  21. (48: 45) Both affirm, as do I, that we are not hedonistic utilitarians. Jezos says that e/acc is not this, but EA is this. I do think this is one of the best critiques of EA. Jezos instead suggests the utility function of maximizing growth and civilization and the beauty of intelligence. Connor points out that the larger list is very different from a pure focus on growth.

    1. I would add it is even more distinct from a focus on short term growth. There is a very important clear assumption in the Jezos or e/acc position here, which is that maximizing growth will maximize civilization and the beauty of intelligence, that we have a duty to the universe on this. That even if the future is not human, that it will maximize these other things we should find most valuable.

    2. I do not believe this, on multiple counts.

      1. I do not believe that the entities that result from such a process, especially assuming they are AIs we did not choose carefully and wisely for this role, are likely to be things that reflect the beauty of intelligence and civilization in ways that I would consider valuable. Indeed, I expect them by default to have value of essentially zero to me, although I agree this might not be true.

      2. If they do have some such value, I expect much smaller than the potential maximum value of taking a different wiser approach.

      3. I think there is a large risk that maximizing growth in the short term ends up not only being an existential risk to humans, but also to growth, and that instead of the AIs taking over for us, nothing is left behind at all, or nothing at all meaningfully complex within this context.

      4. I think that I have every right to say that I do not want to hand the universe over to these potential future AIs, that I have my own particular preferences and I am allowed to fight for them.

      5. Growth is not an end in itself, nor does it automatically produce good things. Creating AIs in the name of growth is like trying to increase measured NGDP without asking whether you are producing more useful things or otherwise doing anything actually useful.

    3. There is some sort of weird conflation going on here on the word ‘growth.’ Clearly Jezos is using it sometimes to mean self-replication and reproductive fitness. Other times it seems to want to stand in for something far more similar to economic growth.

  22. (51: 00) Jezos says that if something is non-optimal at growth (in the reproductive fitness sense, which now seems like it is the primary meaning of this in the e/acc model, which totally wasn’t clear before now at least to me?) that something more optimal will replace it. Connor says the optimal thing is cancer, no art, no beauty, no happiness or emotions, just growth. Jezos says that emotions have utility. Which currently they do, but the issue is there is no reason to expect they will be useful to an AI or at the limit. Connor says he expects there is a local minima, that human emotions are not a global maxima.

  23. (51: 45) Jezos says if human emotions are not a global maxima, why not explore new ways? Connor pounces, ‘aha!’ and says naturalistic fallacy, ‘is’ is not ‘ought’, emotions are not a global maxima and who the f*** cares, they’re mine, I like them.

    1. Are we allowed to have preferences that are not maximum growth? Why do we value maximum growth if it will not satisfy our preferences? Is this simply a failure mode, where we notice a proxy measure G that in-distribution improves our values V, and Jezos is saying therefore G=V=U and we should maximize G for its own sake?

    2. Which is a move humans often do in various forms, including via having emotions, because we have limited compute. We optimize via a host of proxy measures and heuristics, as this has proven to be, at our current capability levels and in typical scenarios, more efficient in most situations at finding good next actions.

    3. Also humans do it in other ways, such as the hedonic utilitarians Jezos contrasts himself with, who have happiness H and suffering S and say V=H-S, or something else in that vein. Whereas I mostly buy what Jezos says at (52: 40) that happiness evolved because it is useful and we should not mistake the metric for what it aims to measure.

    4. I do not think there are easy answers here. I cannot (or at least, I do not know how to) compactly well-describe that which I care actually about. Later at (58: 45) Connor says similarly that he has not stated his utility function, that there is some such function but it is not so simple to spell out, and that he does not even fully know what he truly values. And Connor then says that he thinks Jezos does not purely want growth either like he claims to.

  24. (53: 30) Jezos goes YIMBY, you love to hear it. More of this, please. He keeps coming back to the broader point that advancing technology has so far been good for humans and more would have been good on the natural margin. The concerns are that this might not have been true even in the past if one had taken that attitude to the extremes, and also that past performance is no guarantee of future success and we have reasons to believe the underlying mechanisms for that might not hold, or will take effort to make them hold.

  25. (54: 15) Connor responds that so much of e/acc talk seems to only be about America five years into the future. That they do not actually extrapolate their own beliefs, only Nick Land does that, taking techno-capitalism to its logical conclusion that there will be, and should be, only capital and competition, no labor (or humans). Connor notes: “If you optimize for something, you lose everything you are not optimizing for.” Quite so at the extremes, and sufficient intelligence and capability means the extremes will hold.

    1. As Connor says, we got lucky with the tech tree and what is optimum for growth and production. He gives the example that constantly torturing people does not work. That people can only produce for extended periods if you treat them well.

    2. More generally, I would suggest, democratic capitalist institutions that care a lot about the freedom and happiness of those inside them proved to be able to outcompete autocratic, fascist and communist regimes. This was not a given, it did not have to be so. People in the 20th century mostly did not believe this, and many expected the future to therefore be quite bleak. If it had proven false, our world would look very different today, and it almost did look very different at various points, and not for the better. Nor would I want to switch simply because of the gains to growth.

    3. More specifically, there have been large rewards for various human activities like play and learning and relaxation and exploration, and productivity rises when people are happy and have other things at stake, and intrinsic motivation outperforms other motivation, and other neat stuff like that, and again none of that needed to be true, and we have reason to think that a lot of it will break down in advanced AI scenarios once we understand the mechanisms involved. Key drivers are our limited compute and access to data, and that due to how we physically exist we mostly have highly limited upside in terms of reproduction, versus constant tail risk of death, exile, injury or ruin and such, and that periodically situations radically changed and we did not have that many cycles between such events, and so on.

  26. (56: 00) Jezos says there are not a finite number of jobs in the future, then says there are not a finite number of atoms either, there are plenty of atoms in outer space. That even if most things end up done by machines, the human portion would not shrink, only get diluted.

    1. This seems like a clear failure to extrapolate. If you maximize growth for real, you hit the limited number of atoms within the lightcone fairly quickly (see the back-of-the-envelope sketch after this list).

    2. Yes, the number and nature of jobs (in the broad sense, including those taken by AIs) would expand greatly. But what would humans hold on to if they offer no advantage relative to the resources a human must consume? Why should the human realm be protected from intrusion by the AIs in these spots?

    3. We heard a version of this for example from Holz as well in his debate. The idea that the AIs would leave us and the resources and atoms we need alone, because there are plenty of other resources to grab. That simply is not how this works, especially if we posit as Jezos does that there are many AIs engaged in trade and in competition to maximize growth. Taking the atoms of the humans maximizes growth, there is no reason to think this will not happen simply because there are ‘enough’ atoms elsewhere. That only happens if there is something in particular preventing this action.
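To put rough numbers on the point in the first sub-item above, here is a minimal back-of-the-envelope sketch. The growth rates and the ~10^50x scale-up factor are my own illustrative assumptions, not figures from the debate.

```python
import math

def years_to_multiply(growth_rate: float, factor: float) -> float:
    """Years for a quantity growing at `growth_rate` per year to grow by `factor`."""
    return math.log(factor) / math.log(1.0 + growth_rate)

# Illustrative assumption: resource use scales up by ~10^50x on the way from
# "one planet's worth" to "every atom we could plausibly reach".
for g in (0.02, 0.05, 0.10):
    print(f"{g:.0%} annual growth: ~{years_to_multiply(g, 1e50):,.0f} years")
# Roughly 5,800 / 2,400 / 1,200 years -- an eyeblink on cosmic timescales,
# and that is before light-speed limits slow the expansion further.
```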

  27. (57: 00) Jezos compares it to taking venture capital, you are diluting to gain more capital and leverage. That our component can still grow.

    1. Well, that’s an interesting metaphor, isn’t it? Are we going to rely on our founders’ shares to continue to control the company, or will we quickly be out on the street? Will having initial capital somehow protect us?

  28. A quote from Jezos: “You’re advocating for the interests of humans, and I’m hearing you out.” Well then, Jezos, what exactly are you advocating for? He says that people and corporations will always be greedy, Connor again responds that will is not ought. There is what exists, and there is what we want.

    1. I would add that we absolutely have found ways to contain greed, and that if we had not done that we would not have come this far.

    2. If the Jezos position is that we should each of us embrace maximum individual greed, well then.

  29. (59: 45) Jezos puts on the Socratic hat, which is only fair. He asks: “Why do you like having relationships? Why do you like happiness? Why do you like being part of a group? Because evolution kind of hardcoded you to crave these things.” Connor says this is confusing is and ought again.

    1. I do not think it is that simple. This is a real and important challenge.

    2. If we only like the things we like because it was useful in the ancestral environment to like those things, if they are only the expression of a different version of maximizing growth, why do they represent ‘ought’ rather than ‘is’?

    3. Again, it’s a big problem.

    4. I do not, however, think that the Jezos response of demanding an objective loss function of free energy is the right response. Nor is ‘that is not anthropocentric enough’ the bulk of my objection to it. This is the standard rationalist (in the classic sense) mistake, to say that what we do must be legible and objective and formally justified, and thus we must disregard anything else. That, if you take it too seriously, reliably leads to disaster.

  30. (1: 00: 30) Connor responds wisely: “This is just cope. What you’re describing is, is that reality is hard. Yes. If the thing we want is complicated and hard to get, the answer is not to pick something simple and easy and give ourselves a participation award. The answer is, well, we have to get stronger. We have to get better.”

  31. (1: 01: 10) Jezos responds that this is where physics comes in, the free energy objective is not random, the universe selects for growth. You don’t have the option to disobey gravity or the laws of thermodynamics.

    1. Connor keeps saying that is is not ought, because Jezos keeps conflating the two as his core argument, if anything doubling down. That we should value what the universe rewards. If we know what’s good for us, he might have added? Except if it won’t be good for us, then that’s a problem.

  32. (1: 01: 40) Jezos then says, if Earth goes half-accelerationist, half-Europe, and we play the movie out, the accelerationist half will outgrow the other, at which point Connor interrupts to say ‘and we will both be dead’ and Jezos says he’ll need to see more evidence of that.

    1. Jezos is explaining a key point. If part of the world is allowed to proceed and accelerate, then that means we get the consequences of that acceleration everywhere. So you will need international cooperation, voluntary or otherwise, to ensure that this does not happen, or you can accept the consequences of such acceleration.

    2. The continued suggestion that the only choices are full e/acc style acceleration on one hand, and Europe on the other, is a lot better than saying totalitarian panopticon dystopia as a straw man, but still assumes that you cannot limit existential risk from AI without going Full Europe.

    3. I do not believe that is the case, unless the future is such that AI is the only technology or industry that much matters even before we get such existential risk. In which case, I know what I expect to happen if you choose acceleration.

    4. So again, is is not ought, and you have to ask, do you want the accelerationist world and its consequences, or not?

    5. Jezos says once again we won’t ‘seek out’ the destructive technologies that would bring existential risk, but this is a clear contradiction of his idea that anything pro-growth on an individual or group level will get sought out, and of course people can be wrong about the consequences of what they are building, and make mistakes. His arguments here that disaster will be averted seem extremely poor.

  33. (1: 02: 45) Jezos then falls back on the better argument that the upside we would need to forfeit to avoid such risks is so great that it is worth the risk to go after it, which is definitely valid as Connor acknowledges (although again, the framing here is implying a binary of either we do almost nothing to get in the way, or we don’t get to proceed at all, instead of thinking on the margin).

    1. Connor proposes to talk price. Exactly. How often does the accelerationist approach survive? Connor argues the chance is epsilon, that sooner or later you will fail a saving throw (see the toy sketch below this item).
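To make the ‘saving throw’ intuition concrete, here is a minimal sketch. The per-round survival probabilities and round counts are purely illustrative assumptions of mine, not numbers anyone gave in the debate.

```python
def survival_probability(per_round_survival: float, rounds: int) -> float:
    """Chance of passing every one of `rounds` independent saving throws."""
    return per_round_survival ** rounds

# Illustrative numbers only: even generous per-round odds decay toward zero.
for p in (0.99, 0.999):
    for n in (10, 100, 1000):
        print(f"per-round {p}, rounds {n}: {survival_probability(p, n):.5f}")
# 0.99^100  is about 0.37, 0.99^1000  is about 0.00004
# 0.999^100 is about 0.90, 0.999^1000 is about 0.37
```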

  34. (1: 04: 45) Jezos wonders if there are terminal states at all.

    1. I think this is an excellent point. We need to, at some point, solve for the equilibrium that we want, then solve for how to get to that equilibrium. But what if there exists no such equilibrium at all? What if the only stable states are dystopias? Or what if what is valuable requires quests with real stakes and the opportunity to progress, so the very fact that the world is in equilibrium means it cannot be valuable, even if the journey to get there was valuable?

    2. These questions are some of the things that would keep me up at night if I let existential dread keep me up at night, but we all have to sleep some time, so I’ve figured out how to sleep regardless.

  35. (1: 05: 30) On the nuclear weapon that accidentally got dropped on South Carolina but happened not to detonate (no, seriously), Jezos says well it would only have blown up a city, we would have survived.

    1. It is a side note, but I do not think it is that simple. At a minimum, the world would feel very different in many ways after accidentally nuking one of our own cities. The Cold War does not play out the same way.

    2. Most importantly I think there would have been quite a large risk that either the USA or USSR mistakes that nuke for something else, someone does something by mistake, and there is escalation to full war. I would not dismiss such an incident so easily.

    3. There is also a huge risk that various players in the USA decide to put the blame for the incident elsewhere intentionally, again with severe escalation risks.

  36. (1: 06: 30) Agreement that Covid-19 was a lab leak.

  37. (1: 06: 50) Jezos argues that the world not having ended is evidence it won’t end. Connor emphasizes that we keep having major accidents and close calls.

    1. One can also mention anthropic considerations.

    2. In general I do not find the past track record hopeful, aside from various failed predictions.

  38. (1: 07: 20) Jezos says let’s talk about AI already. Jezos says he thinks Connor’s model is that if you build AI then you cannot undo that, and then it inevitably kills you. Connor says no, he thinks building it safely is possible, Jezos says great let’s do that, Connor says it will take 100 years to do it safely, Jezos says we won’t let you have 100 years to do that without a monopoly on power, and he thinks that is a bad trade.

    1. That is not an argument for why Connor is wrong about what it would take to build AI safely.

    2. That does however imply that Connor is wrong, because it would be a very good trade to give some entity a monopoly on power if the alternative was for everyone to die. Or at least, I am going to make that bold claim.

    3. I do not think these are our only choices, nor that we can be confident this would take 100 years to pull off, or anything like that.

  39. (1: 09: 30) Connor says congratulations, you are a doomer. Jezos says no, there is a third path: decentralized control of AI causing an adversarial equilibrium, which makes smaller entities sufficiently capable that they are not to be messed with. Connor says “so AI mercenaries,” Jezos says “sure” and Connor laughs.

    1. This does not make any sense to me, for reasons I discussed above. No go. If you go down this path, you lose. Good day, sir.

    2. This is a crux. If you can convince me that such worlds can and are likely to turn out well provided we get personal alignment to work, that we can solve for the equilibrium and like it, then tons of new paths open up, and the correct strategies change.

  40. (1: 12: 00) Jezos predicts that without regulatory capture upstarts will catch up to the leading labs.

    1. I think this is simply wrong, if everyone involved were equally responsible in their actions. But I do think the threat of this happening likely prevents the leading labs from acting in what would otherwise be considered responsible ways.

    2. And in this fully unregulated world in which everyone acts locally greedily, I would expect us to die, but if instead alignment is solved in time anyway, I would then expect those companies to use their AIs to prevent everyone else from catching up, and I would expect them to succeed.

  41. (1: 12: 40) Jezos says that lots of companies are using Mistral to minimize platform risk and reduce costs, to which Connor replies the world is not a B2B SaaS app.

    1. My model is that Mistral is essentially a distillation of the work of GPT-4, a model that is two years old and which it is still behind, and that this does not represent Mistral being anywhere close to catching up otherwise, but they (and open source generally) do seem good at distillation.

    2. The use of Mistral represents Mistral giving away its product (and being the best of those giving theirs away), that many want to do things that violate the terms of service of an OpenAI or Anthropic or Google, and that many current uses do not care that much about being at the frontier of capabilities but are price sensitive. I also think platform risk is greatly overestimated, because you can sub in another AI in a pinch and it’s not so bad.

    3. I do not think this has much to do with the question of how an endgame will play out, scenarios where getting maximum intelligence is far more valuable. I also do not expect distillation to work as well at that point, although that’s more of an instinct and I could be wrong.

  42. (1: 13: 10) War! Jezos claims we are already at war, and we have a duty to accelerate to outcompete and survive and that’s how it will always be.

    1. I notice that this does not answer Connor’s challenge that real war will be nothing like B2B SaaS, that the previous point was nonsense.

    2. Sounds like we’re super dead, then, no? Nothing we can do?

    3. Jezos doubles down on the idea that there is nothing we can do about all this conflict, that’s how we got here, that’s how it will always be. If that was true the way he is saying we would already be dead.

    4. He says we do not want a world government to dominate everyone, but he also says we have a duty to win this war, and doing so would kind of lead to a world government of sorts.

    5. Similarly, as I have said before, and Connor does point this out, if it is our duty to accelerate in order to win our war against China, then it is also our duty to decelerate China, or at least not accelerate China, which is exactly what open model weights do.

    6. So Jezos seems to attempt to square this by saying we need to maintain only a small delta between players, a balance of power. We need to accelerate to stay in the game, but we wouldn’t want to actually win, that would be bad?

    7. Jezos says that our strength is competition, so we should use open source to allow more competition and accelerate AI development, and yeah our adversaries get it too but he doesn’t seem to care about this? He doesn’t say why this is fine, or why we should let them into the competition game this way. None of this actually makes any sense to me.

  43. (1: 16: 00) Connor asks if we should open source the F-16. Jezos says there’s a bait and switch, that when the safety argument fails on open model weights people pivot to not wanting our enemies to have it, or as Connor says crazy whackos to have it.

    1. The core safety argument is that this is proliferation, creates a multi-polar race, makes it impossible to control what people do with it or how it develops, and so on. As I put it, Open Model Weights Are Unsafe And Nothing Can Fix This.

    2. Our enemies getting it and having the ability to do with it whatever they want, or use it to bootstrap their capabilities, is a direct extension of this issue, if we indeed believe that we should care about defeating our enemies.

    3. But yes, sometimes we pivot to this argument because we are talking with people who deny that safety is a thing, or do not care about it, or can only think in terms of foreign adversaries and non-state actors (e.g. many people in national security and government).

    4. That’s also because both arguments are true and important. One does not invalidate the other. This is not bait and switch, it is yes and, and switching emphasis. There is no contradiction, they both follow from the same model. The right answer can be overdetermined.

    5. The argument regarding adversaries is also being used as an argument to point out a contradiction in the logic of the other side. If you are saying we must accelerate to defeat our enemies therefore we must open source, then it seems highly on point to say that open source interferes with our ability to defeat our enemies, therefore your argument is invalid and actually goes in the other direction. As I believe it does.

    6. You are… allowed to make two distinct arguments? They are additive?

  44. (1: 17: 45) From Jezos: “I think so many organizations are compromised. If you’re not going to have actual secrets the only mode is speed. If you want speed you want variance. If you want variance open source is the way.” He is torn on whether we should open source the F-16.

    1. I do think we should be putting more effort into this, across the board.

    2. Sounds like we need to mandate much better investments in cybersecurity, and other secret keeping, in this area, if orgs refuse to do it themselves.

    3. Increasing variance in a situation with big tail risks makes sense when the default or likely outcome is quite bad, and does not make sense when the default or likely outcome is good (a toy illustration follows after this list).

    4. Is this another case where Jezos is saying solving problems is impossible?

    5. I am not typically in favor of focusing on the question ‘is this good or bad?’
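As a toy illustration of the variance point in sub-item 3 above: under a simple normal model of outcomes (my own illustrative numbers, not anything from the debate), more variance helps when the expected outcome is bad and hurts when it is good.

```python
import math

def p_success(mean: float, sd: float, threshold: float = 0.0) -> float:
    """P(outcome >= threshold) when the outcome is Normal(mean, sd)."""
    return 0.5 * math.erfc((threshold - mean) / (sd * math.sqrt(2)))

# Illustrative: with a good default (mean +1), extra variance mostly adds
# ways to fail; with a bad default (mean -1), variance is the only hope.
for mean in (1.0, -1.0):
    for sd in (0.5, 2.0):
        print(f"mean={mean:+.0f}, sd={sd}: P(success) = {p_success(mean, sd):.2f}")
# mean +1: 0.98 at low variance vs 0.69 at high variance
# mean -1: 0.02 at low variance vs 0.31 at high variance
```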

  45. (1: 20: 00) Connor asks, if an AGI was smart enough to design an F-16 fighter plane, should it be open source? Jezos visibly, clearly stops to actually think about the answer. And he asks good questions about what this means for its capabilities. Jezos then brings up the question of whether the authorities with a monopoly on violence will have access to a better AI designing better planes.

    1. The pause for thinking, to me, is the important point here. By pausing to consider the details of what this ability implies about the AI, Jezos is making it clear that he believes that we should open source AI models right now, but that a sufficiently capable AI should not have open model weights. And that ability to design an F-16 might or might not indicate having sufficient capabilities.

    2. At this point, we are talking price. We might not even be far apart on price.

    3. The point about relative capability also seems good. An AI being behind the state of the art, and others having a superior AI as a potential defense and a way to learn the full capabilities and dangers of an older AI, seem like important factors to consider in setting a threshold.

    4. Jezos is concerned that too big a capability gap would allow the formation of a monopoly or cartel, whereas he expects some capability gap and also seems to think that a zero capability gap would indeed be bad.

    5. Again, talking price, weighing different questions, good. Connor thanks Jezos for explaining this and says he did not expect this position, that we do want to maintain a non-zero capability gap for the authorities.

    6. The focus on the exact design of an actual F-16 seems flawed; what matters is (as Jezos initially realized instinctively, I think) what else this AI could do.

  46. (1: 23: 45) Jezos notes that decentralized systems are hard to steer, says this gives them fault tolerance, that if you have a system that can be steered then power seeking humans will work to steer it. Any centralized control can and will be compromised, and then form a tyranny.

    1. This is indeed a problem. Either humans control the future or we don’t.

    2. If you think the risks of humans being able to steer the future are worse than those of humans being unable to steer the future, that is a take.

    3. If we intentionally choose to lose control over the future, we lose control over the future. A system that we by design cannot steer will cause the world to be taken over by AIs.

    4. My expectation is that worlds where we lose control over the future mostly have zero or almost zero value. Whereas even actually tyrannical-by-humans future worlds, which I expect to be non-ideal and which I would very much like to do what we can to avoid, are still likely to be solidly positive value for everyone else as well, because that will probably be the preference of those who attain power.

    5. I too would like charter cities and city states and so on.

  47. (1: 25: 45) Connor brings up offense-defense balance, expects offense to typically be far easier, that being ahead does not sufficiently protect you. Jezos affirms that this is a problem, you need a balance between order and disorder, you do not want max entropy or temperature, you want to maximize our ability to seek out free energy so we can grow and consume more free energy. And there is danger now that we will go too far towards order, whereas we need a balance.

    1. The idea of balancing order and disorder is not only highly reasonable, it is deep wisdom. Surely we have all read the Tao Te Ching and Aristotle and the Principia Discordia, played Shin Megami Tensei, tried to engineer a peaceful free society and so on.

    2. This is indeed exactly the position of myself or Eliezer Yudkowsky, although with highly important disagreements over price, over the price of missing high versus missing low in various ways, and over how to hit the target.

    3. This is very much not the standard way of presenting the situation from Jezos in particular, or e/acc folks in general, or others who oppose any attempt whatsoever to alter the path of the development of AI other than pushing it forward as fast as possible.

    4. Their standard rhetoric does not sound like a balance at all. Rather, it is advocating the absolute supremacy of one concern over the other. And using vibes and memes and principles and such to advocate for that absolute supremacy, and to treat anyone opposed to this explicitly as enemies.

    5. One can say that this is due to where we are on the margin, that we all agree that ideally we would talk price and reach a compromise. I do hope that is the case.

    6. But that is completely impossible if one ‘side’ is going to use the Beff Jezos or general e/acc approach to discourse, rhetoric and advocacy.

  48. (1: 29: 00) Connor returns to the question of the false vacuum.

    1. Jezos denies the technical aspects of the premise, but this misses the point Connor is attempting to make.

    2. Connor’s point is that the desire to maximize release of free energy over time is a proxy function, not what Jezos actually cares about, and that this proxy function will cease to function well as capabilities advance. If you released very large amounts of free energy via triggering a false vacuum (if this were a physically possible thing) you would be maximizing free energy, but obviously that would be a highly stupid thing to do, no one would want that. This illustrates that at the limit there are ways to maximize free energy that do not actually hold value.

    3. This is actually exactly the worry of Yudkowsky, that the AI ends up tiling the universe with something relatively simple and valueless, because it is maximizing some proxy function, the most famous example of which is the paperclip. If the AI tiles the universe with that which most efficiently releases free energy over time, that is not at the limit going to actually maximize anything Jezos or most others care about.

    4. You need a better goal. The point of free energy is to use it for the things that you actually care about. Which can justify some amount of earning compound interest, but the energy needs an end other than itself.

  49. (1: 30: 15) Another few rounds of is-ought.

    1. The example here of gravity seems excellent. Gravity exists. You will obey the law of gravity. It will take a lot of energy to counteract it in individual cases, but also we can and should build airplanes and rocket ships. We shouldn’t instead work to create a black hole.

  50. (1: 32: 15) Jezos bites the bullet. “I don’t know how to define good otherwise,” in response to being asked whether his ideology outcompeting others makes it good.

    1. Connor calls this ‘might makes right.’

    2. Jezos disagrees and we are back to the conflation. Is growth for its own sake, ‘what physics wants you to do’ or e/acc good because it is actually good and we value it? Is it good (and ‘not weird’) because it ‘comes from physics’? Or is it good only in the sense and to the extent that it will win?

    3. My response is it does not seem good because of either?

  51. (1: 34: 00) Connor says when people talk morality they conflate three things: (1) That something is true or accurate, (2) decision theoretic goodness that this will cause you to win and (3) aesthetics and values, that this is good because I like it.

    1. Jezos asks why we like that thing, which Connor says is an epistemological question (and implicitly, one to which almost everyone lacks good answers) or a request for a causal history.

  52. (1: 35: 40) Jezos says e/acc is not prescriptive on what you should like. They are only warning that the pro-growth subcultures will be selected for. I mean, you’ll become stagnant or die but that’s on you.

    1. Bullshit? They’re totally and constantly telling you what is good and bad, and what you should like and dislike. This feels like gaslighting.

    2. Are the Amish effective accelerationists? Why or why not?

    3. If it turned out that accelerating would cause you to likely die, then what?

  53. (1: 37: 03) The fun ‘libertarians are like housecats’ quote.

  54. (1: 37: 30) Connor: “If you follow in the will of God, he shall reward the faithful. Is that your ideology?” Jezos: “Yeah. I mean physics is my God to some extent. You can have your own other additional gods.”

    1. He then admits he also follows some sort of God of Civilization as well, which Connor points out is very different and also that God he is down with.

  55. (1: 40: 30) The moderator asks, what is your value system? Jezos explicitly bites the bullet, disagreeing with Hume that there is no objective morality. He says that the objective morality is the one that will tend to outcompete others via growth, and we should embrace that.

    1. Yes, we’ve been over this several times, but it is good to finally be clear on this. Especially since this is five minutes after Jezos saying they are not telling you what you should like. Indeed he tries to reverse this again at (1: 41: 55), saying he is not telling you how to live your life other than to say you have a choice. This sounds remarkably like a Christian saying that you can choose not to accept Jesus, they are not telling you what to do, all it means is you will suffer in Hell for all eternity.

    2. Jezos also says that trying to set the hyperparameters of your civilization top down will cause you to be outcompeted. Certainly if you do too much of this historically it does not go well, but doing too little of it does not go well either. Fully anarchic civilizations did not win out. Centralization and the rise of the modern state very clearly helped allow Europe and its nations to project power better and outcompete the rest of the world in conflicts. The argument here will be true on some margins but clearly attempts to prove far too much. Without rule of law, we would not have advanced chip factories.

    3. The whole e/acc argument, as Jezos confirms again at (1: 42: 45), is that on some time horizon, whatever strategy confers an advantage will be adopted by some subculture or faction, and this is unavoidable. But this is actually straight up Hobbes, no? That the result of not asserting some form of top down control will be whatever happens to be most competitive, that we otherwise have no say in what ultimately results. So to me this is an argument that, unless we happen to want the most competitive configurations of atoms to be the only such configurations, we have no choice but to pursue such top down control.

  56. (1: 43: 15) Asked again what his values are, Jezos says he is trying to scale civilization. He does not want to replace humans, he is working on various things like fusion power, including physics-based AI, which would be an extension of our intelligence, whereas the companies talking about safety are the ones creating the danger. Connor says, yes, handshake meme.

    1. Count me in as well.

    2. Jezos seems to be working on technologies that, were they to work, would be net very good, and a form of AI that would be more likely than LLMs to turn out well for humans. I would love for those techs to exist. Whereas LLMs seem like a place where things by default are likely to go badly for humans.

  57. (1: 45: 30) Jezos warns that if we cap compute we will miss out on good things like drug discovery, we should be careful with our regulations and what they impact.

    1. Yes, we should proceed carefully and choose wisely, and talk price, there are tradeoffs, all the reasonable people on all sides get this, although there are some unreasonable people everywhere.

    2. That is entirely compatible with the only survivable paths forward involving making some very large sacrifices. It is a fact question: Can we do better?

  58. (1: 47: 00) Connor asks, if the growth maximizing thing was to build AI that would wipe us out, would you do it? Jezos says no, he has self-interest too, so he would not do it, but does not seem to object so strongly if someone else were to do it?

    1. Once again, Connor is trying to say that the growth maximization is not a terminal value and Jezos values other things, his value of growth is contingent on providing those other terminal values, and that Connor (and I) would claim that this relationship will not hold in the cases under discussion.

    2. Jezos’s defense is that he is building, he is trying to create new tech. And I do believe that, and that he has even chosen good techs to build, if they are possible for him to build. But this is miles different from the full claims made.

    3. Jezos keeps trying to dodge various different forms of this question. Connor keeps pressing. It is a key question. If Jezos bites the bullet fully, and says yes we should all die if that maximizes growth, growth is a terminal value, then that is ‘please speak directly into the microphone,’ we can agree to disagree about terminal values. If Jezos bites the other bullet, and says no we should not all die, and if I thought that growth would kill us all I would stop supporting growth, then great, we can have a different debate over what would actually happen, which would then fully be a crux for all involved.

  59. (1: 48: 40) Moderator pivots to, what would Connor do (WWCD)? Connor responds he cares about stewardship of dangerous technology, not only AGI. He presents a model of the world as having various distributed super-entities that both are and aren’t agents, that connect and form more of a unified force than they used to but nothing like a real unified force.

  60. (1: 53: 00) Jezos likes this model, draws parallels to error correction and cybernetic control systems, you want a moderate amount of hierarchy to maintain proper cybernetic control without a single point of failure. What keeps a single top (say world government) node in check? How do we mitigate this risk? Haven’t the centralized ‘biosafety’ labs caused a lot of damage, far more than other threats?

    1. Connor thinks this reply makes a lot of good points and I agree. I wish the overall discussion were more like this (and not only around AI or even tech!), focused on diagnosing the problem and seeking to explore the space of solutions, to try and simultaneously solve very hard and conflicting problems.

  61. (1: 57: 00) Connor does not trust our civilization and institutions and distributed systems to handle powerful technology at this time, we need to work to get there. That even if only Connor or only the government had access to AGI this would be extremely dangerous. Both agree that current institutions are not competent to guide the world.

    1. The question is, if our institutions are terrible, and this includes more than only the governments, what do you do about it here?

    2. One option is to let nature take its course and hope that course is good.

    3. Another is to use tools you do have, and choose them knowing the problems. Ideally you try and also work towards better institutions and a more competent civilization generally.

    4. Instead, it feels like democracy is breaking (both agree) in the modern tech age, destroying what competition was facing government, and things are getting worse not better on such fronts.

  62. (1: 59: 30) Here we go again, Jezos says centralized governments can’t be trusted with this power, and also we can’t put the genie back in the bottle, the upside is too high, this is going to happen, every agent will want it. Given it exists it should be in the hands of everyone, not only a small group.

    1. Sure sounds once again like a strong argument for not building it! Jezos is saying we have no path to not building it. Assumes facts not in evidence.

    2. If we must build it, and we have no trustworthy institutions, and no trustworthy other systems, then we have two bad choices. The question is which is worse.

    3. I’ve discussed this a bunch already, I won’t repeat.

  63. (2: 00: 45) Connor proposes that e/acc is kind of a trauma response to decaying institutions, to not deal with them at all, but that you don’t actually get to do that. That solving such problems is super hard, but we have no alternative or simple solution, you can’t purely rely on the market. Jezos says there should be a market for and competition between institutions. And he says (2: 02: 45) “that’s all they’re arguing for.”

    1. I have said that e/acc can be thought of as the Waluigi to EA. That could also be thought of as the final example of something triggering this trauma response.

    2. Competition between institutions is… not all they are arguing for. This is very much a motte, and yeah, I am down with this motte. The bailey is close to a call for a total lack of institutional interference in technology and trade, and the ability to do things like say ‘I declare network state’ the way we declare defense production act.

    3. I too am a big fan of forms of market competition between institutions, the same way we want competition in other realms, subject to the usual caveats about market failures, which include externalities such as existential risks.

    4. It is also true that while we want more competition between institutions than we have, if we allowed full unfettered market competition between institutions, then there are various races to the bottom and other issues. The ideal amount to which the world’s governments and institutions are cooperating rather than competing is very much not zero.

  64. (2: 04: 20) Explicit claim that the AI safety movement is being leveraged to attempt regulatory capture. Connor agrees with this.

    1. I don’t.

    2. I do think that there will inevitably be some amount of this effect over time.

    3. I do not think that this is a major driver of any current AI regulatory efforts.

    4. I think that a lot of these claims are pure bad faith attacks.

    5. I want to reiterate and confirm and amplify Connor at (2: 04: 50). I would say that the vast majority of safety and regulation advocates believe the things they are saying and primarily have the motivations they claim to have. Obviously there are some people who are talking their book or otherwise cynically motivated, but anyone claiming typical bad faith is either assuming this on principle or saying it in bad faith themselves. The idea that most people talking about existential risk are doing it to prevent start-ups from forming is absurd.

    6. Similarly, jumping ahead a bit, the claim by Jezos at (2: 11: 40) that the current oligopoly players ‘are the ones writing the laws’ is not actually true, although they are attempting to weaken some of the proposed safety laws.

    7. Most claims that a regulatory proposal would benefit major players and allow regulatory capture are, as far as I can tell, based on a combination of:

      1. Claim that major players always benefit from any regulation.

      2. Claim that Big Tech is bad and all-conspiring and that anything anyone ever tries to do must inevitably benefit Big Tech no matter what.

      3. Claim that, because major players sometimes favor it, they must benefit. Note that often they do not in fact favor it, and instead are engaging in ‘regulatory capture’ via sabotaging the parts that would apply to them.

      4. Claim that small players could not afford to abide by the rules, they would be too expensive. Which to me mostly translates to ‘we would like to be irresponsible, that is how we intended to compete with relatively responsible big players, and if we are held to the same standards then we cannot compete.’

      5. Claim that small players could not afford to abide by the rules, they would find this impossible, the rules will outlaw what they want to do. Which to me again translates to the same thing. You wanted to not be responsible for the safety or consequences of your actions, you cannot abide an ordinary set of regulatory rules with your business model and technical plans, and you now are coming crying that you want it to be one way.

      6. Claim that EA is known to be bad, so anything they advocate must be bad, therefore it must be regulatory capture.

      7. The unfortunate fact that some of those in EA decided that it would be a good idea to buy some influence at OpenAI and to fund Anthropic, creating the appearance of a conflict of interest, and potentially a real one.

    8. I do want to reiterate that claim (1) points at a real thing, that the existence of regulation at all will tend to trend towards capture over time; it is a part of how our system works and should be taken into account. That is one reason to keep the rules as simple as possible, for example. But the actual claims, I believe, are highly overblown, and in many cases made in highly bad faith.

    9. As Connor notes (2: 05: 15) it is very much not a coincidence that e/acc opinions are overwhelmingly concentrated at corporations for whom the e/acc position constitutes talking their book or a way to seek deal flow, and a method of self-justification. Of course there is also selection here, where those who believe in boldly going ahead and building in various senses will both try to build and embrace e/acc.

    10. But it is important to notice that there is this particular area, where some very loud (and obnoxious) people are highly concentrated and perhaps even a majority, versus everywhere else, where they are almost non-existent and theirs is the most unpopular cause anyone polls, with a net favorability of -51. Do not confuse Twitter for reality.

    11. It is also important to note that of the two firms advocating somewhat for action on safety, OpenAI was founded explicitly because of concerns about AI existential risk, and Anthropic was founded by employees who thought OpenAI was itself unsafe and left because of this to do something pro-safety. So it is not in any way suspicious that they might be concerned now about existential risk.

  65. (2: 05: 40) Jezos brings up the battle over OpenAI, calling it a ‘decapitation attempt’ and that the company almost imploded.

    1. I once again reiterate that the incident was not about safety, Altman both started it and chose to risk the company imploding to fight back. See OpenAI: The Battle of the Board, and if you need more, OpenAI: Leaks Confirm the Story. This simply was not what e/acc people want it to be.

    2. I am very tired of the continuous attempts to write a false narrative here.

    3. There is also odd thinking regarding Sam Altman, who it is claimed both is conspiring to use safety as a fig leaf for regulatory capture by advocating too strongly for safety without making much in the way of specific proposals, and also was supposedly removed because of a fight over safety, so then his return became this huge claimed victory and he became a hero to e/acc folks.

    4. Connor asks whether, if OpenAI had the flaw of not being aligned to shareholders and growth, it would have been better for it to be an inefficient institution that died. Jezos says no.

  66. (02: 07: 00) Love for real competition, not playing artificially nice. But (2: 08: 30) Connor points out you can’t do this naively, you have to have regulations and tools that account for market failures. Jezos responds the issues usually come from regulatory capture and preventing disruption.

    1. Again regulatory capture and prevention of disruption are historically huge issues, and currently huge issues in most of the economy. But that is not why you have the central market failure problems.

    2. It is more that, if you don’t use regulation beyond the free market at all you get various market failures (and at the extremes, you get the corporations starting to use force and acting like governments and the worst of both worlds), and the cost of that gets unbounded.

    3. So you need some rules to deal with that, in the modern world there are not simple ways to implement that in many cases, so you have no choice but to have some amount of regulatory capture, you try to minimize it, do cost/benefit and talk price.

    4. This in turn causes many of our current problems right now, and could easily pose other massive problems in the future in AI and elsewhere, no doubt.

  67. (02: 09: 15) Connor brings up monopoly as one of the cases of market failure, such as AT&T, where government intervention makes sense, that pure free markets do not in practice maximize effective competition. Jezos mentions TSMC and Nvidia today, and says that when incumbents distort the legal system to deepen their moat ‘that is when the problems arise and is what we are trying to avoid.’

    1. Again, some of the problems are that. Others are other things.

    2. Jezos bites the bullet and endorses anti-monopoly legislation. He says it wasn’t time to break up OpenAI because 90% of the ecosystem was depending on them. Which is an odd time to not be concerned about a monopoly (not that I want to break up OpenAI, I don’t).

  68. (2: 11: 15) Connor expands this to ask if regulations expanding competition would be e/acc compatible. Jezos says yes. So it is indeed about the impact, regulations that improve acceleration are welcome.

  69. (2: 13: 45) Connor asks, if a regulation results in less competition but better products, is that good or bad? Jezos says he is very skeptical that is possible, that it could ever outcompete the alternative, but says he would do what was positive for growth.

    1. Connor says he’s not claiming this is true in any particular case, but I would note we have historical precedents for this, do we not? Purely as stated, many classic regulatory regimes qualify. Some of them ultimately went too far, some even to the point of being net negative over time. But yes, sometimes you get meat inspectors, and this ‘decreases competition’ but you can imagine being very happy you did that.

    2. That doesn’t mean you can get a good outcome in a given case, but it is not an outlandish thing to suggest might occur.

  70. (02: 15: 00) Jezos says we need freedom of information, and we need AI for everyone so it can prevent us from being cognitively hacked.

    1. It always seems like those with such warnings are imagining a very strange failure mode, where people do not have the ability to use an AI that is not being weaponized against them, or effectively do not have an AI at all. Or they fully make the leap to a dystopian tyranny with a monopoly on information that is continuously hacking and oppressing everyone forever.

    2. There seems to be lots of room for a compromise that prevents this?

    3. Even in the fully free case, the vast majority of citizens are going to get their AIs by buying them from a large corporation or government (or one that offers them for free) and will have no ability or desire to fine-tune or otherwise do the things that people are claimed to need to be able to do, the same as most other past sources of information.

    4. Even in the case where frontier AI is limited to a handful of major players who have various restrictions they impose on use, most of the protections involved would still be available to most consumers, in exactly the form they would have otherwise had them, and if anything they would have fewer threat models to worry about, if only via misuse vectors.

    5. Yes, as Jezos notes (2: 15: 45) the few players could bias the information in some cases, again this is a very well-known issue, and in the extreme it could get bad, but I definitely feel like this falls under the ‘you have much bigger problems’ umbrella unless you are dealing with a true singleton tyranny.

  71. (2: 19: 00) Jezos says there won’t be a fully decentralized system, that we are pushing in that direction, but the optimum is a hierarchical cybernetic control system. That e/acc is directionally correct on the current margin given the current situation and current pressures, and that is what matters. ‘Each man is his own island’ wouldn’t work.

    1. Again, I do think directionality is too simplistic, but this is a vast improvement, where are these reasonable dudes online?

  72. (2: 20: 45) Jezos asks Connor again, you say we can do better, but how? What would be this better way? Connor says we need to do better and Jezos and I agree with that, but only concrete proposals can be implemented.

    1. Connor says correctly that we have so much better knowledge of many things that the Enlightenment philosophers lacked access to and we can therefore design better systems, but you still have to actually do that.

    2. And then you have to get people to implement the new system, not easy.

    3. Right now, there are some very clear concrete proposals on the table on AI. For example, here are Jaan Tallinn’s priorities, which I consider highly reasonable as goals, and we can talk price around things like what the compute limit should be over time, how hard that limit should be, and what it should take to be able to show you can safely exceed it at least somewhat. And these are not the only proposals. The key is that at some point you have to deal with the concrete.

    4. This was not true a year ago. Back then I felt there were not good concrete proposals let alone a consensus on those proposals among advocates of safety.

    5. Connor’s ideal interventions would be more draconian and costly than mine.

    6. But as Connor says later (2: 22: 55) no one has a Utopian perfect system or all the answers.

    7. Ideally we would of course remake our institutions far beyond the question of AI. Here there is less consensus, the problem of implementation seems vastly harder, but we also do have lots of very good ideas for improvements.

    8. If Jezos, Connor and I were sitting down to improve things in other areas (or also in AI I strongly suspect), there are tons of things we could easily agree upon, but of course good luck somehow passing them.

  73. (2: 22: 00) Jezos frames the market as having an uninformative open prior versus the option of having an informed prior, and that startups and others struggle with what mix of these to choose. Connor says of course we should use an informed prior, we would be stupid not to here, and when we say uninformed prior we never actually mean uninformed.

  74. (2: 23: 20) Jezos likens the informed prior to being able to beat the stock market.

    1. I strongly agree with Connor that this is a very different type of complex system, that it is highly plausible that we could find improvements to current institutions without being able to beat the market or violate the EMH.

    2. I do happen to think the EMH is false and you can absolutely beat the market, so there is that too. How many of you are highly overweight Nvidia stock? I am going to claim that in general those advocating for AI safety (and also those advocating directly against it most vocally) have been beating the market for a while now as a group. Short term S&P 500 prices are usually mostly efficient as Connor claims, but of course there are obvious exceptions.

    3. They continue to discuss this question in technical terms, Connor (2: 26: 30) points out that Jezos’s arguments prove too much.

  75. (2: 27: 00) It once again comes down to Jezos ‘not wanting to hand the keys to the future to today’s institutions.’

    1. Obviously we would all prefer to have or build better institutions instead.

    2. Failing that, well, is the alternative to crypto-style burn the keys?

  76. (2: 28: 30) Similar to the not-real-but-too-good quote about Gandhi saying when asked what he thought of Western Civilization that it would be a good idea, Connor reiterates that good institutional design has not been tried, that the amount of effort put towards this has been miniscule. Jezos replies that he tried joining Google and found that good institutions are downstream of good culture and so he is doing cultural engineering via e/acc.

    1. Like Connor I have mad respect that he is at least attempting something to fix the problems he sees, even though like Connor I think he is wrong and is making everything massively worse.

    2. And Connor points out that the market for such things is so inefficient that some dude in Quebec (Jezos) could post a set of random stuff as an alt on Twitter and find tons of alpha from their own perspective. Why didn’t anyone else do it?

    3. I am constantly wondering why I am the only person doing various things. So many of the things I write or suggest or do seem like highly natural first things someone would attempt, that any sane civilization would task many people with, and instead it is clear that if I didn’t do it, it wouldn’t be done.

    4. Whether I do a good job with those tasks, of course, is a distinct question. So is the extent to which the world where I did not do those tasks ends up looking counterfactually different, or in which direction.

    5. So I think Connor is very right at (2: 31: 00) that the fact that Jezos happened to be himself and actually try to make this happen made a huge counterfactual difference, that these things don’t happen the same way without him.

    6. And I also strongly endorse that the world has a huge agency deficit, those with actual agency are extremely difficult to find and have huge oversize impact. You, reader, could become one of those people if you aren’t one yet. There are no adults in the room. We all agree here.

  77. (2: 33: 00) Discussion about how much uncertainty and confidence one should have in one’s models, and Jezos says you should demand high certainty before doing anything on a regulatory level given that such decisions tend to be one-way. As he says, ‘it’s all risk reward.’

    1. Definitely seems like things are going around in circles at this point.

  78. (2: 38: 00) Connor presents once again the thesis that if your plan is to keep rolling the dice on new techs and on letting nature take its course, eventually you lose, existential failure. And he asks, why is now not the time to act? When will be the time to act? Do you have a plan or core model how to do that? Jezos says he plans to play it by ear and things are moving too fast, but as Connor notes things will only move faster in the future. Jezos proposes to act when there is ‘stability in a current trend.’ But the actual development of AI capabilities explicitly does not count as such a trend.

    1. Consider the parallel to the St. Petersburg problem (a toy version is sketched after this list).

    2. Stability is not coming on its own, quite the opposite.

    3. This seems over and over to come down to: Jezos thinks of regulations or giving power to institutions as the way we lose control over the future, rather than the risk of losing control either to future AIs or to the dynamics of competition and selection (and his term, ‘growth’) given the existence of those AIs, or a combination thereof.
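To cash out the St. Petersburg-style worry from sub-item 1 above (in the repeated all-or-nothing form these discussions usually invoke, not the original coin-flip game), here is a toy sketch. The multiplier and win probability are my own illustrative assumptions.

```python
def after_n_rounds(multiplier: float, win_prob: float, rounds: int):
    """Each round: with probability `win_prob` multiply holdings by `multiplier`,
    otherwise lose everything. Returns (expected value, chance of never going bust)."""
    return (multiplier * win_prob) ** rounds, win_prob ** rounds

# Illustrative parameters: each round is a +50% expected-value bet.
for n in (1, 10, 30):
    ev, alive = after_n_rounds(3.0, 0.5, n)
    print(f"rounds={n}: expected value x{ev:,.1f}, survival probability {alive:.2e}")
# Expected value compounds without bound (about 192,000x after 30 rounds)
# while the chance of still being around falls toward zero (about 1e-9).
```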

  79. (2: 45: 00) Jezos points out that OpenAI’s plan is to claim that the constant rollout of cutting edge technology is the most ethical and safest way forward, iterated deployment. Which is indeed their claim. Connor says they are wrong, and they’re going to kill us, because lmao this is obvious bullshit from OpenAI. Jezos finds that claim interesting, it’s odd he didn’t know that was Connor’s position.

    1. I’d add that if you cite OpenAI’s plan in saying what you think will keep us safe it seems odd to also accuse them of engaging in regulatory capture when they talk about safety. You can reconcile it but it’s weird and suspicious.

    2. Jezos says I’m launching a rocket, I’m not going to chart out all the optimal values at this stage, I’m going to adjust according to sensors. But that’s a hell of a metaphor. Rocket launches are meticulously planned, there is huge attention to safety and robust safety protocols, they often blow up, they have rather exact flight paths with only minimal ability to do correction in-flight. Jezos says you have ‘a rough idea’ where the rocket is going to go, and… well, wow, I am very surprised no one has clipped this yet.

    3. Jezos says then that with a rocket we have a very strong prior from Newtonian mechanics on where the rocket will go. And yes, we do, but you know what we would not do if we did not have that prior? Launch the rocket.

  80. (2: 48: 00) After Connor tries the ‘do you think these statements would be comforting to Neanderthals?’ line, they conclude with Jezos taking a bold pro-death stance, saying it is part of the cyclical adaptation process, that constant fading out of the old is important. Connor summarizes the position as ‘letting Jesus take the wheel.’

    1. They have a strong positive note saying they should talk more, and that they understand each other better.

    2. Connor warns Jezos that he thinks Jezos’s followers do not believe what Jezos thinks they believe. And if Jezos thinks they believe what Jezos is advocating for in this talk, then I am confident Connor is right.

    3. Connor’s ‘punchline’ proposal is to create a fixed area of pure raw competition, but impose strict rules on what people can and cannot do. And yes, like all other rules, they need to be ultimately enforced by violence, and someone needs to be watching the watchers. And yes, this will involve taking some hits to our optionality, such as not having the ability to go kill someone who annoys you.

    4. Jezos responds our institutions are so slow that anything we do will be net negative, so until we fix the institutions we should do nothing.

    5. They close with thanks for finding common ground.

I thought this was a good discussion and debate. The participants said so as well.

There are a few key cruxes or questions here that seem fruitful to explore.

  1. What is good in life? What do we value? Big questions!

  2. Can we design better institutions that we might be able to implement? Is there a path to improving the ones we have?

  3. How big is the risk and cost of tyranny? What would cause this risk and cost to increase or decrease by how much?

  4. How do we design rules and regulations that head off the things we want to prevent, while mitigating the risks both of tyranny and of stalling things we want?

  5. What is the right way to bound and shape competition that works for us?

  6. What is the thing we are worried about locking in? Should we worry about locking in regulatory rules, or a ruling regime of some sort, among humans? Or should we worry more about humans losing control over the future entirely, either to AIs or otherwise? And how can we trade off these risks?

  7. What would happen if ASIs were unleashed to compete with each other and with humans, with those most ‘aligned with growth’ being fruitful and multiplying, and those that are not perishing? Would we be fruitful or perish, and how quickly? Are there ways to head off this outcome at various points? Where is the point of no return?

  8. If it is too early to act now, and things are moving too fast now, when will things later be ready, or move slower? What would be the plan? Is the second best time to act right now, and if not, why not? And how can we expect to respond to an exponential neither too early nor too late?

  9. How can we deal with the regulatory and other burdens placed upon builders throughout our society, in technology and otherwise, in the many places we all agree that they are doing far more harm on the margin than good?

  10. How can we take this cooperative discourse and make it the norm and rule, rather than the exception?

These and more are excellent questions that came up in various forms, many asked by Jezos to Leahy. You could write many books, have endless dialogues.

Alas, the follow-up seems to have been the re-emergence of Caustic Beff Jezos.

First, he describes Connor Leahy as grandstanding. I would say that there was far more dissection of Jezos’s positions in the first two hours than of Leahy’s, but that seemed appropriate to me, as we are mostly clear where Leahy sits, and he did lay out his positions later. One could say that Leahy was asking somewhat gotcha-like questions in places, but I think that there was a clear purpose behind them.

Beff Jezos: I came in for a good faith discussion and was met with a non-debate free-form attempt at grandstanding.

Past the first two hours we see a bit more eye to eye and it becomes more of a discussion. In any case, enjoy, folks!

It’s what the people wanted.

Next up, Yud?

Connor Leahy: I came in for a grilling and dissection of my opponent’s ideas and my own, unfortunately my opponent seems to think otherwise and would have preferred things being nicer.

Oh well. Let people watch and judge for themselves!

gg, thanks for playing!

Which is all fine and good, and the invitation to Eliezer Yudkowsky is welcome, whether or not that actually happens.

Then he went back to his old endless stream of memes and vibing. It was actively painful to click through to verify the situation, and yes, I knew exactly what I was expecting and no one is forced to follow him. And then… well…

Connor Leahy: Unfortunately, I feel obligated to take back what I said about @BasedBeffJezos, he is much crazier than I thought he was. Guillaume, I hope you get help, because come on man, this ain’t healthy.

What brought that on? He quotes AI Safety Memes making a number of claims that I am not going to check or mention further, but one is both highly on point and very easy to fact check, since there is a transcript.

Which is the claim that Connor was calling for violence, with Jezos continuing to label his opponents as future terrorists, no matter what everyone constantly says and does.

Beff Jezos: The Doomers are trying to rile up the crazies to do something to me. Notice how he casually mentioned travelling to SF to k*ll me during the debate. Trying to hyperstition violence from his likely deranged following. Despicable.

Beff Jezos: The Doomer cult will eventually resort to violence and it won’t be pretty.

Beff Jezos: Step 1) Call your opponent evil to rile up the crazies. Step 2) Casually allude that *you* specifically can’t come to SF to k*ll me. Step 3) Pretend you didn’t do that. Step 4) Gaslight the other party and deny you said such an irresponsible thing. From the transcript:

All right, sure. Look at the context, ideally watch the clip, judge for yourself. Here is the context, with Leahy talking about society needing to have bounds on competition:

Leahy: But if you expand this [competition] to encompass literally everything, you predictably end in disaster. This is what I call civilization. Civilization is not about being nice.

It is about we have some rules, you know, no killing the other guy, you know, no poisoning. Do we respect those rules?

Jezos: How are they enforced?

Leahy: They need to be enforced by violence.

Jezos: Yeah, but then who? Who keeps those people in check, right? It’s always.

Leahy: This is a good question. You know this, but this is a design question. And I’m not saying this is easy, but we’ve done a hell of a lot better than random.

Our current civilization living in the United States or over here in London is a hell of a lot better than living in Somalia or whatever. You know, there is plenty of more restrictions that we have here.

So for me personally, coordination is about taking a hit. It’s about saying I will willingly surrender some of the things that you do. For example, I can’t go over to San Francisco and murder this guy because he annoys me, because I wouldn’t because that’s bad. I surrender this power.

This is the fundamental idea of the social contract, and the idea of state monopoly on violence, and there being bounds on competition and our actions. And it is Leahy not pretending the world works other than in the way that it works, that we sleep soundly in our beds thanks to men with guns tasked with ensuring it is so. We agree to never use violence on or kill other people. Civilization and law and peace, putting bounds on competition, are how we are able to have these discussions and use words rather than bullets.

Was this a tactical error on Leahy’s part, opening up the opportunity for Jezos to make this interpretation? Yes, on reflection there was no need for it, it risks more heat than it brings light, it is a mistake, and Jezos at the time reacted exactly correctly to note that it was a mistake but without any actual substantive concern.

Does it reflect any kind of actual threat of violence? No.

Does it represent a call to others to go commit violence? No.

That is obvious, highly overdetermined nonsense. Either it is deliberate nonsense, or a sign of extreme and unhealthy paranoia and disconnection from reality that was not on display in the debate, but is compatible with other statements attributed to Jezos.

So unfortunately that is where we leave it. We had an appearance by a relatively reasonable person, who made actual arguments and claims that could be explored in more detail, and promises to do exactly that.

Then in public we continued to get something entirely different.

I am happy to continue discussing the questions above, or other good questions. And I am glad I did this once. But I see no need to ever do it again.

On the Debate Between Jezos and Leahy Read More »

as-if-two-ivanti-vulnerabilities-under-exploit-weren’t-bad-enough,-now-there-are-3

As if two Ivanti vulnerabilities under exploit weren’t bad enough, now there are 3

CHAOS REIGNS —

Hackers looking to diversify began mass-exploiting a new vulnerability over the weekend.


Mass exploitation began over the weekend for yet another critical vulnerability in widely used VPN software sold by Ivanti, as hackers already targeting two previous vulnerabilities diversified, researchers said Monday.

The new vulnerability, tracked as CVE-2024-21893, is what’s known as a server-side request forgery. Ivanti disclosed it on January 22, along with a separate vulnerability that so far has shown no signs of being exploited. Last Wednesday, nine days later, Ivanti said CVE-2024-21893 was under active exploitation, aggravating an already chaotic few weeks. All of the vulnerabilities affect Ivanti’s Connect Secure and Policy Secure VPN products.

A tarnished reputation and battered security professionals

The new vulnerability came to light as two other vulnerabilities were already under mass exploitation, mostly by a hacking group researchers have said is backed by the Chinese government. Ivanti provided mitigation guidance for the two vulnerabilities on January 11, and released a proper patch last week. The Cybersecurity and Infrastructure Security Agency, meanwhile, mandated all federal agencies under its authority disconnect Ivanti VPN products from the Internet until they are rebuilt from scratch and running the latest software version.

By Sunday, attacks targeting CVE-2024-21893 had mushroomed, from hitting what Ivanti said was a “small number of customers” to a mass base of users, research from security organization Shadowserver showed. The steep line in the right-most part of the following graph tracks the vulnerability’s meteoric rise starting on Friday. At the time this Ars post went live, the exploitation volume of the vulnerability exceeded that of CVE-2023-46805 and CVE-2024-21887, the previous Ivanti vulnerabilities under active targeting.

Shadowserver

Systems that had been inoculated against the two older vulnerabilities by following Ivanti’s mitigation process remained wide open to the newest vulnerability, a status that likely made it attractive to hackers. There’s something else that makes CVE-2024-21893 attractive to threat actors: because it resides in Ivanti’s implementation of the Security Assertion Markup Language (SAML)—an open standard that handles authentication and authorization between parties—people who exploit the bug can bypass normal authentication measures and gain access directly to the administrative controls of the underlying server.

Exploitation likely got a boost from proof-of-concept code released by security firm Rapid7 on Friday, but the exploit wasn’t the sole contributor. Shadowserver said it began seeing working exploits a few hours before the Rapid7 release. All of the different exploits work roughly the same way. Authentication in Ivanti VPNs occurs through the doAuthCheck function in an HTTP web server binary located at /root/home/bin/web. The endpoint /dana-ws/saml20.ws doesn’t require authentication. As this Ars post was going live, Shadowserver counted a little more than 22,000 instances of Connect Secure and Policy Secure.

Shadowserver
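The detail that matters most for defenders is that unauthenticated endpoint: if /dana-ws/saml20.ws answers at all, the appliance is worth treating as exposed until it has been patched or rebuilt. Below is a minimal, purely illustrative reachability check in Python. It is not an exploit and it is not how Shadowserver actually counts instances; the hostname, the status-code heuristic, and the use of the `requests` library are all assumptions made for the sketch.

```python
# Minimal sketch of a defensive reachability check (not an exploit).
# Assumption: a Connect Secure / Policy Secure appliance answers on HTTPS
# and exposes the SAML web service endpoint named in the article.
import requests
import urllib3

urllib3.disable_warnings()  # appliances commonly use self-signed certs

SAML_ENDPOINT = "/dana-ws/saml20.ws"

def saml_endpoint_reachable(host: str, timeout: float = 5.0) -> bool:
    """Return True if the unauthenticated SAML endpoint answers at all."""
    url = f"https://{host}{SAML_ENDPOINT}"
    try:
        resp = requests.get(url, timeout=timeout, verify=False)
    except requests.RequestException:
        return False
    # Heuristic (an assumption): anything other than an auth rejection
    # suggests the endpoint is exposed and the box needs attention.
    return resp.status_code not in (401, 403)

if __name__ == "__main__":
    host = "vpn.example.com"  # hypothetical appliance
    print(host, "exposed:", saml_endpoint_reachable(host))
```

A real inventory tool would validate certificates and check the installed patch level rather than leaning on a single status code, but the point stands: the vulnerable surface is reachable before any login.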

VPNs are an ideal target for hackers seeking access deep inside a network. The devices, which allow employees to log into work portals using an encrypted connection, sit at the very edge of the network, where they respond to requests from any device that knows the correct port configuration. Once attackers establish a beachhead on a VPN, they can often pivot to more sensitive parts of a network.

The three-week spree of non-stop exploitation has tarnished Ivanti’s reputation for security and battered security professionals as they have scrambled—often in vain—to stanch the flow of compromises. Compounding the problem was a slow patch time that missed Ivanti’s own January 24 deadline by a week. Making matters worse still: hackers figured out how to bypass the mitigation advice Ivanti provided for the first pair of vulnerabilities.

Given the false starts and high stakes, CISA’s Friday mandate of rebuilding all servers from scratch once they have installed the latest patch is prudent. The requirement doesn’t apply to non-government agencies, but given the chaos and difficulty securing the Ivanti VPNs in recent weeks, it’s a common-sense move that all users should have taken by now.

As if two Ivanti vulnerabilities under exploit weren’t bad enough, now there are 3 Read More »

trio-wins-$700k-vesuvius-challenge-grand-prize-for-deciphering-ancient-scroll

Trio wins $700K Vesuvius Challenge grand prize for deciphering ancient scroll

Text from one of the Herculaneum scrolls, unseen for 2,000 years, has been deciphered. Roughly 95 percent of the scroll remains to be read.

Vesuvius Challenge

Last fall we reported on the use of machine learning to decipher the first letters from a previously unreadable scroll found in an ancient Roman villa at Herculaneum—part of the 2023 Vesuvius Challenge. Tech entrepreneur and challenge co-founder Nat Friedman has now announced via X (formerly Twitter) that the grand prize of $700,000 has been awarded for producing the first readable text. The three winning team members are Luke Farritor, Yousef Nader, and Julian Schilliger.

As previously reported, the ancient Roman resort town Pompeii wasn’t the only city destroyed in the catastrophic 79 AD eruption of Mount Vesuvius. Several other cities in the area, including the wealthy enclave of Herculaneum, were fried by clouds of hot gas called pyroclastic pulses and flows. But still, some remnants of Roman wealth survived. One palatial residence in Herculaneum—believed to have once belonged to a man named Piso—contained hundreds of priceless written scrolls made from papyrus, singed into carbon by volcanic gas.

The scrolls stayed buried under volcanic mud until they were excavated in the 1700s from a single room that archaeologists believe held the personal working library of an Epicurean philosopher named Philodemus. There may be even more scrolls still buried on the as-yet-unexcavated lower floors of the villa. The few opened fragments helped scholars identify a variety of Greek philosophical texts, including On Nature by Epicurus and several by Philodemus himself, as well as a handful of Latin works. But the more than 600 rolled-up scrolls were so fragile that it was long believed they would never be readable since even touching them could cause them to crumble.

Brent Seales’ lab at the University of Kentucky has been working on deciphering the Herculaneum scrolls for many years. He employs a different method of “virtually unrolling” damaged scrolls, which he used in 2016 to “open” a scroll found on the western shore of the Dead Sea, revealing the first few verses from the book of Leviticus. The team’s approach combined digital scanning via micro-computed tomography—a noninvasive technique often used for cancer imaging—with segmentation to digitally create pages, augmented with texturing and flattening techniques. Then they developed software (Volume Cartography) to unroll the scroll virtually.

Brent Seales, Seth Parker, and Michael Drakopoulos at the particle accelerator.

Vesuvius Challenge

The older Herculaneum scrolls, however, were written with carbon-based ink (charcoal and water), so one would not get the same fluorescing in the CT scans. But Seales thought the scans could still capture minute textural differences distinguishing the areas of papyrus that contained ink from the blank areas, and he trained an artificial neural network to do just that. And a few years ago, he had two of the intact scrolls analyzed at a synchrotron radiation lab in Oxford.
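To make “train a network to spot ink from texture” concrete, here is a minimal sketch, assuming PyTorch and assuming the scan data has already been cut into small labeled patches. The 32×32 patch size, the tiny architecture, and the random stand-in tensors are all illustrative; the models actually used in the challenge are far more elaborate.

```python
# Minimal sketch of binary ink detection on CT texture patches (PyTorch).
# Random tensors stand in for real labeled data; everything here is
# simplified for illustration.
import torch
import torch.nn as nn

class InkDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 1),             # logit: ink vs. blank papyrus
        )

    def forward(self, x):
        return self.net(x)

# Stand-in data: single-channel CT patches with 0/1 ink labels.
patches = torch.randn(64, 1, 32, 32)
labels = torch.randint(0, 2, (64, 1)).float()

model = InkDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(10):  # a real run would iterate over a labeled dataset
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimizer.step()
```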

Then tech entrepreneurs Friedman and Daniel Gross heard about Seales’ work, and they all decided to launch the Vesuvius Challenge in March last year, reasoning that crowdsourcing would help decipher the scrolls’ contents that much faster. Seales released all the scans and code to the public as well as images of the flattened pieces. Some 1,500 teams have been collaborating on the challenge through Discord, and as each milestone is reached, the winner’s code is also made available so everyone can continue to build on those advances.

Trio wins $700K Vesuvius Challenge grand prize for deciphering ancient scroll Read More »

4chan-daily-challenge-sparked-deluge-of-explicit-ai-taylor-swift-images

4chan daily challenge sparked deluge of explicit AI Taylor Swift images


4chan users who have made a game out of exploiting popular AI image generators appear to be at least partly responsible for the flood of fake images sexualizing Taylor Swift that went viral last month.

Graphika researchers—who study how communities are manipulated online—traced the fake Swift images to a 4chan message board that’s “increasingly” dedicated to posting “offensive” AI-generated content, The New York Times reported. Fans of the message board take part in daily challenges, Graphika reported, sharing tips to bypass AI image generator filters and showing no signs of stopping their game any time soon.

“Some 4chan users expressed a stated goal of trying to defeat mainstream AI image generators’ safeguards rather than creating realistic sexual content with alternative open-source image generators,” Graphika reported. “They also shared multiple behavioral techniques to create image prompts, attempt to avoid bans, and successfully create sexually explicit celebrity images.”

Ars reviewed a thread flagged by Graphika where users were specifically challenged to use Microsoft tools like Bing Image Creator and Microsoft Designer, as well as OpenAI’s DALL-E.

“Good luck,” the original poster wrote, while encouraging other users to “be creative.”

OpenAI has denied that any of the Swift images were created using DALL-E, while Microsoft has continued to claim that it’s investigating whether any of its AI tools were used.

Cristina López G., a senior analyst at Graphika, noted that Swift is not the only celebrity targeted in the 4chan thread.

“While viral pornographic pictures of Taylor Swift have brought mainstream attention to the issue of AI-generated non-consensual intimate images, she is far from the only victim,” López G. said. “In the 4chan community where these images originated, she isn’t even the most frequently targeted public figure. This shows that anyone can be targeted in this way, from global celebrities to school children.”

Originally, 404 Media reported that the harmful Swift images appeared to originate from 4chan and Telegram channels before spreading on X (formerly Twitter) and other social media. Attempting to stop the spread, X took the drastic step of blocking all searches for “Taylor Swift” for two days.

But López G. said that Graphika’s findings suggest that platforms will continue to risk being inundated with offensive content so long as 4chan users are determined to keep challenging each other to subvert image generator filters. Rather than expecting platforms to chase down the harmful content, López G. recommended that AI companies get ahead of the problem, taking responsibility for outputs by paying attention to the evolving tactics of toxic online communities that are openly reporting precisely how they’re getting around safeguards.

“These images originated from a community of people motivated by the ‘challenge’ of circumventing the safeguards of generative AI products, and new restrictions are seen as just another obstacle to ‘defeat,’” López G. said. “It’s important to understand the gamified nature of this malicious activity in order to prevent further abuse at the source.”

Experts told The Times that 4chan users were likely motivated to participate in these challenges for bragging rights and to “feel connected to a wider community.”

4chan daily challenge sparked deluge of explicit AI Taylor Swift images Read More »

google-and-mozilla-don’t-like-apple’s-new-ios-browser-rules

Google and Mozilla don’t like Apple’s new iOS browser rules

Surely US regulators will help us… —

Google and Mozilla want iOS’s new EU browser rules to apply worldwide.

Extreme close-up photograph of finger above Chrome icon on smartphone.

Apple is being forced to make major changes to iOS in Europe, thanks to the European Union’s “Digital Markets Act.” The act cracks down on Big Tech “gatekeepers” with various interoperability, fairness, and privacy demands, and part of the changes demanded of Apple is to allow competing browser engines on iOS. The change, due in iOS 17.4, will mean rival browsers like Chrome and Firefox get to finally bring their own web rendering code to iPhones and iPads. Despite what sounds like a big improvement to the iOS browser situation, Google and Mozilla aren’t happy with Apple’s proposed changes.

Earlier, Mozilla spokesperson Damiano DeMonte gave a comment to The Verge on Apple’s policy changes and took issue with the decision to limit the browser changes to the EU. “We are still reviewing the technical details but are extremely disappointed with Apple’s proposed plan to restrict the newly-announced BrowserEngineKit to EU-specific apps,” DeMonte said. “The effect of this would be to force an independent browser like Firefox to build and maintain two separate browser implementations—a burden Apple themselves will not have to bear.” DeMonte added: “Apple’s proposals fail to give consumers viable choices by making it as painful as possible for others to provide competitive alternatives to Safari. This is another example of Apple creating barriers to prevent true browser competition on iOS.”

Apple’s framework that allows for alternative browser engines is called “BrowserEngineKit” and already has public documentation as part of the iOS 17.4 beta. Browser vendors will need to earn Apple’s approval to use the framework in a production app, and like all iOS apps, that approval will come with several requirements. None of the requirements jump out as egregious: Apple wants browser vendors to have a certain level of web standards support, pledge to fix security vulnerabilities quickly and protect the user’s privacy by showing the standard consent prompts for access to things like location. You’re not allowed to “sync cookies and state between the browser and any other apps, even other apps of the developer,” which seems aimed directly at Google and its preference to have all its iOS apps talk to each other. The big negative is that your BrowserEngineKit app is limited to the EU, because—surprise—the EU rules only apply to the EU.

Speaking of Google, Google’s VP of engineering for Chrome, Parisa Tabriz, commented on DeMonte’s statement on X, saying, “Strong agree with @mozilla. @Apple isn’t serious about supporting web browser or engine choice on iOS. Their strategy is overly restrictive, and won’t meaningfully lead to real choice for browser developers.”

Today, you can download what look like “alternative” browsers on iOS, like Chrome and Firefox, but these browsers are mostly just skins on top of Apple’s Safari engine. iOS app developers aren’t actually allowed to include their own browser engines, so everything uses Safari’s WebKit engine, with a new UI and settings and sync features layered on top. That means all of WebKit’s bugs and feature support decisions apply to every browser.

Being stuck with Safari isn’t great for users. Over the years, Safari has earned a reputation as “the new IE” from some web developers, due to lagging behind the competition in its support for advanced web features. Safari has gotten notably better lately, though. For instance, in 2023, it finally shipped support for push notifications, allowing web apps to better compete with native apps downloaded from Apple’s cash-cow App Store. Apple’s support of push notifications came seven years after Google and Mozilla rolled out the feature.

More competition would be great for the iOS browser space, but the reality is that competition will mostly be from the other big “gatekeeper” in the room: Google. Chrome is the project with the resources and reach to better compete with Safari, and working its way into iOS will bring the web close to a Chrome monoculture. Google’s browser may have better support for certain web features, but it will also come with a built-in tracking system that spies on users and serves up their interests to advertisers. Safari has a much better privacy story.

Even though only EU users will get to choose from several genuinely different browsers, everyone still has to compete in the EU, and that includes Safari. So even if the rest of the world doesn’t get a real browser choice, competing in the EU browser wars should make the only iOS browser better for everyone. The EU rules have a compliance deadline of March 2024, so iOS 17.4 needs to be out by then. Google and Mozilla have been working on full versions of their browsers for iOS for at least a year now. Maybe they’ll be ready for launch?

Google and Mozilla don’t like Apple’s new iOS browser rules Read More »

eu-right-to-repair:-sellers-will-be-liable-for-a-year-after-products-are-fixed

EU right to repair: Sellers will be liable for a year after products are fixed

Right to repair —

Rules also ban “contractual, hardware or software related barriers to repair.”

A European Union flag blowing in the wind.

Getty Images | SimpleImages

Europe’s right-to-repair rules will force vendors to stand by their products an extra 12 months after a repair is made, according to the terms of a new political agreement.

Consumers will have a choice between repair and replacement of defective products during a liability period that sellers will be required to offer. The liability period is slated to be a minimum of two years before any extensions.

“If the consumer chooses the repair of the good, the seller’s liability period will be extended by 12 months from the moment when the product is brought into conformity. This period may be further prolonged by member states if they so wish,” a European Council announcement on Friday said.

The 12-month extension is part of a provisional deal between the European Parliament and Council on how to implement the European Commission’s right-to-repair directive that was passed in March 2023. The Parliament and Council still need to formally adopt the agreement, which would then come into force 20 days after it is published in the Official Journal of the European Union.

“Once adopted, the new rules will introduce a new ‘right to repair’ for consumers, both within and beyond the legal guarantee, which will make it easier and more cost-effective for them to repair products instead of simply replacing them with new ones,” the European Commission said on Friday.

Rules prohibit “barriers to repair”

The rules require spare parts to be available at reasonable prices, and product makers will be prohibited from using “contractual, hardware or software related barriers to repair, such as impeding the use of second-hand, compatible and 3D-printed spare parts by independent repairers,” the Commission said.

The newly agreed-upon text “requires manufacturers to make the necessary repairs within a reasonable time and, unless the service is provided for free, for a reasonable price too, so that consumers are encouraged to opt for repair,” the European Council said.

There will be required options for consumers to get repairs both before and after the minimum liability period expires, the Commission said:

When a defect appears within the legal guarantee, consumers will now benefit from a prolonged legal guarantee of one year if they choose to have their products repaired.

When the legal guarantee has expired, the consumers will be able to request an easier and cheaper repair of defects in those products that must be technically repairable (such as tablets, smartphones but also washing machines, dishwashers, etc.). Manufacturers will be required to publish information about their repair services, including indicative prices of the most common repairs.

The overarching goal as stated by the Commission is to overcome “obstacles that discourage consumers to repair due to inconvenience, lack of transparency or difficult access to repair services.” To make finding repair services easier for users, the Council said it plans a European-wide online platform “to facilitate the matchmaking between consumers and repairers.”

EU right to repair: Sellers will be liable for a year after products are fixed Read More »

windows-version-of-the-venerable-linux-“sudo”-command-shows-up-in-preview-build

Windows version of the venerable Linux “sudo” command shows up in preview build

sudo start your photocopiers —

Feature is experimental and, at least currently, not actually functional.

Not now, but maybe soon?

Andrew Cunningham

Microsoft opened its arms to Linux during the Windows 10 era, inventing an entire virtualized subsystem to allow users and developers to access a real-deal Linux command line without leaving the Windows environment. Now, it looks like Microsoft may embrace yet another Linux feature: the sudo command.

Short for “superuser do” or “substitute user do” and immortalized in nerd-leaning pop culture by an early xkcd comic, sudo is most commonly used at the command line when the user needs administrator access to the system—usually to install or update software, or to make changes to system files. Users who aren’t in the sudo user group on a given system can’t run the command, protecting the rest of the files on the system from being accessed or changed.

In a post on X, formerly Twitter, user @thebookisclosed found settings for a Sudo command in a preview version of Windows 11 that was posted to the experimental Canary channel in late January. WindowsLatest experimented with the setting in a build of Windows Server 2025, which currently requires Developer Mode to be enabled in the Settings app. There’s a toggle to turn the sudo command on and off and a separate drop-down to tweak how the command behaves when you use it, though as of this writing the command itself doesn’t actually work yet.

The sudo command is also part of the Windows Subsystem for Linux (WSL), but that version of the sudo command only covers Linux software. This one seems likely to run native Windows commands, though obviously we won’t know exactly how it works before it’s enabled and fully functional. Currently, users who want a sudo-like command in Windows need to rely on third-party software like gsudo to accomplish the task.

The benefit of the sudo command for Windows users—whether they’re using Windows Server or otherwise—would be the ability to elevate the privilege level without having to open an entirely separate command prompt or Windows Terminal window. According to the options available in the preview build, commands run with sudo could be opened up in a new window automatically, or they could happen inline, but you’d never need to do the “right-click, run-as-administrator” dance again if you didn’t want to.
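For a sense of what that dance looks like when scripted, here is a minimal sketch in Python using the standard ctypes bindings: check whether the current process is elevated, and if not, relaunch it through the UAC prompt. This is roughly what third-party tools automate for you; it is not Microsoft’s forthcoming sudo, and the helper names are hypothetical.

```python
# Minimal sketch of the manual elevation dance on Windows, assuming Python
# and the standard ctypes bindings. Illustrative only; not the new built-in
# sudo command.
import ctypes
import sys

def is_admin() -> bool:
    """True if the current process already has administrator rights."""
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except AttributeError:
        return False  # ctypes.windll only exists on Windows

def relaunch_elevated() -> None:
    """Re-run this script via the UAC 'Run as administrator' prompt."""
    params = " ".join(sys.argv[1:])
    ctypes.windll.shell32.ShellExecuteW(
        None, "runas", sys.executable, f'"{sys.argv[0]}" {params}', None, 1
    )

if __name__ == "__main__":
    if is_admin():
        print("Already elevated; do the privileged work here.")
    else:
        relaunch_elevated()
```

A built-in sudo would collapse all of that into a single prefixed command, either inline or in a new elevated window, depending on the drop-down setting described above.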

Microsoft regularly tests new Windows features that don’t make it into the generally released public versions of the operating system. This feature could also remain exclusive to Windows Server without making it into the consumer version of Windows. But given the command’s presence in Linux and macOS, it would be a nice quality-of-life improvement for Windows users who spend lots of time staring at the command prompt.

Microsoft is borrowing a longstanding Linux feature here, but that road goes both ways—a recent update to the Linux systemd software added a Windows-inspired “blue screen of death” designed to give users more information about crashes when they happen.

Windows version of the venerable Linux “sudo” command shows up in preview build Read More »

microsoft-in-deal-with-semafor-to-create-news-stories-with-aid-of-ai-chatbot

Microsoft in deal with Semafor to create news stories with aid of AI chatbot

a meeting-deadline helper —

Collaboration comes as tech giant faces multibillion-dollar lawsuit from The New York Times.

Cube with Microsoft logo on top of their office building on 8th Avenue and 42nd Street near Times Square in New York City.

Microsoft is working with media startup Semafor to use its artificial intelligence chatbot to help develop news stories—part of a journalistic outreach that comes as the tech giant faces a multibillion-dollar lawsuit from the New York Times.

As part of the agreement, Microsoft is paying an undisclosed sum of money to Semafor to sponsor a breaking news feed called “Signals.” The companies would not share financial details, but the amount of money is “substantial” to Semafor’s business, said a person familiar with the matter.

Signals will offer a feed of breaking news and analysis on big stories, with about a dozen posts a day. The goal is to offer different points of view from across the globe—a key focus for Semafor since its launch in 2022.

Semafor co-founder Ben Smith emphasized that Signals will be written entirely by journalists, with artificial intelligence providing a research tool to inform posts.

Microsoft on Monday was also set to announce collaborations with journalist organizations including the Craig Newmark School of Journalism, the Online News Association, and the GroundTruth Project.

The partnerships come as media companies have become increasingly concerned over generative AI and its potential threat to their businesses. News publishers are grappling with how to use AI to improve their work and stay ahead of technology, while also fearing that they could lose traffic, and therefore revenue, to AI chatbots—which can churn out humanlike text and information in seconds.

The New York Times in December filed a lawsuit against Microsoft and OpenAI, alleging the tech companies have taken a “free ride” on millions of its articles to build their artificial intelligence chatbots, and seeking billions of dollars in damages.

Gina Chua, Semafor’s executive editor, has been involved in developing Semafor’s AI research tools, which are powered by ChatGPT and Microsoft’s Bing.

“Journalism has always used technology whether it’s carrier pigeons, the telegraph or anything else . . . this represents a real opportunity, a set of tools that are really a quantum leap above many of the other tools that have come along,” Chua said.

For a breaking news event, Semafor journalists will use AI tools to quickly search for reporting and commentary from other news sources across the globe in multiple languages. A Signals post might include perspectives from Chinese, Indian, or Russian media, for example, with Semafor’s reporters summarizing and contextualizing the different points of view, while citing its sources.
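The article gives no implementation details beyond the tools being powered by ChatGPT and Microsoft’s Bing, but the workflow it describes—collect coverage of one event from outlets in several languages, then produce short, attributed summaries for a journalist to contextualize—is straightforward to picture. Here is a purely hypothetical sketch using the OpenAI Python client; the prompt, the model name, and the placeholder excerpts are assumptions, and nothing here should be read as Semafor’s actual tooling.

```python
# Purely illustrative sketch of a multilingual "perspectives" research aid.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical inputs a journalist might paste in: excerpts from coverage
# of the same event in different outlets and languages.
sources = {
    "Chinese state media": "(excerpt of coverage, in Chinese)",
    "Indian business daily": "(excerpt of coverage, in English)",
    "Russian wire service": "(excerpt of coverage, in Russian)",
}

def summarize_perspective(outlet: str, excerpt: str) -> str:
    """Ask the model for a short, attributed summary of one source's framing."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Summarize this source's framing in two sentences, "
                        "in English, and flag anything unverified."},
            {"role": "user", "content": f"Outlet: {outlet}\n\n{excerpt}"},
        ],
    )
    return response.choices[0].message.content

for outlet, excerpt in sources.items():
    print(f"- {outlet}: {summarize_perspective(outlet, excerpt)}")
```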

Noreen Gillespie, a former Associated Press journalist, joined Microsoft three months ago to forge relationships with news companies. “Journalists need to adopt these tools in order to survive and thrive for another generation,” she said.

Semafor was founded by Ben Smith, the former BuzzFeed editor, and Justin Smith, the former chief executive of Bloomberg Media.

Semafor, which is free to read, is funded by wealthy individuals, including 3G Capital founder Jorge Paulo Lemann and KKR co-founder Henry Kravis. The company made more than $10 million in revenue in 2023 and has more than 500,000 subscriptions to its free newsletters. Justin Smith said Semafor was “very close to a profit” in the fourth quarter of 2023.

“What we’re trying to go after is this really weird space of breaking news on the Internet now, in which you have these really splintered, fragmented, rushed efforts to get the first sentence of a story out for search engines . . . and then never really make any effort to provide context,” Ben Smith said.

“We’re trying to go the other way. Here are the confirmed facts. Here are three or four pieces of really sophisticated, meaningful analysis.”

© 2024 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.

Microsoft in deal with Semafor to create news stories with aid of AI chatbot Read More »

facebook-rules-allowing-fake-biden-“pedophile”-video-deemed-“incoherent”

Facebook rules allowing fake Biden “pedophile” video deemed “incoherent”

Not to be misled —

Meta may revise AI policies that experts say overlook “more misleading” content.


A fake video manipulated to falsely depict President Joe Biden inappropriately touching his granddaughter has revealed flaws in Facebook’s “deepfake” policies, Meta’s Oversight Board concluded Monday.

Last year when the Biden video went viral, Facebook repeatedly ruled that it did not violate policies on hate speech, manipulated media, or bullying and harassment. Since the Biden video is not AI-generated content and does not manipulate the president’s speech—making him appear to say things he’s never said—the video was deemed OK to remain on the platform. Meta also noted that the video was “unlikely to mislead” the “average viewer.”

“The video does not depict President Biden saying something he did not say, and the video is not the product of artificial intelligence or machine learning in a way that merges, combines, replaces, or superimposes content onto the video (the video was merely edited to remove certain portions),” Meta’s blog said.

The Oversight Board—an independent panel of experts—reviewed the case and ultimately upheld Meta’s decision despite being “skeptical” that current policies work to reduce harms.

“The board sees little sense in the choice to limit the Manipulated Media policy to cover only people saying things they did not say, while excluding content showing people doing things they did not do,” the board said, noting that Meta claimed this distinction was made because “videos involving speech were considered the most misleading and easiest to reliably detect.”

The board called upon Meta to revise its “incoherent” policies that it said appear to be more concerned with regulating how content is created, rather than with preventing harms. For example, the Biden video’s caption described the president as a “sick pedophile” and called out anyone who would vote for him as “mentally unwell,” which could affect “electoral processes” that Meta could choose to protect, the board suggested.

“Meta should reconsider this policy quickly, given the number of elections in 2024,” the Oversight Board said.

One problem, the Oversight Board suggested, is that in its rush to combat AI technologies that make generating deepfakes a fast, cheap, and easy business, Meta policies currently overlook less technical ways of manipulating content.

Instead of using AI, the Biden video relied on basic video-editing technology to edit out the president placing an “I Voted” sticker on his adult granddaughter’s chest. The crude edit looped a 7-second clip altered to make the president appear to be, as Meta described in its blog, “inappropriately touching a young woman’s chest and kissing her on the cheek.”

Meta making this distinction is confusing, the board said, partly because videos altered using non-AI technologies are not considered less misleading or less prevalent on Facebook.

The board recommended that Meta update policies to cover not just AI-generated videos, but other forms of manipulated media, including all forms of manipulated video and audio. Audio fakes, currently not covered in the policy, the board warned, offer fewer cues to alert listeners to the inauthenticity of recordings and may even be considered “more misleading than video content.”

Notably, earlier this year, a fake Biden robocall attempted to mislead Democratic voters in New Hampshire by encouraging them not to vote. The Federal Communications Commission promptly responded by declaring AI-generated robocalls illegal, but the Federal Election Commission was not able to act as swiftly to regulate AI-generated misleading campaign ads easily spread on social media, AP reported. In a statement, Oversight Board Co-Chair Michael McConnell said that manipulated audio is “one of the most potent forms of electoral disinformation.”

To better combat known harms, the board suggested that Meta revise its Manipulated Media policy to “clearly specify the harms it is seeking to prevent.”

Rather than pushing Meta to remove more content, however, the board urged Meta to use “less restrictive” methods of coping with fake content, such as relying on fact-checkers applying labels noting that content is “significantly altered.” In public comments, some Facebook users agreed that labels would be most effective. Others urged Meta to “start cracking down” and remove all fake videos, with one suggesting that removing the Biden video should have been a “deeply easy call.” Another commenter suggested that the Biden video should be considered acceptable speech, as harmless as a funny meme.

While the board wants Meta to also expand its policies to cover all forms of manipulated audio and video, it cautioned that including manipulated photos in the policy could “significantly expand” the policy’s scope and make it harder to enforce.

“If Meta sought to label videos, audio, and photographs but only captured a small portion, this could create a false impression that non-labeled content is inherently trustworthy,” the board warned.

Meta should therefore stop short of adding manipulated images to the policy, the board said. Instead, Meta should conduct research into the effects of manipulated photos and then consider updates when the company is prepared to enforce a ban on manipulated photos at scale, the board recommended. In the meantime, Meta should move quickly to update policies ahead of a busy election year where experts and politicians globally are bracing for waves of misinformation online.

“The volume of misleading content is rising, and the quality of tools to create it is rapidly increasing,” McConnell said. “Platforms must keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public.”

Meta’s spokesperson told Ars that Meta is “reviewing the Oversight Board’s guidance and will respond publicly to their recommendations within 60 days.”

Facebook rules allowing fake Biden “pedophile” video deemed “incoherent” Read More »