Author name: Shannon Garcia

AT&T acknowledges data leak that hit 73 million current and former users

A lot of leaked data —

Data leak hit 7.6 million current AT&T users, 65.4 million former subscribers.

A person walks past an AT&T store on a city street.

Getty Images | VIEW press

AT&T reset passcodes for millions of customers after acknowledging a massive leak involving the data of 73 million current and former subscribers.

“Based on our preliminary analysis, the data set appears to be from 2019 or earlier, impacting approximately 7.6 million current AT&T account holders and approximately 65.4 million former account holders,” AT&T said in an update posted to its website on Saturday.

An AT&T support article said the carrier is “reaching out to all 7.6 million impacted customers and have reset their passcodes. In addition, we will be communicating with current and former account holders with compromised sensitive personal information.” AT&T said the leaked information varied by customer but included full names, email addresses, mailing addresses, phone numbers, Social Security numbers, dates of birth, AT&T account numbers, and passcodes.

AT&T’s acknowledgement of the leak described it as “AT&T data-specific fields [that] were contained in a data set released on the dark web.” But the same data appears to be on the open web as well. As security researcher Troy Hunt wrote, the data is “out there in plain sight on a public forum easily accessed by a normal web browser.”

The hacking forum has a public version accessible with any browser and a hidden service that requires a Tor network connection. Based on forum posts we viewed today, the leak seems to have appeared on both the public and Tor versions of the hacking forum on March 17 of this year. Viewing the AT&T data requires a hacking forum account and site “credits” that can be purchased or earned by posting on the forum.

Hunt told Ars today that the term “dark web” is “incorrect and misleading” in this case. The forum where the AT&T data appeared “does not meet the definition of dark web,” he wrote in an email. “No special software, no special network, just a plain old browser. It’s easily discoverable via a Google search and immediately shows many PII [Personal Identifiable Information] records from the AT&T breach. Registration is then free for anyone with the only remaining barrier being obtaining credits.”

We contacted AT&T today and will update this article if we get a response.

49 million email addresses

Hunt’s post on March 19 said the leaked information included a file with 73,481,539 lines of data that contained 49,102,176 unique email addresses. Another file with decrypted Social Security numbers had 43,989,217 lines, he wrote.

Hunt, who runs the “Have I Been Pwned” database that lets you check if your email was in a data breach, says the 49 million email addresses in the AT&T leak have been added to his database.
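
For readers who would rather script that check than use the website, Have I Been Pwned exposes a paid v3 API. Below is a minimal sketch in Python; the endpoint, header, and field names follow HIBP’s public documentation, and the email address and key are placeholders, so verify the details against the current docs before relying on them.

```python
import requests

HIBP_API_KEY = "your-hibp-api-key"   # placeholder; requires a paid HIBP key
EMAIL = "user@example.com"           # placeholder address to check

resp = requests.get(
    f"https://haveibeenpwned.com/api/v3/breachedaccount/{EMAIL}",
    headers={
        "hibp-api-key": HIBP_API_KEY,
        "user-agent": "breach-check-example",  # HIBP requires a user agent
    },
    params={"truncateResponse": "false"},
    timeout=10,
)

if resp.status_code == 404:
    print("No known breaches for this address.")
else:
    resp.raise_for_status()
    for breach in resp.json():
        print(breach["Name"], breach["BreachDate"])
```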

BleepingComputer covered the leak two weeks ago, writing that it is the same data involved in a 2021 incident in which a hacker shared samples of the data and attempted to sell the entire data set for $1 million. In 2021, AT&T told BleepingComputer that “the information that appeared in an Internet chat room does not appear to have come from our systems.”

AT&T maintained that position last month. “AT&T continues to tell BleepingComputer today that they still see no evidence of a breach in their systems and still believe that this data did not originate from them,” the news site’s March 17, 2024, article said.

AT&T says data may have come from itself or vendor

AT&T’s update on March 30 acknowledged that the data may have come from AT&T itself, but said it also may have come from an AT&T vendor:

AT&T has determined that AT&T data-specific fields were contained in a data set released on the dark web approximately two weeks ago. While AT&T has made this determination, it is not yet known whether the data in those fields originated from AT&T or one of its vendors. With respect to the balance of the data set, which includes personal information such as Social Security numbers, the source of the data is still being assessed.

“Currently, AT&T does not have evidence of unauthorized access to its systems resulting in exfiltration of the data set,” the company update also said. AT&T said it “is communicating proactively with those impacted and will be offering credit monitoring at our expense where applicable.”

AT&T said the passcodes that it reset are generally four digits and are different from AT&T account passwords. The passcodes are used when calling customer support, when managing an account at a retail store, and when signing in to the AT&T website “if you’ve chosen extra security.”

AT&T acknowledges data leak that hit 73 million current and former users Read More »

Redis’ license change and forking are a mess that everybody can feel bad about

Licensing is hard —

Cloud firms want a version of Redis that’s still open to managed service resale.

AWS data centers built right next to suburban cul-de-sac housing

An Amazon Web Services (AWS) data center under construction in Stone Ridge, Virginia, in March 2024. Amazon will spend more than $150 billion on data centers in the next 15 years.

Getty Images

Redis, a tremendously popular tool for storing data in-memory rather than in a database, recently switched its licensing from an open source BSD license to both a Source Available License and a Server Side Public License (SSPL).
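
For context, Redis typically runs as a networked key-value server that applications use as a cache or fast data store. Here is a minimal sketch using the redis-py client, assuming a local server on the default port:

```python
import redis

# Connect to a local Redis server (default port 6379).
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Cache a value with a one-hour expiry, then read it back.
r.set("session:1234", "cached-profile-data", ex=3600)
print(r.get("session:1234"))

# Redis also provides richer in-memory structures, e.g. counters and lists.
r.incr("pageviews:homepage")
r.lpush("recent:searches", "valkey fork")
```

The licensing dispute is about who may resell that server as a managed service, not about this kind of application-level use.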

The software project and company supporting it were fairly clear in why they did this. Redis CEO Rowan Trollope wrote on March 20 that while Redis and volunteers sponsored the bulk of the project’s code development, “the majority of Redis’ commercial sales are channeled through the largest cloud service providers, who commoditize Redis’ investments and its open source community.” Clarifying a bit, “cloud service providers hosting Redis offerings will no longer be permitted to use the source code of Redis free of charge.”

Clarifying even further: Amazon Web Services (and lesser cloud giants), you cannot continue reselling Redis as a service as part of your $90 billion business without some kind of licensed contribution back.

This generated a lot of discussion, blowback, and action. The biggest thing was a fork of the Redis project, Valkey, that is backed by The Linux Foundation and, critically, also Amazon Web Services, Google Cloud, Oracle, Ericsson, and Snap Inc. Valkey is “fully open source,” Linux Foundation execs note, with the kind of BSD-3-Clause license Redis sported until recently. You might note the exception of Microsoft from that list of fork fans.

As noted by Matt Asay, who formerly ran open source strategy and marketing at AWS, most developers are “largely immune to Redis’ license change.” Asay suggests that, aside from the individual contributions of AWS engineer and former Redis core contributor Madelyn Olson (who contributed in her free time) and Alibaba’s Zhao Zhao, “The companies jumping behind the fork of Redis have done almost nothing to get Redis to its current state.”

Olson told TechCrunch that she was disappointed by Redis’ license change but not surprised. “I’m more just disappointed than anything else.” David Nally, AWS’ current director for open source strategy and marketing, demurred when asked by TechCrunch if AWS considered buying a Redis license from Redis Inc. before forking. “[F]rom an open-source perspective, we’re now invested in ensuring the success of Valkey,” Nally said.

Shifts in open source licensing have triggered previous keep-it-open forks, including OpenSearch (from ElasticSearch) and OpenTofu (from Terraform). With the backing of the Linux Foundation and some core contributors, though, Valkey will likely soon evolve far beyond a drop-in Redis replacement, and Redis is likely to follow suit.

If you’re reading all this and you don’t own a gigascale cloud provider or sit on the board of a source code licensing foundation, it’s hard to know what to make of the fiasco. Every party in this situation is doing what is legally permissible, and software from both sides will continue to be available to the wider public. Taking your ball and heading home is a longstanding tradition when parties disagree on software goals and priorities. But it feels like there had to be another way this could have worked out.

Redis’ license change and forking are a mess that everybody can feel bad about Read More »

Google Podcasts shuts down tomorrow, April 2

Google Listen was the last good Google podcast app —

Building a podcast player into Google Search was always a weird plan.

Each headstone in this miniature, decorative cemetery is for a defunct Google product.

A spooky Halloween display from Google’s Seattle campus.

RIP Google Podcasts. Google’s self-branded podcasting service shuts down tomorrow, April 2, and existing users have until July to export any subscriptions that are still on the service. Google originally announced the shutdown in September and has been plastering shutdown notices all over the Google Podcasts site and app for a few days now.

Google Podcasts was Google’s third podcasting service, after Google Listen (2009–2012) and Google Play Music Podcasts (2016–2020). The shutdown will clear the deck for Google’s media consolidation under the YouTube brand with podcasting app No. 4: YouTube Podcasts.

Google Podcasts has always had an awkward life.  Despite an eight-year existence, it has only been a viable podcasting app for maybe half that time. The project grew out of the Google Search team’s desire to index podcast content. That started in 2016 when searching for a podcast would show a player embedded right in the Google Search results. This only worked on google.com and on the Android search app.

The Google Podcasts shutdown notice.

Google

Actually subscribing to a podcast didn’t come until two years later, in 2018, allowing users to finally do the bare minimum of opening the app and seeing the latest episodes of shows they’re subscribed to. Again, though, this was all in the Google Search app, which didn’t make sense to anyone, especially when Google already had a decent podcast ecosystem going in its primary music app, Google Play Music. A month later it launched a formal “Google Podcasts” app on the Play Store, helping the app make a little more sense, even though under the hood the “app” was just a link to the podcast interface in the Google Search app. This was also the first podcast player to integrate with another Google Search project, the Google Home smart speaker.

Two years later, in 2020, Google finally launched an iOS app. At this point, four years after launch, as a third-party observer, you could begin to think that “maybe Google is actually serious about Google Podcasts.” The very next year rumors of “YouTube Podcasts” started, and the writing was on the wall for the search team’s weird little podcast app.

Google Podcasts is one of the major examples of Google’s disorganization. Along with Google Play Music Podcasts, Google launched two competing and disconnected podcast services within the same week! The Google Search team never had a clear reason for building a podcast app, and no clear vision; it felt like it was going rogue inside the company. Along with a glacially slow development pace, Google Podcasts feels like it should have never existed to begin with.

Google Podcasts shuts down tomorrow, April 2 Read More »

Ncuti Gatwa’s Fifteenth Doctor rocks the fashion in new Doctor Who trailer

The Fifteenth Doctor is in —

The return of Russell T. Davies as showrunner has been a welcome one.

Ncuti Gatwa officially begins his tenure as the Fifteenth Doctor in May, when the new Doctor Who season premieres.

Heads up, Whovians! We’ve got a newly regenerated Fifteenth Doctor in Ncuti Gatwa and a new season of the long-running British sci-fi series Doctor Who on the way. Judging by the latest trailer, we’re in for another wild ride of time-traveling hijinks, punctuated by an irresistibly charismatic Gatwa sporting some very colorful outfits with confident aplomb.

(Spoilers for most recent seasons and specials below.)

Look, I loved Jodie Whittaker’s incarnation of the Doctor, but her tenure was hampered by the unavoidable fact that showrunner Chris Chibnall just didn’t give her a lot of great material to work with. Among other issues, there was an unfortunate tendency toward didacticism and preachiness in the writing at the expense of genuine emotional resonance. While there were a number of notable episodes, and Chibnall gamely trotted out all the fan-favorite monsters and tropes, nothing ever fully captured the imagination in quite the same way as the show has always done at its best. Whittaker deserved better.

But then the BBC announced the return of Russell T. Davies—who revived the series in 2005 with Christopher Eccleston as the Ninth Doctor—as showrunner, setting up another reset of this beloved series. When Gatwa’s casting was announced, everyone assumed Whittaker’s Thirteenth Doctor would regenerate accordingly at the end of “The Power of the Doctor.” Instead, the newly regenerated Fourteenth Doctor was played by none other than David Tennant, everyone’s favorite Tenth Doctor—a little older with a few tweaks to his trademark look.

It was great casting for the 60th anniversary specials, in which Tennant’s Fourteenth Doctor reunited with Donna Noble (Catherine Tate)—one of my favorite companions. Donna had her memories of the Doctor wiped by the Tenth Doctor to save her life since she had taken on some Time Lord knowledge that human beings just aren’t designed to carry. Donna now had a teenage daughter named Rose, and of course, a major crisis forced the Doctor to restore the erased knowledge to save London yet again. Donna should have died, but her Time Lord knowledge ended up being safely split between her and Rose instead.

The Doctor and Donna next encountered an abandoned spaceship filled with doppelgängers (Not-Things) in “Wild Blue Yonder.” In “The Giggle,” they faced off against the Toymaker (Neil Patrick Harris), and during the climactic battle, the Fourteenth Doctor was shot. Fans expecting the usual regeneration were in for a surprise. The Fourteenth Doctor “bigenerated” instead, resulting in both a Fourteenth Doctor and Gatwa’s Fifteenth Doctor, a separate physical entity.

  • Ncuti Gatwa is ready for his first full season as the Fifteenth Doctor.

    YouTube/BBC

  • His new companion is Ruby Sunday (Millie Gibson).

    YouTube/BBC

  • “Space babies!”

    YouTube/BBC

  • The Doctor and the dinosaurs.

    YouTube/BBC

  • Going full-on Bridgerton.

    YouTube/BBC

  • “We are going to rock through time…”

    YouTube/BBC

  • Sporting a snazzy tangerine-colored knit.

    YouTube/BBC

  • Looking very Mod Squad, Doctor!

    YouTube/BBC

  • Recreating a famous album cover because why not?

    YouTube/BBC

The two Doctors teamed up to defeat the Toymaker and then figured out how to duplicate the TARDIS by drawing on the power of the remnants of the villain’s reality-warping domain. And Gatwa’s Doctor embarked on a fresh adventure with the 2023 Christmas special “The Church on Ruby Road,” which also introduced us to his new companion, Ruby Sunday (Millie Gibson).

All of that brings us to season 14. All we really know about this new season is that it will have eight episodes, beginning with the Davies-penned “Space Babies” and “The Devil’s Chord.” Davies wrote six out of the eight episodes, in fact, closing out with “The Legend of Ruby Sunday” and the finale, “Empire of Death.”  The latest trailer doesn’t give us much more than some exciting visual teases of what’s in store, including the aforementioned space babies, dinosaurs, a mysterious spacecraft—and all those outfits.

The Fifteenth Doctor is apparently something of a clothes horse. Each incarnation of the Doctor has always had a trademark “look,” but costume designer Pam Downe decided to broaden the scope for Gatwa, incorporating design elements from previous Doctors all the way back to Jon Pertwee’s Third Doctor, whose style Gatwa particularly admired. That Regency-era burgundy velvet jacket is definitely a nod to the Third Doctor. There’s even a 1960s suit and Afro reminiscent of the Mod Squad or Austin Powers (with a sly allusion to The Beatles’ Abbey Road). Gatwa is clearly having a blast, which bodes well for the upcoming new season.

Season 14 of Doctor Who premieres on BBC and Disney+ on May 10, 2024, in the US and May 11 in the UK.

Listing image by YouTube/BBC

Ncuti Gatwa’s Fifteenth Doctor rocks the fashion in new Doctor Who trailer Read More »

Russia has a plan to “restore” its dominant position in the global launch market

Russian President Vladimir Putin (L) and Roscosmos Space Corporation Chief Yuri Borisov peruse an exhibit while visiting the Korolev Rocket and Space Corporation Energia, October 26, 2023, in Korolev, Russia.

Contributor/Getty Images

It has been a terrible decade for the Russian launch industry, which once led the world. The country’s long-running workhorse, the Proton rocket, ran into reliability issues and will soon be retired. Russia’s next-generation rocket, Angara, is fully expendable and still flying dummy payloads on test flights a decade after its debut. And the ever-reliable Soyuz vehicle lost access to lucrative Western markets after the Russian invasion of Ukraine.

Yet there has been a more fundamental, underlying disease pushing the once-vaunted Russian launch industry toward irrelevance. The country has largely relied on decades-old technology in a time of serious innovation within the launch industry. So what worked at the turn of the century to attract the launches of commercial satellites no longer does against the rising tide of competition from SpaceX, as well as other players in India and China.

Through the first quarter of this year, Russia has launched a total of five rockets, all variants of the Soyuz vehicle. SpaceX alone has launched 32 rockets. China, too, has launched nearly three times as many boosters as Russia.

However, Russia has a plan to reclaim the dominance it once held in the global launch industry. In a recent interview published on the Roscosmos website (a non-geo-blocked version is available here) the chief of the Russian space corporation, Yuri Borisov, outlined the strategy by which the country will do so.

The first step, Borisov said, is to develop a partially reusable replacement for the Soyuz rocket, called Amur-CNG. The country’s spaceflight enterprise is also working on “ultralight” boosters that will incorporate an element of reusability.

“I hope that by the 2028–2029 timeframe we will have a completely new fleet of space vehicles and will be able to restore our position in the global launch services market,” Borisov said in the interview, which was translated for Ars by Rob Mitchell.

A miracle, Amur

Russia has previously discussed plans to develop the Amur rocket (the CNG refers to the propellant, liquified methane). The multi-engine vehicle looks somewhat similar to SpaceX’s Falcon 9 rocket in that preliminary designs incorporated landing legs and grid fins to enable a powered first-stage landing.

The country’s space industry first unveiled its Amur plans back in 2020, when officials said they were targeting a low price of just $22 million for a launch on Amur, which would be capable of delivering 10.5 tons to low-Earth orbit. Essentially, then, it would offer about half the carrying capacity of a Falcon 9 rocket for one-third of the price.
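
As a back-of-the-envelope check on that comparison, the sketch below uses commonly cited Falcon 9 figures (roughly a $67 million list price and about 22.8 tons to LEO); those numbers are our assumptions for illustration, not figures from Roscosmos or this article.

```python
# Rough $/kg comparison. Falcon 9 numbers are approximate public figures;
# Amur numbers are the 2020 Roscosmos targets cited above.
rockets = {
    "Falcon 9": {"price_usd": 67_000_000, "leo_kg": 22_800},
    "Amur":     {"price_usd": 22_000_000, "leo_kg": 10_500},
}

for name, r in rockets.items():
    print(f"{name}: ~${r['price_usd'] / r['leo_kg']:,.0f} per kg to LEO")

# The "half the capacity for a third of the price" framing:
print(rockets["Amur"]["leo_kg"] / rockets["Falcon 9"]["leo_kg"])        # ~0.46
print(rockets["Amur"]["price_usd"] / rockets["Falcon 9"]["price_usd"])  # ~0.33
```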

At the time, Roscosmos officials were targeting a 2026 debut for Amur. Had they been able to deliver such a capability, it would undoubtedly be an attractively priced offering. Alas, the year 2026 appears to be off the table now. Through his comments, Borisov indicated that Amur will not be ready before 2028 or 2029.

Since there has been almost a year-for-year slippage in that date since Amur’s announcement in 2020, it seems likely that even this target late in the decade is unrealistic.

Russia has a plan to “restore” its dominant position in the global launch market Read More »

Notes on Dwarkesh Patel’s Podcast with Sholto Douglas and Trenton Bricken

Dwarkesh Patel continues to be on fire, and the podcast notes format seems like a success, so we are back once again.

This time the topic is how LLMs are trained, work and will work in the future. Timestamps are for YouTube. Where I inject my own opinions or takes, I do my best to make that explicit and clear.

This was highly technical compared to the average podcast I listen to, or that Dwarkesh does. This podcast definitely threatened to technically go over my head at times, and some details definitely did go over my head outright. I still learned a ton, and expect you will too if you pay attention.

This is an attempt to distill what I found valuable, and what questions I found most interesting. I did my best to make it intuitive to follow even if you are not technical, but in this case one can only go so far. Enjoy.

  • (1: 30) Capabilities only podcast, Trenton has ‘solved alignment.’ April fools!

  • (2: 15) Huge context length is underhyped, a huge deal. It occurs to me that the issue is about the trivial inconvenience of providing the context. Right now I mostly do not bother providing context on my queries. If that happened automatically, it would be a whole different ballgame.

  • (2: 50) Could the models be sample efficient if you can fit it all in the context window? Speculation is it might work out of the box.

  • (3: 45) Does this mean models are already in some sense superhuman, with this much context and memory? Well, yeah, of course. Computers have been superhuman at math and chess and so on for a while. Now LLMs have quickly gone from having worse short term working memory than humans to vastly superior short term working memory. Which will make a big difference. The pattern will continue.

  • (4: 30) In-context learning is similar to gradient descent. It gets problematic for adversarial attacks, but of course you can ignore that because as Trenton reiterates alignment is solved, and certainly it is solved for such mundane practical concerns. But it does seem like he’s saying if you do this then ‘you’re fine-tuning but in a way where you cannot control what is going on’?

  • (6: 00) Models need to learn how to learn from examples in order to take advantage of long context. So does that mean the task of intelligence requires long context? That this is what causes the intelligence, in some sense, they ask? I don’t think you can reverse it that way, but it is possible that this will orient work in directions that are more effective?

  • (7: 00) Dwarkesh asks about how long contexts link to agent reliability. Douglas says this is more about lack of nines of reliability, and GPT-4-level models won’t cut it there. And if you need to get multiple things right, the reliability numbers have to multiply together, which does not go well in bulk. If that is indeed the issue then it is not obvious to me the extent to which scaffolding and tricks (e.g. Devin, probably) render this fixable.

  • (8: 45) Performance on complex tasks follows a log scale. It gets it right one time in a thousand, then one in a hundred, then one in ten. So there is a clear window where the thing is in practice useless, but you know it soon won’t be. And we are in that window on many tasks. This goes double if you have complex multi-step tasks. If you have a three-step task and are getting each step right one time in a thousand, the full task is one in a billion, but you are not so far from being able to do the task in practice (see the short sanity-check sketch after these notes).

  • (9: 15) The model being presented here is predicting scary capabilities jumps in the future. LLMs can actually (unreliably) do all the subtasks, including identifying what the subtasks are, for a wide variety of complex tasks, but they fall over on subtasks too often and we do not know how to get the models to correct for that. But that is not so far from the whole thing coming together, and that would include finding scaffolding that lets the model identify failed steps and redo them until they work, if which tasks fail is sufficiently non-deterministic from the core difficulties.

  • (11: 30) Attention costs for context window size are quadratic, so how is Google getting the window so big? The suggestion is that the attention cost is still dwarfed by the MLP block, and that while generating tokens the cost is no longer n-squared; the marginal cost becomes linear (a rough FLOP comparison appears after these notes).

  • (13: 30) Are we shifting where the models learn, with more and more in the forward pass? Douglas says essentially no, the context length allows useful working memory, but is not ‘the key thing towards actual reasoning.’

  • (15: 10) Which scaling up counts? Tokens, compute, model size? Can you loop through the model or brain or language? Yes, but notice that humans in practice only do 5-7 steps in complex sentences because of working memory limits.

  • (17: 15) Where is the model reasoning? No crisp answer. The residual stream that the model carries forward packs in a lot of different vectors that encode all the info. Attention is about what to pick up and put into what is effectively RAM.

  • (20: 40) Does the brain work via this residual stream? Yes. Humans implement a bunch of efficient algorithms and really scale up our cerebral cortex investment. A key thing we do is very similar to the attention algorithm.

  • (24: 00) How does the brain reason? Trenton thinks mostly intelligence is pattern matching. ‘Association is all you need.’

  • (25: 45) Paper from Demis in 2008 noted that memory is reconstructive, so it is linked to creativity and also is horribly unreliable.

  • (26: 45) What makes Sherlock Holmes so good? Under this theory: A really long context length and working memory, and better high-level association. Also a good algorithm for his queries and how to build representations. Also proposed: A Sherlock Holmes evaluation. Give a mystery novel or story, ask for probability distribution over ‘The suspect is X.’

  • (28: 30) A vector in the residual stream is the composite of all the tokens to which I have previously paid attention, even by layer two.

  • (30: 30) Could we do an unsupervised benchmark? It has been explored, such as with constitutional AI. Again, alignment-free podcast here.

  • (31: 45) If intelligence is all associations, should we be less worried about superintelligence, because there’s not this sense in which it is Sherlock++ and it can’t solve physics from a world frame? The response is, they would need to learn the associations, but also the tech makes that quick to do, and silicon can be about as generally intelligent as humans and can recursively improve anyway.

  • My response here would strongly be that if this is true, we should be more worried rather than less worried, because it means there is no secret or trick, and scale really would be all you would need, if you scale enough distinct aspects, and we should expect that we would do that.

  • (32: 45) Dwarkesh asks if this means disagreeing with the premise of them not being that much more powerful. To which I would strongly say yes. If it turns out that the power comes from associations, then that still leads to unbounded power, so what if it does not sound impressive? What matters is if it works.

  • (33: 30) If we got thousands of you do we get an intelligence explosion? We do dramatically speed up research but compute is a binding constraint. Trenton thinks we would need longer contexts, more reliability and lower cost to get an intelligence explosion, but getting there within a few years seems plausible.

  • (37: 30) Trenton expects this to speed up a lot of the engineering soon, accelerating research and compounding, but not (yet) a true intelligence explosion.

  • (39: 00) What about the costs of training orders-of-magnitude bigger models? Does this break recursive intelligence explosion? It’s a braking mechanism. We should be trying hard to estimate how much of this is automatable. I agree that the retraining costs and required time are a braking mechanism, but also efficiency gains could quickly reduce those costs, and one could choose to work around the need to do that via other methods. One should not be confident here.

  • (41: 00) Understanding what goes wrong is key to making AI progress. There are lots of ideas but figuring out which ideas are worth exploring is vital. This includes anticipating which trend lines will hold when scaled up and which won’t. There’s an invisible graveyard of trend lines that looked promising and then failed to hold.

  • (44: 20) A lot of good research works backwards from solving actual problems. Trying to understand what is going on, figuring out how to run experiments. Performance is lots of low-level hard engineering work. Ruthless prioritization is key to doing high quality research, the most effective people attack the problem, do really fast experiments and do not get attached to solutions. Everything is empirical.

  • (48: 00) “Even though we wouldn’t want to admit it, the whole community is kind of doing greedy evolutionary optimization over the landscape of possible AI architectures and everything else. It’s no better than evolution. And that’s not even a slight against evolution.” Does not fill one with confidence on safety.

  • (49: 30) Compute and taste on what to do are the current limiting factors for capabilities. Scaling to properly use more humans is hard. For interpretability they need more good engineers.

  • (51: 00) “I think the Gemini program would probably be maybe five times faster with 10 times more compute or something like that. I think more compute would just directly convert into progress.”

  • (51: 30) If compute is such a bottleneck is it being insufficiently allocated to such research and smaller training tasks? You also need the big training runs to avoid getting off track.

  • (53: 00) What does it look like for AI to speed up AI research? Could be algorithmic progress from AI. That takes more compute, but seems quite reasonable this could act as a force multiplier for humans. Also could be synthetic data.

  • (55: 30) Reasoning traces are missing from data sets, and seem important.

  • (56: 15) Is progress going to be about making really amazing AI maps of the training data? Douglas says clearly a very important part. Doing next token on a sufficiently good data set requires so many other things.

  • (58: 30) Language as synthetic data by humans for humans? With verifier via real world.

  • (59: 30) Yeah, whole development process is largely evolutionary, more people means more recombination, more shots on target. That does to me seem in conflict with the best people being the ones who can discriminate over potential tasks and ideas. But also they point out serendipity is a big deal and it scales. They expect AGI to be the sum of a bunch of marginal things.

  • (1: 01: 30) If we don’t get AGI by GPT-7-levels-of-OOMs are we stuck? Sholto basically buys this, that orders of magnitude have at core diminishing returns, although they unlock reliability, reasoning progress is sublinear in OOMs. Dwarkesh notes this is highly bearish, which seems right.

  • (1: 03: 15) Sholto points out that even with smaller progress, another 3.5→4 jump in GPT-levels is still pretty huge. We should expect smart plus a lot of reliability. This is not to undersell what is coming, rather the jumps so far are huge, and even smaller jumps from here unlock lots of value. I agree.

  • (1: 07: 30) Bigger models allow you to minimize superposition (overloading more features onto fewer parameters), making results less noisy, whereas smaller ones are under-parameterized given their goal of representing the entire internet. Speculation that superposition is why interpretability is so hard. I wonder if that means it could get easier with more parameters? Could we use ‘too many’ parameters on purpose in order to help with this?

  • (1: 11: 00) What’s happening with distilled models? Dwarkesh suggests GPT-4-Turbo is distilled, Sholto suggests it could instead be new architecture.

  • (1: 12: 30) Distillation is powerful because the full probability distribution gives you much richer data to work with.

  • (1: 13: 30) Adaptive compute means spend more cycles on harder questions. How do you do that via chain of thought? You get to pass a KV-value during forward passes, not just the token, which helps, so the KV-cache is (headcanon-level, not definitively) pushing forward the CoT without having to link to the output tokens. This is ‘secret communication’ (from the user’s perspective) of the model to its forward inferences, and we don’t know how much of that is happening. Not always the thing going on, but there is high weirdness.

  • (1: 19: 15) Anthropic sleeper agents paper, notice the CoT reasoning does seem to impact results and the reasoning it does is pretty creepy. But in another paper, the model will figure out the multiple choice answer is always ‘A’ but the reasoning in its CoT will be something else that sounds plausible. Dwarkesh notes humans also come up with crazy explanations for what they are doing, such as when they have split brains. “It’s just that some people will hail chain-of-thought reasoning as a great way to solve AI safety, but actually we don’t know whether we can trust it.”

  • (1: 23: 30) Agents, how will they work once they work well enough? Short term expectation from Sholto is agent talking together. Sufficiently long context windows could make fine-tuning unnecessary or irrelevant.

  • (1: 26: 00) With sufficient context could you train everything on a global goal like ‘did the firm make money?’ In the limit, yes, that is ‘the dream of reinforcement learning.’ Can you feel the instrumental convergence? At first, though, they say, in practice, no, it won’t work.

  • (1: 27: 45) Suggestion that languages evolve to be good at encoding things to teach children important things, such as ‘don’t die.’

  • (1: 29: 30) In other modalities figuring out exactly what you are predicting is key to success. For language you predict the next token, it is easy mode in that sense.

  • (1: 31: 30) “there are interesting interpretability pieces where if we fine-tune on math problems, the model just gets better at entity recognition.” It makes the model better at attending to positions of things and such.

  • (1: 32: 30) Getting better at code makes the model a better thinker. Code is reasoning, you can see how it would transfer. I certainly see this happening in humans.

  • (1: 35: 00) Section on their careers. Sholto’s story is a lot of standard things you hear from high-agency, high-energy high-achieving people. They went ahead and did things, and also pivot and go in different directions and follow curiosity, read all the papers. Strong ideas, loosely held, carefully selected, vigorously pursued. Dwarkesh notes one of the most important things is to go do the things, and managers are desperate for people who will make sure things get done. If you get bottlenecked because you need lawyers, well, why didn’t you go get the lawyers? Lots of impact is convincing people to work with you to do a thing.

  • (1: 43: 30) Sholto is working on AI largely because he thinks it can lead to a wonderful future, and was sucked into scaling by Gwern’s scaling hypothesis post. That is indeed the right reason, if you are also taking into account the downside risks including existential risks, and still think this is a good idea. It almost certainly is not a neutral idea, it is either a very good idea or extremely ill-advised.

  • (1: 43: 35) Sholto says McKinsey taught him how to actually do work, and the value of not taking no for an answer, whereas often things don’t happen because no individual cares enough to make it happen. The consultant can be that person, and you can be that person otherwise without being a consultant. He got hired largely by being seen on the internet asking questions about how things work, causing Google to reach out. It turns out at Google you can ask the algorithm and systems experts and they will gladly teach you everything they know.

  • (1: 51: 30) Being in the office all the time, collaborating with others including pair programming with Sergey Brin sometimes, knowing the people who make decisions, matters a lot.

  • (1: 54: 00) Trenton’s story begins, his was more standard and direct.

  • (1: 55: 30) Dwarkesh notes that these stories are framed as highly contingent, that people tend to think their own stories are contingent and those of others are not. Sholto mentions the idea of shots on goal, putting yourself in position to get lucky. I buy this. There are a bunch of times I got lucky and something important happened. If you take those times away, or add different ones, my life could look very different. Also a lot of what was happening was, effectively, engineering the situation to allow those events to happen, without having a particular detailed event in mind. Same with these two.

  • (1: 57: 00) Google is continuing the experiment to find high-agency people and bootstrap them. Seems highly promising. Also Chris Olah was hired off a cold email. You need to send and look out for unusual signals. I agree with Dwarkesh that is very good for the world that a lot of this hiring is not done legibly, and instead is people looking out for agency and contributions generally. If you write a great paper or otherwise show you have the goods, the AI labs will find you.

  • (2: 01: 45) You still need to do the interview process, make sure people can code or what not and you are properly debiased, but that process should be designed not to get in the way otherwise.

  • (2: 03: 00) Emphasis on need to care a ton, and go full blast towards what you want, doing everything that would help.

  • (2: 04: 30) When you get your job then is that the time to relax or to put the pedal to the metal? There’s pros and cons. Not everyone can go all out, many people want to focus on their families or otherwise relax. Others need to be out there working every hour of the week, and the returns are highly superlinear. And yes, this seems very right to me, returns to going fully in on something have been much higher than returns to ordinary efforts. Jane Street would have been great for me if I could have gone fully in, but I was not in a position to do that.

  • (2: 06: 00) Dwarkesh: “I just try to come up with really smart questions to send to them. In that entire process I’ve always thought, if I just cold email them, it’s like a 2% chance they say yes. If I include this list, there’s a 10% chance. Because otherwise, you go through their inbox and every 34 seconds, there’s an interview for some podcast or interview. Every single time I’ve done this they’ve said yes.” And yep, story checks out.

  • (2: 09: 30) A discussion of what is a feature. It is whatever you call a feature, or it is anything you can turn on and off, or any of the things. Is that a useful definition? Not if the features were not predictive, or if the features did not do anything. The point is to compose the features into something higher level.

  • (2: 17: 00) Trenton thinks you can detect features that correspond to deceptive behavior, or malicious behavior, when evaluating a request. I’ve discussed my concerns on this before. It is only a feature if you can turn it on and off, perhaps?

  • (2: 20: 00) There are a bunch of circuits that have various jobs they try to do, sometimes as simple as ‘copy the last token,’ and then there are other heads that suppress that behavior. Reasons to do X, versus reasons not to do X.

  • (2: 20: 45) Deception circuit gets labeled as whatever fires in examples where you find deception, or similar? Well, sure, basically.

  • (2: 22: 00) RLHF induces theory of mind.

  • (2: 22: 05) What do we do if the model is superhuman, will our interpretability strategies still work, would we understand what was going on? Trenton says that the models are deterministic (except when finally sampling) so we have a lot to work with, and we can do automated interpretability. And if it is all associations, then in theory that means what in my words would be ‘no secret’ so you can break down whatever it is doing into parts that we can understand and thus evaluate. A claim that evaluation in this sense is easier than generation, basically.

  • (2: 24: 00) Can we find things without knowing in advance what they are? It should be possible to identify a feature and how it relates to other features even if you do not know what the feature is in some sense. Or you can train in the new thing and see what activates, or use other strategies.

  • (2: 26: 00) Is red teaming Gemma helping jailbreak Gemini? How universal are features across models? To some extent.

  • (2: 27: 00) Curriculum learning, which is trying to teach the model things in an intentional order to facilitate learning, is interesting and mentioned in the Gemini paper.

  • (2: 29: 45) Very high confidence that this general model of what is going on with superposition is right, based on success of recent work.

  • (2: 31: 00) A fascinating question: Should humans learn a real representation of the world, or would a distorted one be more useful in some cases? Should venomous animals flash neon pink, a kind of heads-up display baked into your eyes? The answer is that you have too many different use cases, distortions do more harm than good, you want to use other ways to notice key things, and so that is what we do. So Trenton is optimistic the LLMs are doing this too.

  • (2: 32: 00) “Another dinner party question. Should we be less worried about misalignment? Maybe that’s not even the right term for what I’m referring to, but alienness and Shoggoth-ness? Given feature universality there are certain ways of thinking and ways of understanding the world that are instrumentally useful to different kinds of intelligences. So should we just be less worried about bizarro paperclip maximizers as a result?” I quote this question because I do not understand it. If we have feature universality, how is that not saying that the features are compatible with any set of preferences, over next tokens or otherwise? So why is this optimistic? The response is that components of LLMs are often very Shoggoth-like.

  • (2: 34: 00) You can talk to any of the current models in Base64 and it works great.

  • (2: 34: 10) Dwarkesh asks, doesn’t the fact that you needed a Base64 expert to happen to be there to recognize what the Base64 feature was mean that interpretability on smarter models is going to be really hard, if no human can grok it? Anomaly detection is suggested, you look for something different. Any new feature is a red flag. Also you can ask the model for help sometimes, or automate the process. All of this strikes me as exactly how you train a model how not to be interpretable.

  • (2: 36: 45) Feature splitting is where if you only have so much space in the model for birds it will learn ‘birds’ and call it a day, whereas if it has more room it will learn features for different specific birds.

  • (2: 38: 30) We have this mess of neurons and connections. The dream is bootstrapping to making sense of all that. Not claiming we have made any progress here.

  • (2: 39: 45) What parts of the process for GPT-7 will be expensive? Training the sparse encoder and doing projection into a wider space of features, or labeling those features? Trenton says it depends on how much data goes in and how dimensional is your space, which I think means how overloaded and full of superpositions you are or are measuring.

  • (2: 42: 00) Dwarkesh asks: Why should the features be things we can understand? In Mixtral of Experts they noticed their experts were not distinctive in ways they could understand. They are excited to study this question more but so far don’t know much. It is empirical, and they will know when they look and find out. They claim there is usually clear breakdown of expert types, but that you can also get distinctions that break up what you would naively expect.

  • (2: 45: 00) Try to disentangle all these neurons, audience. Sholto’s challenge to you.

  • (2: 48: 00) Bruno Olshausen theorizes that all the brain regions you do not hear about are doing a ton of computation in superposition. And sure, why not? The human brain sure seems under-parameterized.

  • (2: 49: 25) Superposition is a combinatorial code, not an artifact of one neuron.

  • (2: 51: 20) GPT-7 has been trained. Your interpretability research succeeded. What will you do next? Try to get it to do the work, of course. But no, before that, what do you need to do to be convinced it is safe to deploy? ‘I mean we have our RSP.’ I mean, no you don’t, not yet, not for GPT-7-level models, it says ‘fill this in later’ over there. So Trenton rightfully says we would need a lot more interpretability progress. Right now he would not give the green light, he’d be crying and hoping the tears interfered with GPUs.

  • (2: 53: 00) He says ‘Ideally we can find some compelling deception circuit which lights up when the model knows that it’s not telling the full truth to you.’ Dwarkesh asks about linear probes, Trenton says that does not look good.

  • I would ask, what makes you think that you have found the only such circuit? If the model had indeed found a way around your interpretability research, would you not expect it to give you a deception circuit to find, in addition to the one you are not supposed to find, because you are optimizing for exactly that which will fool you? Wouldn’t you expect the unsupervised learning to give you what you want to find either way? Fundamentally, this seems like saying ‘oh sure he lies all the time, but when he lies he never looks the person in the eye, so there is nothing to worry about, there is no way he would ever lie while looking you in the eye.’ And you do this with a thing much smarter than you, that knows you will notice this, and expect it to go well. For you, that is.

  • Also I would reiterate all my ‘not everything you should be worried about requires the model to be deceptive in way that is distinct from its normal behavior, even in the worlds where this distinction is maximally real,’ and also ‘deception is not a distinct thing from what is imbued into almost every communication.’ And that’s without things smarter than us. None of this seems to me to have any hope, on a very fundamental level.

  • (2: 56: 15) Yet Trenton continues to be optimistic such techniques will understand GPT-7. A third of the team is scaling up dictionary learning, a second group is identifying circuits, a third is working to identify attention heads.

  • (3: 01: 00) A good test would be, we found feature X, we ablated it, and now we can’t elicit X to happen. That does sound a little better?

  • (3: 02: 00) What are the unknown unknowns for superhuman models? The answer is ‘we’ll see,’ our hope is automated interpretability. And I mean, yes, ‘we’ll see’ is in some sense the right way to discuss unknown unknowns, there are far worse answers, but my despair is palpable.

  • (3: 03: 00) Should we worry if alignment succeeds ‘too hard’ and people get fine-grained control over AIs? “That is the whole value lock-in argument in my mind. It’s definitely one of the strongest contributing factors for why I am working on capabilities at the moment. I think the current player set is actually extremely well-intentioned.”

  • (3: 07: 00) “If it works well, it’s probably not being published.” Finally.
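
Two of the quantitative claims in the notes above are easy to sanity-check with a few lines of Python. First, the compounding-reliability point from the 7:00-9:15 stretch: if each subtask succeeds independently with probability p, an n-step task succeeds with probability p^n, so small per-step changes compound dramatically. A minimal sketch:

```python
# End-to-end success of an n-step task when each step independently
# succeeds with probability p.
for p in (0.001, 0.9, 0.99, 0.999):
    for n in (1, 3, 10, 30):
        print(f"p={p:<6} n={n:<3} end-to-end success = {p ** n:.3g}")

# p=0.001 over 3 steps gives ~1e-9 (the "one in a billion" example);
# p=0.99 over 30 steps still fails roughly a quarter of the time.
```

Second, the attention-versus-MLP cost point at 11:30. Using textbook FLOP approximations for a plain transformer decoder layer (these formulas are a standard simplification, not numbers from the podcast): the attention projections cost about 8nd², the quadratic score-and-weighting step about 4n²d, and a 4d-wide MLP about 16nd², so the quadratic term only dominates once the context length n exceeds roughly 6d.

```python
def layer_flops(n: int, d: int) -> dict:
    """Rough per-layer FLOP counts for a plain transformer decoder block."""
    return {
        "attn_projections": 8 * n * d**2,   # Q, K, V and output projections
        "attn_scores":      4 * n**2 * d,   # QK^T plus attention-weighted values
        "mlp":              16 * n * d**2,  # up/down projections, 4d hidden width
    }

d = 8_192  # assumed model width, for illustration only
for n in (8_192, 65_536, 1_000_000):
    f = layer_flops(n, d)
    share = f["attn_scores"] / sum(f.values())
    print(f"n={n:>9,}: quadratic attention term is {share:.0%} of layer FLOPs")
```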

Notes on Dwarkesh Patel’s Podcast with Sholto Douglas and Trenton Bricken Read More »

    Cows in Texas and Kansas test positive for highly pathogenic bird flu

    viral spread —

    The risk to the public is low, and the milk supply is safe.

    Wild migratory birds likely spread a deadly strain of bird flu to dairy cows in Texas and Kansas, state and federal officials announced this week.

    It is believed to be the first time the virus, a highly pathogenic avian influenza (HPAI), has been found in cows in the US. Last week, officials in Minnesota confirmed finding an HPAI case in a young goat, marking the first time the virus has been found in a domestic ruminant in the US.

    According to the Associated Press, officials with the Texas Animal Health Commission confirmed the flu virus is the Type A H5N1 strain, which has been ravaging bird populations around the globe for several years. The explosive, ongoing spread of the virus has led to many spillover events into mammals, making epidemiologists anxious that the virus could adapt to spread widely in humans.

    For now, the risk to the public is low. According to a release from the US Department of Agriculture (USDA), genetic testing by the National Veterinary Services Laboratories indicated that the H5N1 strain that spread to the cows doesn’t appear to contain any mutations that would make it more transmissible to humans. Though the flu strain was found in some milk samples from the infected cows, the USDA emphasized that all the milk from affected animals is being diverted and destroyed. Dairy farms are required to send only milk from healthy animals to be processed for human consumption. Still, even if some flu-contaminated milk was processed for human consumption, the standard pasteurization process inactivates viruses, including influenza, as well as bacteria.

    So far, officials believe the virus is primarily affecting older cows. The virus was detected in milk from sick cows on two farms in Kansas and one in Texas, as well as in a throat swab from a cow on a second Texas farm. The USDA noted that farmers have found dead birds on their properties, indicating exposure to infected birds. Sick cows have also been reported in New Mexico. Symptoms of the bird flu in cows appear to include decreased milk production and low appetite.

    But so far, the USDA believes the spread of H5N1 will not significantly affect milk production or the herds. Milk loss has been limited; only about 10 percent of affected herds have shown signs of the infection, and there has been “little to no associated mortality.” The USDA suggested it will remain vigilant, calling the infections a “rapidly evolving situation.”

    While federal and state officials continue to track the virus, Texas officials aim to assure consumers. “There is no threat to the public and there will be no supply shortages,” Texas Agriculture Commissioner Sid Miller said in a statement. “No contaminated milk is known to have entered the food chain; it has all been dumped. In the rare event that some affected milk enters the food chain, the pasteurization process will kill the virus.”

    Cows in Texas and Kansas test positive for highly pathogenic bird flu Read More »

    Taylor Swift fans dancing and jumping created last year’s “Swift quakes”

    Good vibrations —

    “Shake It Off” produced tremors equivalent to a local magnitude earthquake of 0.851.

    Taylor Swift on the Eras Tour in 2023

    Taylor Swift during her Eras Tour. Crowd motions likely caused mini “Swift quakes” recorded by seismic monitoring stations.

    When mega pop star Taylor Swift gave a series of concerts last August at the SoFi Stadium in Los Angeles, regional seismic network stations recorded unique harmonic vibrations known as “concert tremor.” A similar “Swift quake” had occurred the month before in Seattle, prompting scientists from the California Institute of Technology and UCLA to take a closer look at seismic data collected during Swift’s LA concert.

    The researchers concluded that the vibrations were largely generated by crowd motion as “Swifties” jumped and danced enthusiastically to the music and described their findings in a new paper published in the journal Seismological Research Letters. The authors contend that gaining a better understanding of atypical seismic signals like those generated by the Swift concert could improve the analysis of seismic signals in the future, as well as bolster emerging applications like using signals from train noise for seismic interferometry.

    Concert tremor consists of low-frequency signals of extended duration with harmonic frequency peaks between 1 and 10 Hz, similar to the signals generated by volcanoes or trains. There has been considerable debate about the source of these low-frequency concert tremor signals: Are they produced by the synchronized movement of the crowd, or by the sound systems or instruments coupled to the stage? Several prior studies of stadium concerts have argued for the former hypothesis, while a 2015 study found that a chanting crowd at a football game produced similar harmonic seismic tremors. However, a 2008 study concluded that such signals generated during an outdoor electronic dance music festival came from the sound system vibrating to the musical beat.

    The Caltech/UCLA team didn’t just rely on the data from the regional network stations. The scientists placed additional motion sensors throughout the stadium prior to the concert, enabling them to characterize all the seismic signals produced during the concert. The signals had such unique characteristics that it was relatively easy to identify them with a spectrogram. In fact, the authors were able to identify 43 of the 45 songs Swift performed based on the distinctive signal of each song.

    They also calculated how much radiated energy was produced by each song. “Shake It Off” produced the most radiated energy, equivalent to a local magnitude earthquake of 0.851. “Keep in mind this energy was released over a few minutes compared to a second for an earthquake of that size,” said co-author Gabrielle Tepp of Caltech.
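
    For a sense of scale, the standard Gutenberg-Richter energy-magnitude relation (a generic seismology approximation, not a figure taken from the paper) puts the radiated energy of a magnitude 0.851 event at roughly a megajoule:

    ```python
    # Gutenberg-Richter energy-magnitude relation: log10(E_joules) = 1.5 * M + 4.8.
    # A generic approximation; the paper's own energy estimates may differ.
    M = 0.851
    energy_joules = 10 ** (1.5 * M + 4.8)
    print(f"~{energy_joules / 1e6:.1f} MJ radiated")  # about 1.2 MJ
    ```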

    Tepp is a volcanologist and musician in her own right. That combination came in handy when it was time to conduct a lab-based experiment to test the team’s source hypothesis using a portable public address speaker system. They played Swift’s “Love Story” and Tepp gamely danced and jumped with the beat during the last chorus while sensors recorded the seismic vibrations. “Even though I was not great at staying in the same place—I ended up jumping around in a small circle, like at a concert—I was surprised at how clear the signal came out,” said Tepp. They also tested a steady beat as Tepp played her bass guitar in order to isolate the signal from a single instrument.

    The resulting fundamental harmonic during the jumping was consistent with the song’s beat rate. However, the bass beats didn’t produce a harmonic signal, which was surprising since those beats were better synchronized with the actual musical beats than Tepp’s jumping motions. This might be due to the rounder shape of the bass beat signals compared to sharper spiking signals in response to the jumping.

    Map showing the concert venue and nearby seismic stations (circles) that recorded signals from the Swift concerts (blue).

    Gabrielle Tepp et al., 2024

    The authors noted that their experiment did not involve a stage or stadium-grade sound system, “so we cannot completely rule out loudspeakers as a vibrational energy source,” they wrote. Nonetheless, “Overall the evidence suggests that crowd movement is the primary source of the low-frequency signals, with the speaker system or instruments potentially contributing via stage or building vibrations.” The fact that the same kind of low-frequency seismic signals were not detected during pre-concert sound checks seems to support that conclusion, although there were higher frequency signals during sound checks.

    The team also studied the structural response of the stadium and conducted a similar analysis of seismic readings from three other concerts at SoFi Stadium that summer: country star Morgan Wallen, Beyoncé, and Metallica. One monitoring station also picked up clear signals from the three opening acts: Pantera, DJ Khaled, and Five Finger Death Punch, respectively. The results were broadly similar to the seismic data gathered from the Taylor Swift concerts, although none of the signals matched the strongest of those detected during the Swift shows.

    The researchers were surprised to find that the seismic signals from the Metallica concert were the weakest among all the concerts and markedly different from the others, “slanted and kind of weird looking,” per Tepp. They found several comments in music forums from fans complaining about poor sound quality at the Metallica concert. “If fans had a hard time discerning the song or beat, it may explain the more variable signals because it would have influenced their movements,” the authors wrote.

    It’s also possible that heavy metal live performances are less tightly choreographed than Beyoncé or Swift performances, or that heavy metal fans don’t move with the music in quite the same way. “Metal fans like to headbang a lot, so they’re not necessarily bouncing,” said Tepp. “It might just be that the ways in which they move don’t create as strong of a signal.”

    Seismological Research Letters, 2024. DOI: 10.1785/0220230385  (About DOIs).

    Taylor Swift fans dancing and jumping created last year’s “Swift quakes” Read More »

    scotus-mifepristone-case:-justices-focus-on-anti-abortion-groups’-legal-standing

    SCOTUS mifepristone case: Justices focus on anti-abortion groups’ legal standing

    Demonstrators participate in an abortion-rights rally outside the Supreme Court as the justices hear oral arguments in US Food and Drug Administration v. Alliance for Hippocratic Medicine on March 26, 2024, in Washington, DC.

    The US Supreme Court on Tuesday heard arguments in a case seeking to limit access to the abortion and miscarriage drug mifepristone, with a majority of justices expressing skepticism that the anti-abortion groups that brought the case have the legal standing to do so.

    The case threatens to dramatically alter access to a drug that has been safely used for decades and, according to the Guttmacher Institute, was used in 63 percent of abortions documented in the health care system in 2023. But, it also has sweeping implications for the Food and Drug Administration’s authority over drugs, marking the first time that courts have second-guessed the agency’s expert scientific analysis and moved to restrict access to an FDA-approved drug.

    As such, the case has rattled health experts, reproductive health care advocates, the FDA, and the pharmaceutical industry alike. But, based on the line of questioning in today’s oral arguments, they have reason to breathe a sigh of relief.

    Standing

    The case was initially filed in 2022 by a group of anti-abortion organizations led by the Alliance for Hippocratic Medicine. They collectively claimed that the FDA’s approval of mifepristone in 2000 was unlawful, as were FDA actions in 2016 and 2021 that eased access to the drug, allowing for it to be prescribed via telemedicine and dispensed through the mail. The anti-abortion groups justified bringing the lawsuit by claiming that the doctors in their ranks are harmed by the FDA’s actions because they are forced to treat girls and women seeking emergency medical care after taking mifepristone and experiencing complications.

    The FDA and numerous medical organizations have emphatically noted that mifepristone is extremely safe and the complications the lawsuit references are exceedingly rare. Serious side effects occur in less than 1 percent of patients, and major adverse events, including infection, blood loss, or hospitalization, occur in less than 0.3 percent, according to the American College of Obstetricians and Gynecologists. Deaths are almost non-existent.

    Still, a conservative federal judge in Texas sided with the anti-abortion groups last year, revoking the FDA’s 2000 approval. A conservative panel of judges for the Court of Appeals for the 5th Circuit in New Orleans then partially overturned that decision, allowing the 2000 approval to stand but still finding the FDA’s 2016 and 2021 actions unlawful. The ruling was stayed until the Supreme Court weighed in.

    Today, many of the Supreme Court justices went back to the very beginning: the claim that the plaintiff doctors have been or will imminently be harmed by the FDA’s actions. At the outset of the arguments, Solicitor General Elizabeth Prelogar argued that the plaintiffs had not been harmed and that, even if they were, they already had federal protections and recourse. Any doctor who conscientiously objects to caring for a patient who has had an abortion already has federal protections that prevent them from being forced to provide that care, Prelogar argued. As such, hospitals have legal obligations and have set up contingency and staffing plans to prevent violating those doctors’ federal conscientious objection protections.

    SCOTUS mifepristone case: Justices focus on anti-abortion groups’ legal standing Read More »

    thousands-of-phones-and-routers-swept-into-proxy-service,-unbeknownst-to-users

    Thousands of phones and routers swept into proxy service, unbeknownst to users

    Anonymizers on the cheap —

    Two new reports show criminals may be using your device to cover their online tracks.

    Thousands of phones and routers swept into proxy service, unbeknownst to users

    Getty Images

    Crooks are working overtime to anonymize their illicit online activities using thousands of devices of unsuspecting users, as evidenced by two unrelated reports published Tuesday.

    The first, from security firm Lumen Labs, reports that roughly 40,000 home and office routers have been drafted into a criminal enterprise that anonymizes illicit Internet activities, with another 1,000 new devices being added each day. The malware responsible is a variant of TheMoon, a malicious code family dating back to at least 2014. In its earliest days, TheMoon almost exclusively infected Linksys E1000 series routers. Over the years, it branched out to target Asus WRT routers, Vivotek network cameras, and multiple D-Link models.

    In the years following its debut, TheMoon’s self-propagating behavior and growing ability to compromise a broad base of architectures enabled a growth curve that captured attention in security circles. More recently, the Internet-of-Things botnet’s visibility trailed off, leading many to assume it was inert. To the surprise of researchers in Lumen’s Black Lotus Labs, during a single 72-hour stretch earlier this month, TheMoon added 6,000 Asus routers to its ranks, an indication that the botnet is as strong as it’s ever been.

    More stunning than the discovery of more than 40,000 infected small office and home office routers located in 88 countries is the revelation that TheMoon is enrolling the vast majority of the infected devices into Faceless, a service sold on online crime forums for anonymizing illicit activities. The proxy service gained widespread attention last year following a profile by KrebsOnSecurity.

    “This global network of compromised SOHO routers gives actors the ability to bypass some standard network-based detection tools—especially those based on geolocation, autonomous system-based blocking, or those that focus on TOR blocking,” Black Lotus researchers wrote Tuesday. They added that “80 percent of Faceless bots are located in the United States, implying that accounts and organizations within the US are primary targets. We suspect the bulk of the criminal activity is likely password spraying and/or data exfiltration, especially toward the financial sector.”

    The researchers went on to say that more traditional ways to anonymize illicit online behavior may have fallen out of favor with some criminals. VPNs, for instance, may log user activity despite some service providers’ claims to the contrary. The researchers say that the potential for tampering with the Tor anonymizing browser may also have scared away some users.

    The second post came from Satori Intelligence, the research arm of security firm HUMAN. It reported finding 28 apps available in Google Play that, unbeknownst to users, enrolled their devices into a residential proxy network of 190,000 nodes at its peak for anonymizing and obfuscating the Internet traffic of others.

    ProxyLib, the name Satori gave to the network, has its roots in Oko VPN, an app that was removed from Play last year after it was revealed to be using infected devices for ad fraud. The 28 apps Satori discovered all copied the Oko VPN code, which made them nodes in the residential proxy service Asock.

    The researchers went on to identify a second generation of ProxyLib apps developed through lumiapps[.]io, a software development kit that deploys exactly the same functionality and uses the same server infrastructure as Oko VPN. The LumiApps SDK allows developers to integrate custom code into a library to automate standard processes, and to do so without creating a user account or recompiling code. Instead, they can upload their custom code and then download a new version.

    “Satori has observed individuals using the LumiApps toolkit in the wild,” researchers wrote. “Most of the applications we identified between May and October 2023 appear to be modified versions of known legitimate applications, further indicating that users do not necessarily need to have access to the applications’ source code in order to modify them using LumiApps. These apps are largely named as ‘mods’ or indicated as patched versions and shared outside of the Google Play Store.”

    The researchers don’t know if the 190,000 nodes comprising Asock at its peak were made up exclusively of infected Android devices or if they included other types of devices compromised through other means. Either way, the number indicates the popularity of anonymous proxies.

    People who want to prevent their devices from being drafted into such networks should take a few precautions. The first is to resist the temptation to keep using devices once they’re no longer supported by the manufacturer. Most of the devices swept into TheMoon, for instance, have reached end-of-life status, meaning they no longer receive security updates. It’s also important to install security updates in a timely manner and to disable UPnP unless there’s a good reason to keep it on, in which case it should be allowed only for the ports that need it (a quick way to check whether devices on your network answer UPnP discovery is sketched below). Users of Android devices should install apps sparingly and then only after researching the reputation of both the app and the app maker.
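
    For readers who want to see whether anything on their home network is answering UPnP discovery at all, the minimal sketch below sends a standard SSDP M-SEARCH probe to the UPnP multicast address and prints whichever devices respond. It only checks discovery on the local network; it says nothing about whether UPnP is reachable from the Internet.

```python
# A minimal sketch: send a standard SSDP M-SEARCH probe to the UPnP multicast
# address and print any devices on the local network that answer discovery.
# This only shows who responds on the LAN; it does not test WAN exposure.
import socket

MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: ssdp:all",
    "", "",
])

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.settimeout(3)
sock.sendto(MSEARCH.encode(), ("239.255.255.250", 1900))

try:
    while True:
        data, addr = sock.recvfrom(65507)
        # The first response line is typically "HTTP/1.1 200 OK"
        print(addr[0], data.split(b"\r\n")[0].decode(errors="replace"))
except socket.timeout:
    pass  # no more responders within the timeout
```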

    Thousands of phones and routers swept into proxy service, unbeknownst to users Read More »

    missouri-ag-sues-media-matters-over-its-x-research,-demands-donor-names

    Missouri AG sues Media Matters over its X research, demands donor names

    A photo of Elon Musk next to the logo for X, the social network formerly known as Twitter.

    Getty Images | NurPhoto

    Missouri Attorney General Andrew Bailey yesterday sued Media Matters in an attempt to protect Elon Musk and X from the nonprofit watchdog group’s investigations into hate speech on the social network. Bailey’s lawsuit claims that “Media Matters has used fraud to solicit donations from Missourians in order to trick advertisers into removing their advertisements from X, formerly Twitter, one of the last platforms dedicated to free speech in America.”

    Bailey didn’t provide much detail on the alleged fraud but claimed that Media Matters is guilty of “fraudulent manipulation of data on X.com.” That’s apparently a reference to Media Matters reporting that X placed ads for major brands next to posts touting Hitler and Nazis. X has accused Media Matters of manipulating the site’s algorithm by endlessly scrolling and refreshing.

    Bailey yesterday issued an investigative demand seeking names and addresses of all Media Matters donors who live in Missouri and a range of internal communications and documents regarding the group’s research on Musk and X. Bailey anticipates that Media Matters won’t provide the requested materials, so he filed the lawsuit asking Cole County Circuit Court for an order to enforce the investigative demand.

    “Because Media Matters has refused such efforts in other states and made clear that it will refuse any such efforts, the Attorney General seeks an order… compelling Media Matters to comply with the CID [Civil Investigative Demand] within 20 days,” the lawsuit said.

    Media Matters slams Musk and Missouri AG

    Media Matters, which is separately fighting similar demands made by Texas, responded to Missouri’s legal action in a statement provided to Ars today.

    “Far from the free speech advocate he claims to be, Elon Musk has actually intensified his efforts to undermine free speech by enlisting Republican attorneys general across the country to initiate meritless, expensive, and harassing investigations against Media Matters in an attempt to punish critics,” Media Matters President Angelo Carusone said. “This Missouri investigation is the latest in a transparent endeavor to squelch the First Amendment rights of researchers and reporters; it will have a chilling effect on news reporters.”

    Musk thanked Bailey for filing the lawsuit in a post that said, “Media Matters is doing everything it can to undermine the First Amendment. Truly an evil organization.”

    Bailey is seeking the names and addresses of all Media Matters donors from Missouri since January 1, 2023, and the amounts of each donation. He wants all promotional or marketing material sent to potential donors and documents showing how the donations were used.

    Ads next to pro-Nazi content

    Several of Bailey’s demands relate to the Media Matters article titled, “As Musk endorses antisemitic conspiracy theory, X has been placing ads for Apple, Bravo, IBM, Oracle, and Xfinity next to pro-Nazi content.” Bailey wants all “documents related to the article, or to the events described in the article.”

    The Media Matters article displayed images of advertisements next to pro-Nazi posts. Musk previously sued Media Matters over the article, claiming the group “manipulated the algorithms governing the user experience on X to bypass safeguards and create images of X’s largest advertisers’ paid posts adjacent to racist, incendiary content.”

    X said Media Matters did this by “endlessly scrolling and refreshing its unrepresentative, hand-selected feed, generating between 13 and 15 times more advertisements per hour than viewed by the average X user, repeating this inauthentic activity until it finally received pages containing the result it wanted: controversial content next to X’s largest advertisers’ paid posts.”

    X also sued the Center for Countering Digital Hate, but the lawsuit was thrown out by a federal judge yesterday.

    Missouri AG sues Media Matters over its X research, demands donor names Read More »

    wwdc-2024-starts-on-june-10-with-announcements-about-ios-18-and-beyond

    WWDC 2024 starts on June 10 with announcements about iOS 18 and beyond

    WWDC —

    Speculation is rampant that Apple will make its first big moves in generative AI.

    The logo for WWDC24.

    Apple

    Apple has announced dates for this year’s Worldwide Developers Conference (WWDC). WWDC24 will run from June 10 through June 14 at the company’s Cupertino, California, headquarters, but everything will be streamed online.

    Apple posted about the event with the following generic copy:

    Join us online for the biggest developer event of the year. Be there for the unveiling of the latest Apple platforms, technologies, and tools. Learn how to create and elevate your apps and games. Engage with Apple designers and engineers and connect with the worldwide developer community. All online and at no cost.

    As always, the conference will kick off with a keynote presentation on the first day, which is Monday, June 10. You can be sure Apple will use that event to at least announce the key features of its next round of annual software updates for iOS, iPadOS, macOS, watchOS, visionOS, and tvOS.

    We could also see new hardware—it doesn’t happen every year, but it has of late. We don’t yet know exactly what that hardware might be, though.

    Much of the speculation among analysts and commentators concerns Apple’s first move into generative AI. There have been reports that Apple may work with a partner like Google to include a chatbot in its operating system, that it has been considering designing its own AI tools, or that it could offer an AI App Store, giving users a choice between many chatbots.

    Whatever the case, Apple is playing catch-up with some of its competitors in generative AI and large language models even though it has been using other applications of AI across its products for a couple of years now. The company’s leadership will probably talk about it during the keynote.

    After the keynote, Apple usually hosts a “Platforms State of the Union” talk that delves deeper into its upcoming software updates, followed by hours of developer-focused sessions detailing how to take advantage of newly planned features in third-party apps.

    WWDC 2024 starts on June 10 with announcements about iOS 18 and beyond Read More »