RFK Jr. claws back $11.4B in CDC funding amid wave of top-level departures

Those departures follow that of Kevin Griffis, head of the CDC’s office of communications, who left last week; Robin Bailey, the agency’s chief operating officer, who left late last month; and Nirav Shah, a former CDC principal deputy director.

Pulled funding

Meanwhile, NBC News reported this afternoon that the Department of Health and Human Services (HHS) is pulling back $11.4 billion in funding from the agency, money that had been allocated to state and local health departments as well as other partners.

NBC reported that the funds were largely used for COVID-19 testing and vaccination, and to support community health workers and initiatives that address pandemic health disparities among high-risk and underserved populations, such as rural communities and minority populations. The funds also supported global COVID-19 projects.

“The COVID-19 pandemic is over, and HHS will no longer waste billions of taxpayer dollars responding to a non-existent pandemic that Americans moved on from years ago,” HHS Director of Communications Andrew Nixon said in a statement. “HHS is prioritizing funding projects that will deliver on President Trump’s mandate to address our chronic disease epidemic and Make America Healthy Again.”

State health departments told NBC News that they’re still evaluating the impact of the withdrawn funding. On Monday, some grantees received notices that read: “Now that the pandemic is over, the grants and cooperative agreements are no longer necessary as their limited purpose has run out.”

Since the public health emergency for COVID-19 was declared over in the US on May 11, 2023, more than 92,000 Americans have died from the pandemic virus, according to CDC data. In total, the pandemic has killed more than 1.2 million people in the US.

FBI probes arson of Tesla cars and facilities, says “this is domestic terrorism”

Anarchist blog in FBI’s reading list

The New York Post report said the anarchist blog being eyed by the FBI is run out of Salt Lake City, Utah. “In addition, the FBI identified the site Dogeque.st that has information [for] doxxing Tesla employees and locations across the country and [is] being run out of the African country of Sao Tome,” the news report said.

A Democratic congressman criticized the FBI’s decision to create a task force on Tesla-related crime.

“This is the political weaponization of the DOJ,” wrote US Rep. Dan Goldman (D-N.Y.), who previously served as lead counsel in Trump’s first impeachment trial. “Trump uses his official authority to defend his benefactor Elon Musk. The FBI then creates a task force to use our law enforcement to ‘crack down’ on adversaries of Musk’s.”

“Tesla Takedown” calls for peaceful protest

The New York Post report said the FBI is also “tracking a mass protest called ‘Tesla Takedown’ scheduled for March 29 calling for 500 demonstrations at Tesla showrooms and charging stations.” The group behind the protest is calling for peaceful demonstrations and said it opposes vandalism and violence.

A Tesla Takedown website says the planned demonstrations are part of the group’s “peaceful protest movement. We oppose violence, vandalism and destruction of property.” Tesla Takedown says that “Elon Musk is destroying our democracy, and he’s using the fortune he built at Tesla to do it” and urges people to sell their Teslas, dump their Tesla stock, and join the demonstrations.

CNBC quoted a Tesla Takedown spokesperson as saying that the “movement has been and always will be nonviolent. They want to scare us away from protesting Musk’s destruction—but standing up for free speech is essential to democracy. We will not be deterred.”

Three arrests

US Attorney General Pamela Bondi last week issued a statement highlighting three arrests of suspected arsonists. Each defendant faces five to 20 years in prison if convicted. One defendant threw “approximately eight Molotov cocktails at a Tesla dealership located in Salem, Oregon,” another tried to light Tesla cars on fire with Molotov cocktails in Colorado, and a third in South Carolina “wrote profane messages against President Trump around Tesla charging stations before lighting the charging stations on fire with Molotov cocktails,” the press release said.

“The days of committing crimes without consequence have ended,” Bondi said. “Let this be a warning: if you join this wave of domestic terrorism against Tesla properties, the Department of Justice will put you behind bars.”

After borking my Pixel 4a battery, Google borks me, too


The devil is in the details.

The Pixel 4a. It’s finally here! Credit: Google

It is an immutable law of nature that when you receive a corporate email with a subject line like “Changes coming to your Pixel 4a,” the changes won’t be the sort you like. Indeed, a more honest subject line would usually be: “You’re about to get hosed.”

So I wasn’t surprised, as I read further into this January missive from Google, that an “upcoming software update for your Pixel 4a” would “affect the overall performance and stability of its battery.”

How would my battery be affected? Negatively, of course. “This update will reduce your battery’s runtime and charging performance,” the email said. “To address this, we’re providing some options to consider.”

Our benevolent Google overlords were about to nerf my phone battery—presumably in the interests of “not having it erupt in flames,” though this was never actually made clear—but they recognized the problem, and they were about to provide compensation. This is exactly how these kinds of situations should be handled.

Google offered three options: $50 cash money, a $100 credit to Google’s online store, or a free battery replacement. It seemed fair enough. Yes, not having my phone for a week or two while I shipped it roundtrip to Google could be annoying, but at least the company was directly mitigating the harm it was about to inflict. Indeed, users might actually end up in better shape than before, given the brand-new battery.

So I was feeling relatively sunny toward the giant monopolist when I decided to spring for the 50 simoleons. My thinking was that 1) I didn’t want to lose my phone for a couple of weeks, 2) the update might not be that bad, in which case I’d be ahead by 50 bucks, and 3) I could always put the money towards a battery replacement if assumption No. 2 turned out to be mistaken.

The naïveté of youth!

I selected my $50 “appeasement” through an online form, and two days later, I received an email from Bharath on the Google Support Team.

Bharath wanted me to know that I was eligible for the money and it would soon be in my hands… once I performed a small, almost trivial task: giving some company I had never heard of my name, address, phone number, Social Security number, date of birth, and bank account details.

About that $50…

Google was not, in fact, just “sending” me $50. I had expected, since the problem involved their phones and their update, that the solution would require little or nothing from me. A check or prepaid credit card would arrive in the mail, perhaps, or a drone might deliver a crisp new bill from the sky. I didn’t know and didn’t care, so long as it wasn’t my problem.

But it was my problem. To get the cash, I had to create an account with something called “Payoneer.” This is apparently a reputable payments company, but I had never heard of it, and much about its operations is unclear. For instance, I was given three different ways to sign up depending on whether I 1) “already have a Payoneer account from Google,” 2) “don’t have an account,” or 3) “do have a Payoneer account that was not provided nor activated through Google.”

Say what now?

And though Google promised “no transaction fees,” Payoneer appears to charge an “annual account fee” of $29.95… but only to accounts that receive less than $2,000 through Payoneer in any consecutive 12-month period.

Does this fee apply to me if I sign up through the Google offer? I was directed to Payoneer support with any questions, but the company’s FAQ on the annual account fee doesn’t say.

If the fee does apply to me, do I need to sign up for a Payoneer account, give them all of my most personal financial information, wait the “10 to 18 business days” that Google says it will take to get my money, and then return to Payoneer so that I can cancel my account before racking up some $30 charge a year from now? And I’m supposed to do all this just to get… fifty bucks? One time?

It was far simpler for me to get a recent hundred-dollar rebate on a washing machine… and they didn’t need my SSN or bank account information.

(Reddit users also report that, if you use the wrong web browser to cancel your Payoneer account, you’re hit with an error that says: “This end point requires that the body of all requests be formatted as JSON.”)

Like Lando Calrissian, I realized that this deal was getting worse all the time.

I planned to write Bharath back to switch my “appeasement,” but then I noticed the fine print: No changes are possible after making a selection.

So—no money for me. On the scale of life’s crises, losing $50 is a minor one, and I resolved to move on, facing the world with a cheerful heart and a clear mind, undistracted by the many small annoyances our high-tech overlords continually strew upon the path.

Then the software update arrived.

A decimation situation

When Google said that the new Pixel 4a update would “reduce your battery’s runtime and charging performance,” it was not kidding. Indeed, the update basically destroyed the battery.

Though my phone was three years old, until January of this year, the battery still held up for all-day usage. The screen was nice, the (smallish) phone size was good, and the device remained plenty fast at all the basic tasks: texting, emails, web browsing, snapping photos. I’m trying to reduce both my consumerism and my e-waste, so I was planning to keep the device for at least another year. And even then, it would make a decent hand-me-down device for my younger kids.

After the update, however, the phone burned through a full battery charge in less than two hours. I could pull up a simple podcast app, start playing an episode, and watch the battery percentage decrement every 45 seconds or so. Using the phone was nearly impossible unless one was near a charging cable at all times.

To recap: My phone was shot, I had to jump through several hoops to get my money, and I couldn’t change my “appeasement” once I realized that it wouldn’t work for me.

Within the space of three days, I went from 1) being mildly annoyed at the prospect of having my phone messed with remotely to 2) accepting that Google was (probably) doing it for my own safety and was committed to making things right to 3) berating Google for ruining my device and then using a hostile, data-collecting “appeasement” program to act like it cared. This was probably not the impression Google hoped to leave in people’s minds when issuing the Pixel 4a update.

Removing the Pixel 4a’s battery can be painful, but not as painful as catching fire. Credit: iFixit

Cheap can be quite expensive

The update itself does not appear to be part of some plan to spy on us or to extract revenue but rather to keep people safe. The company tried to remedy the pain with options that, on the surface, felt reasonable, especially given the fact that batteries are well-known as consumable objects that degrade over time. And I’ve had three solid years of service with the 4a, which wasn’t especially expensive to begin with.

That said, I do blame Google in general for the situation. The inflexibility of the approach, the options that aren’t tailored for ease of use in specific countries, the outsourced tech support—these are all hallmarks of today’s global tech behemoths.

It is more efficient, from an algorithmic, employ-as-few-humans-as-possible perspective, to operate “at scale” by choosing global technical solutions over better local options, by choosing outsourced email support, by trying to avoid fraud (and employee time) through preventing program changes, by asking the users to jump through your hoops, by gobbling up ultra-sensitive information because it makes things easier on your end.

While this makes a certain kind of sense, it’s not fun to receive this kind of “efficiency.” When everything goes smoothly, it’s fine—but whenever there’s a problem, or questions arise, these kinds of “efficient, scalable” approaches usually just mean “you’re about to get screwed.”

In the end, Google is willing to pay me $50, but that money comes with its own cost. I’m not willing to pay with my time nor with the risk of my financial information, and I will increasingly turn to companies that offer a better experience, that care more about data privacy, that build with higher-quality components, and that take good care of customers.

No company is perfect, of course, and this approach costs a bit more, which butts up against my powerful urge to get a great deal on everything. I have to keep relearning the old lesson— as I am once again with this Pixel 4a fiasco—that cheap gear is not always the best value in the long run.

Photo of Nate Anderson

Current SEC chair cast only vote against suing Elon Musk, report says

SEC v. Musk still moving ahead

Before Musk bought Twitter for $44 billion, he purchased a 9 percent stake in the company and failed to disclose it within 10 days as required under US law. “Defendant Elon Musk failed to timely file with the SEC a beneficial ownership report disclosing his acquisition of more than five percent of the outstanding shares of Twitter’s common stock in March 2022, in violation of the federal securities laws,” the SEC said in the January 2025 lawsuit filed in US District Court for the District of Columbia. “As a result, Musk was able to continue purchasing shares at artificially low prices, allowing him to underpay by at least $150 million for shares he purchased after his beneficial ownership report was due.”

The SEC lawsuit against Musk is still moving forward, at least for now. Musk last week received a summons giving him 21 days to respond, according to a court filing.

Enforcement priorities are expected to change under the Trump administration, of course. Trump’s pick to replace Gensler, Paul Atkins, is waiting for Senate confirmation. Atkins testified to Congress in 2019 that the SEC should reduce its disclosure requirements.

Trump last month issued an executive order declaring sweeping power over independent agencies, including the SEC, Federal Trade Commission, and Federal Communications Commission. Trump also fired both FTC Democrats despite a US law and Supreme Court precedent stating that the president cannot fire commission members without good cause.

Another Trump executive order targets the alleged “weaponization of the federal government” and ordered an investigation into Biden-era enforcement actions taken by the SEC, FTC, and Justice Department. The Trump order’s language recalls Musk’s oft-repeated claim that the SEC was “harassing” him.

Can we make AI less power-hungry? These researchers are working on it.


As demand surges, figuring out the performance of proprietary models is half the battle.

Credit: Igor Borisenko/Getty Images


At the beginning of November 2024, the US Federal Energy Regulatory Commission (FERC) rejected Amazon’s request to buy an additional 180 megawatts of power directly from the Susquehanna nuclear power plant for a nearby data center. The commission argued that buying power directly, instead of getting it through the grid like everyone else, works against the interests of other grid users.

Demand for power in the US has been flat for nearly 20 years. “But now we’re seeing load forecasts shooting up. Depending on [what] numbers you want to accept, they’re either skyrocketing or they’re just rapidly increasing,” said Mark Christie, a FERC commissioner.

Part of the surge in demand comes from data centers, and their increasing thirst for power comes in part from running increasingly sophisticated AI models. As with all world-shaping developments, what set this trend into motion was vision—quite literally.

The AlexNet moment

Back in 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, AI researchers at the University of Toronto, were busy working on a convolutional neural network (CNN) for the ImageNet ILSVRC, an image-recognition contest. The contest’s rules were fairly simple: A team had to build an AI system that could categorize images sourced from a database comprising over a million labeled pictures.

The task was extremely challenging at the time, so the team figured they needed a really big neural net—way bigger than anything other research teams had attempted. AlexNet, named after the lead researcher, had multiple layers, with over 60 million parameters and 650,000 neurons. The problem with a behemoth like that was how to train it.

What the team had in their lab were a few Nvidia GTX 580s, each with 3GB of memory. As the researchers wrote in their paper, AlexNet was simply too big to fit on any single GPU they had. So they figured out how to split AlexNet’s training phase between two GPUs working in parallel—half of the neurons ran on one GPU, and the other half ran on the other GPU.

AlexNet won the 2012 competition by a landslide, but the team accomplished something way more profound. The size of AI models was once and for all decoupled from what was possible to do on a single CPU or GPU. The genie was out of the bottle.

(The AlexNet source code was recently made available through the Computer History Museum.)

The balancing act

After AlexNet, using multiple GPUs to train AI became a no-brainer. Increasingly powerful AIs used tens of GPUs, then hundreds, thousands, and more. But it took some time before this trend started making its presence felt on the grid. According to an Electric Power Research Institute (EPRI) report, the power consumption of data centers was relatively flat between 2010 and 2020. That doesn’t mean the demand for data center services was flat, but the improvements in data centers’ energy efficiency were sufficient to offset the fact we were using them more.

Two key drivers of that efficiency were the increasing adoption of GPU-based computing and improvements in the energy efficiency of those GPUs. “That was really core to why Nvidia was born. We paired CPUs with accelerators to drive the efficiency onward,” said Dion Harris, head of Data Center Product Marketing at Nvidia. In the 2010–2020 period, Nvidia data center chips became roughly 15 times more efficient, which was enough to keep data center power consumption steady.

All that changed with the rise of enormous transformer-based large language models, starting with ChatGPT in 2022. “There was a very big jump when transformers became mainstream,” said Mosharaf Chowdhury, a professor at the University of Michigan. (Chowdhury is also at the ML Energy Initiative, a research group focusing on making AI more energy-efficient.)

Nvidia has kept up its efficiency improvements, with a ten-fold boost between 2020 and today. The company also kept improving chips that were already deployed. “A lot of where this efficiency comes from was software optimization. Only last year, we improved the overall performance of Hopper by about 5x,” Harris said. Despite these efficiency gains, based on Lawrence Berkeley National Laboratory estimates, the US saw data center power consumption shoot up from around 76 TWh in 2018 to 176 TWh in 2023.

The AI lifecycle

LLMs work with tens of billions of neurons, approaching a number rivaling—and perhaps even surpassing—the count in the human brain. GPT-4 is estimated to work with around 100 billion neurons distributed over 100 layers and over 100 trillion parameters that define the strength of connections among the neurons. These parameters are set during training, when the AI is fed huge amounts of data and learns by adjusting these values. That’s followed by the inference phase, where it gets busy processing queries coming in every day.

The training phase is a gargantuan computational effort—OpenAI supposedly used over 25,000 Nvidia Ampere-generation A100 GPUs running on all cylinders for 100 days. The estimated power consumption is 50 gigawatt-hours, which is enough to power a medium-sized town for a year. According to numbers released by Google, training accounts for 40 percent of the total AI model power consumption over its lifecycle. The remaining 60 percent is inference, where power consumption figures are less spectacular but add up over time.
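Those training figures can be sanity-checked with a bit of arithmetic. A sketch in plain Python; the GPU count, duration, and total energy are the estimates quoted above, not official OpenAI numbers:

```python
# Implied average power draw per GPU, from the estimates above:
# ~25,000 GPUs running continuously for 100 days, ~50 GWh total.
gpus = 25_000
hours = 100 * 24                 # 100 days of round-the-clock training
total_wh = 50e9                  # 50 GWh expressed in watt-hours

gpu_hours = gpus * hours         # 60,000,000 GPU-hours
avg_watts = total_wh / gpu_hours
print(round(avg_watts))          # ~833 W per GPU
```

That works out to roughly 833 W per GPU, well above an A100’s rated power, which suggests the 50 GWh estimate folds in cooling and other facility overhead, or simply that these inputs are rough.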

Trimming AI models down

The increasing power consumption has pushed the computer science community to think about how to keep memory and computing requirements down without sacrificing performance too much. “One way to go about it is reducing the amount of computation,” said Jae-Won Chung, a researcher at the University of Michigan and a member of the ML Energy Initiative.

One of the first things researchers tried was a technique called pruning, which aimed to reduce the number of parameters. Yann LeCun, now the chief AI scientist at Meta, proposed this approach back in 1989, terming it (somewhat menacingly) “optimal brain damage.” You take a trained model and remove some of its parameters, usually targeting the ones with values at or near zero, which add almost nothing to the overall performance. “You take a large model and distill it into a smaller model trying to preserve the quality,” Chung explained.
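As an illustration, magnitude-based pruning can be sketched in a few lines of plain Python. This is a toy over a flat list of weights, not LeCun’s actual method, which uses second-derivative information to pick what to remove:

```python
def prune(weights, threshold=1e-3):
    """Magnitude pruning: drop weights whose absolute value falls below
    the threshold, since they contribute almost nothing to the output."""
    return [w for w in weights if abs(w) >= threshold]

# hypothetical trained weights; the tiny ones are pruning candidates
weights = [0.82, -0.0004, 0.13, 0.0, -0.51, 0.0009]
print(prune(weights))   # [0.82, 0.13, -0.51]
```

A real pipeline would then fine-tune the smaller model to recover any accuracy lost in the process.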

You can also make those remaining parameters leaner with a trick called quantization. Parameters in neural nets are usually represented as a single-precision floating point number, occupying 32 bits of computer memory. “But you can change the format of parameters to a smaller one that reduces the amount of needed memory and makes the computation faster,” Chung said.

Shrinking an individual parameter has a minor effect, but when there are billions of them, it adds up. It’s also possible to do quantization-aware training, which performs quantization at the training stage. According to Nvidia, which implemented quantization training in its AI model optimization toolkit, this should cut the memory requirements by 29 to 51 percent.
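The core of quantization is just rescaling: map the floats onto a small integer grid and remember the scale factor. A minimal symmetric int8 sketch in plain Python, for illustration only; production toolkits also handle outliers, per-channel scales, and quantization-aware training:

```python
def quantize(values, bits=8):
    """Symmetric quantization: map floats to signed integers in
    [-(2**(bits-1)-1), 2**(bits-1)-1], storing one shared scale factor."""
    qmax = 2 ** (bits - 1) - 1                  # 127 for int8
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) for v in values], scale

def dequantize(qvalues, scale):
    """Recover approximate floats from the stored integers."""
    return [q * scale for q in qvalues]

weights = [0.5, -0.25, 0.125, -1.0]   # hypothetical parameters
q, scale = quantize(weights)          # integers that fit in 8 bits, not 32
restored = dequantize(q, scale)       # close to the originals
```

Each parameter shrinks from 32 bits to 8, a 4x memory saving, at the cost of a small rounding error per weight.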

Pruning and quantization belong to a category of optimization techniques that rely on tweaking the way AI models work internally—how many parameters they use and how memory-intensive their storage is. These techniques are like tuning an engine in a car to make it go faster and use less fuel. But there’s another category of techniques that focus on the processes computers use to run those AI models instead of the models themselves—akin to speeding a car up by timing the traffic lights better.

Finishing first

Apart from optimizing the AI models themselves, we could also optimize the way data centers run them. Splitting the training phase workload evenly among 25,000 GPUs introduces inefficiencies. “When you split the model into 100,000 GPUs, you end up slicing and dicing it in multiple dimensions, and it is very difficult to make every piece exactly the same size,” Chung said.

GPUs that have been given significantly larger workloads have increased power consumption that is not necessarily balanced out by those with smaller loads. Chung figured that if GPUs with smaller workloads ran slower, consuming much less power, they would finish roughly at the same time as GPUs processing larger workloads operating at full speed. The trick was to pace each GPU in such a way that the whole cluster would finish at the same time.

To make that happen, Chung built a software tool called Perseus that identifies the scope of the workload assigned to each GPU in a cluster. Perseus takes the estimated time needed to complete the largest workload on a GPU running at full speed. It then estimates how much computation must be done on each of the remaining GPUs and determines what speed to run them at so they all finish at the same time. “Perseus precisely slows some of the GPUs down, and slowing down means less energy. But the end-to-end speed is the same,” Chung said.
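The balancing idea can be illustrated with a toy calculation in plain Python. This is only the core intuition, not Perseus’s actual scheduling algorithm, and the workload numbers are hypothetical:

```python
def pace(workloads):
    """Pick a speed fraction for each GPU so that every one finishes
    exactly when the most loaded GPU, running at full speed, does."""
    t_end = max(workloads)        # time for the largest workload at speed 1.0
    return [w / t_end for w in workloads]

loads = [100, 80, 60, 100]        # hypothetical work units per GPU
speeds = pace(loads)              # [1.0, 0.8, 0.6, 1.0]
finish = [w / s for w, s in zip(loads, speeds)]   # all equal: nobody waits
```

The lightly loaded GPUs run at 60 to 80 percent speed, and because a chip’s power draw falls off faster than linearly as clock speed and voltage drop, they save energy without delaying the job.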

The team tested Perseus by training the publicly available GPT-3, as well as other large language models and a computer vision AI. The results were promising. “Perseus could cut up to 30 percent of energy for the whole thing,” Chung said. He said the team is talking about deploying Perseus at Meta, “but it takes a long time to deploy something at a large company.”

Are all those optimizations to the models and the way data centers run them enough to keep us in the green? It takes roughly a year or two to plan and build a data center, but it can take longer than that to build a power plant. So are we winning this race or losing? It’s a bit hard to say.

Back of the envelope

As the increasing power consumption of data centers became apparent, research groups tried to quantify the problem. A Lawrence Berkeley National Laboratory team estimated that data centers’ annual energy draw in 2028 would be between 325 and 580 TWh in the US—that’s between 6.7 and 12 percent of total US electricity consumption. The International Energy Agency thinks it will be around 6 percent by 2026. Goldman Sachs Research says 8 percent by 2030, while EPRI claims between 4.6 and 9.1 percent by 2030.

EPRI also warns that the impact will be even worse because data centers tend to be concentrated at locations investors think are advantageous, like Virginia, which already sends 25 percent of its electricity to data centers. In Ireland, data centers are expected to consume one-third of the electricity produced in the entire country in the near future. And that’s just the beginning.

Running huge AI models like ChatGPT is one of the most power-intensive things that data centers do, but it accounts for roughly 12 percent of their operations, according to Nvidia. That is expected to change if companies like Google start to weave conversational LLMs into their most popular services. The EPRI report estimates that a single Google search today uses around 0.3 watt-hours of energy, while a single ChatGPT query bumps that up to 2.9 watt-hours. Based on those values, the report estimates that an AI-powered Google search would require Google to deploy 400,000 new servers that would consume 22.8 TWh per year.
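It’s worth unpacking what those report figures imply per machine. Plain Python arithmetic on the numbers above; the per-server result is a derivation from them, not a figure the report states:

```python
servers = 400_000
annual_twh = 22.8
wh_per_year = annual_twh * 1e12       # TWh -> Wh
hours_per_year = 365 * 24             # 8,760

watts_per_server = wh_per_year / servers / hours_per_year
print(round(watts_per_server))        # ~6,507 W, i.e. about 6.5 kW per server
```

A continuous draw of roughly 6.5 kW is plausible for a multi-GPU inference server, so the report’s numbers are at least internally consistent.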

“AI searches take 10x the electricity of a non-AI search,” Christie, the FERC commissioner, said at a FERC-organized conference. When FERC commissioners are using those numbers, you’d think there would be rock-solid science backing them up. But when Ars asked Chowdhury and Chung about their thoughts on these estimates, they exchanged looks… and smiled.

Closed AI problem

Chowdhury and Chung don’t think those numbers are particularly credible. They feel we know nothing about what’s going on inside commercial AI systems like ChatGPT or Gemini, because OpenAI and Google have never released actual power-consumption figures.

“They didn’t publish any real numbers, any academic papers. The only number, 0.3 watt-hours per Google search, appeared in some blog post or other PR-related thingy,” Chowdhury said. We don’t know how this power consumption was measured, on what hardware, or under what conditions, he said. But at least it came directly from Google.

“When you take that 10x Google vs ChatGPT equation or whatever—one part is half-known, the other part is unknown, and then the division is done by some third party that has no relationship with Google nor with OpenAI,” Chowdhury said.

Google’s “PR-related thingy” was published back in 2009, while the 2.9-watt-hours-per-ChatGPT-query figure was probably based on a comment about the number of GPUs needed to train GPT-4 made by Jensen Huang, Nvidia’s CEO, in 2024. That means the “10x AI versus non-AI search” claim was actually based on power consumption achieved on entirely different generations of hardware, separated by 15 years. “But the number seemed plausible, so people keep repeating it,” Chowdhury said.

All reports we have today were done by third parties that are not affiliated with the companies building big AIs, and yet they arrive at weirdly specific numbers. “They take numbers that are just estimates, then multiply those by a whole lot of other numbers and get back with statements like ‘AI consumes more energy than Britain, or more than Africa, or something like that.’ The truth is they don’t know that,” Chowdhury said.

He argues that better numbers would require benchmarking AI models using a formal testing procedure that could be verified through the peer-review process.

As it turns out, the ML Energy Initiative defined just such a testing procedure and ran the benchmarks on any AI models they could get ahold of. The group then posted the results online on their ML.ENERGY Leaderboard.

AI-efficiency leaderboard

To get good numbers, the first thing the ML Energy Initiative got rid of was the idea of estimating how power-hungry GPU chips are by using their thermal design power (TDP), which is basically their maximum power consumption. Using TDP was a bit like rating a car’s efficiency based on how much fuel it burned running at full speed. That’s not how people usually drive, and that’s not how GPUs work when running AI models. So Chung built ZeusMonitor, an all-in-one solution that measured GPU power consumption on the fly.
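The underlying idea is simple: sample the GPU’s actual power draw and integrate it over time, rather than assuming the chip always burns its TDP. A sketch in plain Python; the sample values and the 700 W TDP are hypothetical, and ZeusMonitor’s real implementation presumably reads power counters exposed by the GPU driver:

```python
def energy_joules(power_samples, dt):
    """Integrate instantaneous power readings (watts), taken every dt
    seconds, into energy: E ~= sum(P_i * dt), in joules."""
    return sum(power_samples) * dt

# hypothetical trace: the GPU spends little time near its 700 W TDP
trace = [310.0, 420.0, 390.0, 150.0]      # watts, sampled every 0.5 s
measured = energy_joules(trace, dt=0.5)   # 635.0 J actually consumed
tdp_estimate = 700.0 * len(trace) * 0.5   # 1400.0 J if we assumed TDP
```

In this made-up trace, the TDP-based guess overstates real consumption by more than 2x, which is exactly why the team abandoned it.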

For the tests, his team used setups with Nvidia’s A100 and H100 GPUs, the ones most commonly used at data centers today, and measured how much energy they used running various large language models (LLMs), diffusion models that generate pictures or videos based on text input, and many other types of AI systems.

The largest LLM included in the leaderboard was Meta’s Llama 3.1 405B, an open-source chat-based AI with 405 billion parameters. It consumed 3352.92 joules of energy per request running on two H100 GPUs. That’s around 0.93 watt-hours—significantly less than the 2.9 watt-hours quoted for ChatGPT queries. These measurements confirmed the improvements in the energy efficiency of hardware. Mixtral 8x22B was the largest LLM the team managed to run on both Ampere and Hopper platforms. Running the model on two Ampere GPUs resulted in 0.32 watt-hours per request, compared to just 0.15 watt-hours on one Hopper GPU.
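The joules-to-watt-hours conversion behind that comparison is just a division by 3,600 (plain Python, using the leaderboard figure quoted above):

```python
JOULES_PER_WATT_HOUR = 3600      # 1 Wh = 1 W sustained for 3,600 s

llama_joules = 3352.92           # Llama 3.1 405B, per request (leaderboard)
llama_wh = llama_joules / JOULES_PER_WATT_HOUR
print(round(llama_wh, 2))        # 0.93 Wh, vs the 2.9 Wh quoted for ChatGPT
```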

What remains unknown, however, is the performance of proprietary models like GPT-4, Gemini, or Grok. The ML Energy Initiative team says it’s very hard for the research community to start coming up with solutions to the energy efficiency problems when we don’t even know what exactly we’re facing. We can make estimates, but Chung insists they need to be accompanied by error-bound analysis. We don’t have anything like that today.

The most pressing issue, according to Chung and Chowdhury, is the lack of transparency. “Companies like Google or OpenAI have no incentive to talk about power consumption. If anything, releasing actual numbers would harm them,” Chowdhury said. “But people should understand what is actually happening, so maybe we should somehow coax them into releasing some of those numbers.”

Where rubber meets the road

“Energy efficiency in data centers follows the trend similar to Moore’s law—only working at a very large scale, instead of on a single chip,” Nvidia’s Harris said. The power consumption per rack, a data center unit typically housing 10 to 14 Nvidia GPUs, is going up, he said, but performance per watt is getting better.

“When you consider all the innovations going on in software optimization, cooling systems, MEP (mechanical, electrical, and plumbing), and GPUs themselves, we have a lot of headroom,” Harris said. He expects this large-scale variant of Moore’s law to keep going for quite some time, even without any radical changes in technology.

There are also more revolutionary technologies looming on the horizon. The idea that drove companies like Nvidia to their current market status was the concept that you could offload certain tasks from the CPU to dedicated, purpose-built hardware. But now, even GPUs will probably use their own accelerators in the future. Neural nets and other parallel computation tasks could be implemented on photonic chips that use light instead of electrons to process information. Photonic computing devices are orders of magnitude more energy-efficient than the GPUs we have today and can run neural networks literally at the speed of light.

Another innovation to look forward to is 2D semiconductors, which enable building incredibly small transistors and stacking them vertically, vastly improving the computation density possible within a given chip area. “We are looking at a lot of these technologies, trying to assess where we can take them,” Harris said. “But where rubber really meets the road is how you deploy them at scale. It’s probably a bit early to say where the future bang for buck will be.”

The problem is that when we make a resource more efficient, we often simply end up using it more, a dynamic known as the Jevons paradox since the beginnings of the industrial age. But will AI energy consumption increase so much that it causes an apocalypse? Chung doesn’t think so. According to Chowdhury, if we run out of energy to power our progress, we will simply slow down.

“But people have always been very good at finding the way,” Chowdhury added.


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

Can we make AI less power-hungry? These researchers are working on it. Read More »


David Blaine shows his hand in Do Not Attempt


NatGeo docuseries follows Blaine around the world to learn the secrets of ordinary people doing remarkable feats.

Magician David Blaine smiles while running his hand through a flame. Credit: National Geographic/Dana Hayes

Over the course of his long career, magician and endurance performer David Blaine has taken on all kinds of death-defying feats: catching a bullet in his teeth, fasting for 44 days, or holding his breath for a record-breaking 17 minutes and 4 seconds, to name a few. Viewers will get to see a different side of Blaine as he travels the world to meet kindred spirits from a wide range of cultures in David Blaine Do Not Attempt, a new six-episode docuseries from National Geographic.

(Some spoilers below.)

The series was shot over three calendar years (2022-2024) in nine different countries and features Blaine interacting with, and learning from, all manner of daredevils, athletes, street performers, and magicians. In Southeast Asia, for instance, he watches practitioners of an Indonesian martial art called Debus manipulate razor blades in their mouths and eat nails. (There is no trick to this, just conditioned endurance to pain, as Blaine discovered when he attempted to eat nails; his throat was sore for days.) He braves placing scorpions on his body, breaks a bottle with his head, and sets himself on fire in Brazil while jumping off a high bridge.

One of the elements that sets this series apart from Blaine’s previous magical specials is his willingness to be filmed practicing and training to do the various featured stunts, including early failed attempts. This makes him seem more vulnerable and immensely likable—even if it made him personally uncomfortable during filming.

David Blaine and Amandeep Singh prepare to break bottles with their fists. National Geographic

“I’ve always kept that part hidden,” Blaine told Ars. “Normally I work for a few years and I develop [a stunt] until I feel pretty good about it, and then I go and do the stunt and push myself as far as possible. But in this scenario, it was so many places, so many people, so many events, so many feats, so many things to learn so fast. So it was me in a way that I never liked to show myself: awkward and uncomfortable and screaming and laughing. It’s the things that as a magician, I always hide. As a magician, I try to be very monotone and let the audience react. For this series, I was the spectator to the magic, and it was, for me, very uncomfortable. But I was watching these amazing performers—what I consider to be magicians.”

Safety first

The task of keeping Blaine and the entire crew safe in what are unquestionably dangerous situations falls to safety expert Sebastian “Bas” Pot. “I joke that my title is Glorified Nanny,” Pot told Ars. “I specialize in taking people to very remote locations where they want to do insane things. I have three basic rules: No one dies, everyone gets paid, and we all smile and laugh every day. If I achieve those three things, my job is done.” He deliberately keeps himself out of the shot; there is only one scene in Do Not Attempt where we see Pot’s face as he’s discussing the risks of a stunt with Blaine.

Blaine has always taken on risks, but because he has historically hidden his preparation from public view, viewers might not realize how cautious he really is. “What people tend to forget about guys like David is that they’re very calculated,” said Pot. The biggest difference between working with Blaine and other clients? “Normally I’ll do everything, I will never ask anyone to do anything that I wouldn’t do myself,” said Pot. “David is taking huge risks, and there’s a lot that he does that I wouldn’t do.”

Like Blaine, Pot also emphasized the importance of repetition to safety. In addition, “A huge amount of it is keeping the calm on set, listening and observing and not getting caught up in the excitement of what’s going on,” he said. While he uses some basic technology for tasks like measuring wind speed, checking for concussion, or monitoring vital signs, for the most part keeping the set safe “is very much about switching off from the technology,” he said.

Ken Stornes leaps from a platform in a Norwegian death dive. National Geographic/Dana Hayes

And when everyone else on set is watching Blaine, “I’m looking outwards, because I’ve got enough eyes on him,” said Pot. There was only one bad accident during filming, involving a skydiving crew member during the Arctic Circle episode who suffered a spinal fracture after a bad landing. The crew member recuperated and was back in the wind tunnel practicing within a month.

This is the episode where Blaine attempts a Viking “death dive” into a snow drift under the tutelage of a Norwegian man named Ken Stornes, with one key difference: Stornes jumps from much greater heights. He also participates in a skydive. But the episode mostly focuses on Blaine’s training with free divers under the ice to prepare for a stunt in which he swims from one point under Finnish ice to another, pulling himself along with a rope while holding his breath. A large part of his motivation for attempting it was his failed 2006 “Drowned Alive” seven-day stunt in front of Lincoln Center in New York. (He sustained liver and kidney damage as a result.)

“One of my favorite quotes is Churchill, when he says, ‘Success is the ability to go from one failure to the next failure with enthusiasm,'” said Blaine. “That’s what this entire series is. It’s these incredible artists and performers and conservationists and people that do these incredible feats, but it’s the thousands of hours of work, training, failure, repeat that you don’t see that makes what they do seem magical. There’s no guidebook for what they’re doing. But they’ve developed these things to the point that when I was watching them, I’m crying with joy. I can’t believe that what I’m seeing is really happening in front of my eyes. It is magical. And it’s because of the amount of repetition, work, failure, repeat that they put in behind the curtain that you don’t see.”

This time, Blaine succeeded. “It was an incredible experience with these artists that have taken this harsh environment and turned it into a wonderland,” said Blaine of his Arctic experience. “The free divers go under three and a half feet of ice, hold their breath. There’s no way out. They have to find the exit point.”

“When you stop and look, you forget that you’re in this extreme environment and suddenly it’s the most beautiful surroundings, unlike anything that I’ve ever seen,” he said. “It’s almost like being in outer space. And when you’re in that extreme and dangerous situation, there’s this camaraderie, they’re all in it together. At the same time, they’re all very alert. There’s no distractions. Nobody’s thinking about messages, phones, bills. Everybody’s right there in that moment. And you’re very aware of everything around you in a way that normally in the real world doesn’t exist.”

Blaine admits that his attitude toward risk has changed somewhat with age. “I’m older and I have a daughter, and therefore I don’t want to do something where, oh, it went wrong and it’s the worst-case scenario,” he said. “So I have been very careful. If something seemed like the risk wasn’t worth it, I backed away. For some of these things, I would just have to watch, study, learn, take time off, come back. I wouldn’t do it unless I felt that the master who was sharing their skillset with me felt that I could pull it off. There was a trust, and I was able to listen and follow exactly. That ability to listen to directions and commit to something is a very necessary part to pulling something off like this.”

Granted, he didn’t always listen. When he deliberately attracted a swarm of bees to make a “bee beard,” he was advised to wear a white T-shirt to avoid getting stung. But black is Blaine’s signature color, and he decided to stick with it. He did indeed get stung about a dozen times but took the pain in stride. “He takes responsibility for him,” Pot (who is a beekeeper) said of that decision. “I’d tell a crew member to go change their T-shirt and they would.”

The dedication to proper preparation and training is evident throughout Do Not Attempt, but particularly in the Southeast Asia-centric episode where Blaine attempts to kiss a venomous King Cobra—what Pot considers to be the most dangerous stunt in the series. “The one person I’ve ever had die was a snake expert in Venezuela years ago, who got bitten by his own snake because he chose not to follow the safety protocols we had put in place,” said Pot.

Kissing a cobra

So there were weeks of preparation before Blaine even attempted the stunt, guided by an Indonesian Debus practitioner named Fiitz, who can read the creatures’ body language so effortlessly he seems to be dancing with the snakes. (Note: no animals were harmed over the course of filming.) The final shot (see clip below) took 10 days to film. Antivenom was naturally on hand, but while antivenom might save your life if you’re bitten by a King Cobra, “the journey you’re going to go on will be hell,” Pot said. “You can still have massive necrosis, lose a limb, it might take weeks—there’s no guarantees at all [for recovery].” And administering antivenom can induce cardiac shock if it’s not done correctly. “You don’t want some random set medic reading instructions off Google on how to give antivenom,” said Pot.

David Blaine kisses a King Cobra with the expert guidance of Debus practitioner Fiitz.

Blaine’s genuine appreciation for the many performers he encounters in his journey is evident in every frame. “[The experience] changed me in a way that you can’t simply explain,” Blaine said. “It was incredible to discover these kindred spirits all around the world, people who had these amazing passions. Many of them had to go against what everybody said was possible. Many of them had to fail, repeat, embarrass themselves, risk everything, and learn. That was one of the greatest experiences: discovering this unification of all these people from all different parts of the world that I felt had that theme in common. It was nice to be there firsthand, getting a glimpse into their world or seeing what drives them.”

“The other part that was really special: I became a person that gets to watch real magic happening in front of my eyes,” Blaine continued. “When I’m up in the sky watching [a skydiver named] Inka, I’m actually crying tears of joy because it’s so compelling and so beautiful. So many of these places around the world had these amazing performers. Across the board, each place, every continent, every person, every performer has given me a gift that I’ll cherish for the rest of my life.”

David Blaine Do Not Attempt premieres tonight on National Geographic and starts streaming tomorrow on Disney+ and Hulu.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

David Blaine shows his hand in Do Not Attempt Read More »


Anthropic’s new AI search feature digs through the web for answers

Caution over citations and sources

Claude users should be warned that large language models (LLMs) like those that power Claude are notorious for sneaking in plausible-sounding confabulated sources. A recent survey of citation accuracy among LLM-based web search assistants showed a 60 percent error rate. That particular study did not include Anthropic’s new search feature because it took place before this current release.

When using web search, Claude provides citations for information it includes from online sources, ostensibly helping users verify facts. From our informal and unscientific testing, Claude’s search results appeared fairly accurate and detailed at a glance, but that is no guarantee of overall accuracy. Anthropic did not release any search accuracy benchmarks, so independent researchers will likely examine that over time.


A screenshot example of what Anthropic Claude’s web search citations look like, captured March 21, 2025. Credit: Benj Edwards

Even if Claude search were, say, 99 percent accurate (a number we are making up as an illustration), the 1 percent chance it is wrong may come back to haunt you later if you trust it blindly. Before accepting any source of information delivered by Claude (or any AI assistant) for any meaningful purpose, vet it very carefully using multiple independent non-AI sources.
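To see why even a small per-citation error rate matters, consider how errors compound across many citations. A quick illustrative sketch, using the same made-up 99 percent figure from above (not a measured number):

```python
def p_at_least_one_error(per_citation_accuracy: float, n_citations: int) -> float:
    """Probability that at least one of n independent citations is wrong."""
    return 1.0 - per_citation_accuracy ** n_citations

# At a hypothetical 99% per-citation accuracy, checking 100 citations
# still leaves roughly a 63% chance of encountering at least one bad one.
print(f"{p_at_least_one_error(0.99, 100):.0%}")
```

The assumption of independence between citations is a simplification, but the compounding effect is the point: small per-item error rates add up quickly at scale.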

A partnership with Brave under the hood

Behind the scenes, it looks like Anthropic partnered with Brave Search, from Brave Software, a company perhaps best known for its web browser, to power the search feature. Brave Search markets itself as a “private search engine,” which feels in line with how Anthropic likes to market itself as an ethical alternative to Big Tech products.

Simon Willison discovered the connection between Anthropic and Brave through Anthropic’s subprocessor list (a list of third-party services that Anthropic uses for data processing), which added Brave Search on March 19.

He further demonstrated the connection on his blog by asking Claude to search for pelican facts. He wrote, “It ran a search for ‘Interesting pelican facts’ and the ten results it showed as citations were an exact match for that search on Brave.” He also found evidence in Claude’s own outputs, which referenced “BraveSearchParams” properties.

The Brave engine under the hood has implications for individuals, organizations, or companies that might want to block Claude from accessing their sites since, presumably, Brave’s web crawler is doing the web indexing. Anthropic did not mention how sites or companies could opt out of the feature. We have reached out to Anthropic for clarification.

Anthropic’s new AI search feature digs through the web for answers Read More »


Judge orders Musk and DOGE to delete personal data taken from Social Security

The lawsuit was filed by the American Federation of State, County and Municipal Employees; the Alliance for Retired Americans; and American Federation of Teachers. “Never before has a group of unelected, unappointed, and unvetted individuals—contradictorily described as White House employees, employees of either existing or putative agencies (multiple and many), and undefined ‘advisors’—sought or gained access to such sensitive information from across the federal government,” the lawsuit said.

A temporary restraining order preserves the status quo until a preliminary injunction hearing can be held, although the legal standards for granting a temporary restraining order or preliminary injunction are essentially the same, Hollander wrote. A temporary restraining order lasts 14 days by default but can be extended.

“In my view, plaintiffs have shown a likelihood of success on the merits as to their claim that the access to records provided by SSA to the DOGE Team does not fall within the need-to-know exception to the Privacy Act. Therefore, the access violates both the Privacy Act and the APA,” Hollander wrote.

The SSA has meanwhile been hit with DOGE-fueled budget cuts affecting its operations.

The order

The order says the SSA must cut off DOGE’s access. Musk, Gleason, and all other DOGE team members and affiliates “shall disgorge and delete all non-anonymized PII [personally identifiable information] data in their possession or under their control, provided from or obtained, directly or indirectly, from any SSA system of record to which they have or have had access, directly or indirectly, since January 20, 2025,” it says.

The DOGE defendants are also prohibited “from installing any software on SSA devices, information systems, or systems of record, and shall remove any software that they previously installed since January 20, 2025, or which has been installed on their behalf,” and are prohibited “from accessing, altering, or disclosing any SSA computer or software code.”

The SSA is allowed to provide DOGE with redacted or anonymized records, and may provide “access to discrete, particularized, and non-anonymized data, in accordance with the Privacy Act” under certain conditions. “SSA must first obtain from the DOGE Team member, in writing, and subject to possible review by the Court, a detailed explanation as to the need for the record and why, for said particular and discrete record, an anonymized or redacted record is not suitable for the specified use,” the order said. “The general and conclusory explanation that the information is needed to search for fraud or waste is not sufficient to establish need.”

Judge orders Musk and DOGE to delete personal data taken from Social Security Read More »


Rocket Report: Falcon 9 may smash reuse record; Relativity roving to Texas?


All the news that’s fit to lift

“It is what he has always dreamt of.”

The Falcon 9 booster that launched Crew 10 is seen shortly after landing near its launch site in Florida. Credit: SpaceX

Welcome to Edition 7.36 of the Rocket Report! Well, after nine months, NASA astronauts Butch Wilmore and Suni Williams are finally back on Earth, safe and sound. That brings to a close one of the stranger and more dramatic human spaceflight stories in years. We’re glad they’re finally home, soon to be reunited with their families.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Summary of 2024 launch activity. In its annual launch report, released earlier this month, Bryce Tech analyzed the 259 orbital launches conducted last year. Among the major trends the analysts found: nearly 60 percent of all launches were conducted by US providers; commercial providers accounted for about 70 percent of launches; and small satellites, primarily for communications, represented 97 percent of all spacecraft launched.

Trends dominated by Starlink launches … SpaceX conducted more than half of the launches last year (134), putting 2,390 spacecraft into orbit (the vast majority of which were Starlink satellites). The next closest competitor was China, with 48 launches and 186 spacecraft. The nearest US competitor to SpaceX was Rocket Lab, with 14 launches and 33 spacecraft. The competition in “upmass,” that is, total kilograms lofted into orbit, was even less close. SpaceX put 1.86 million kg into space, followed by China (164,000 kg) and Roscosmos (76,000 kg). The closest US competitor was United Launch Alliance, at 29,000 kg. Put another way, for every kilogram ULA put into orbit, SpaceX lofted 66.

MaiaSpace inks first commercial customer. MaiaSpace, a French subsidiary of ArianeGroup founded in 2022, signed an agreement to fly multiple missions for Exotrail’s SpaceVan orbital transfer vehicle beginning in 2027. The partnership with Exotrail provides an early vote of confidence that the reusable Maia rocket can increase Europe’s sovereign launch capabilities, Payload reports. This is one of several launch agreements signed recently by Exotrail.

Hitting the trail … Exotrail flew its first SpaceVan mission on SpaceX’s Transporter-9 flight in November 2023 and deployed the Endurosat-built “EXO-0” cubesat in LEO after three months in orbit. In November, the company signed a deal with Arianespace to launch Exotrail’s first SpaceVan mission to geostationary transfer orbit in the latter half of 2026. After leaving Ariane 64, SpaceVan will tow a customer satellite to GEO, demonstrating its ability to deliver satellites to the full range of orbital trajectories. (submitted by gma)


Electron launches twice in three days. Rocket Lab completed the deployment of a constellation of Internet of Things satellites for French company Kinéis with an Electron launch on Monday. The launch was the fifth and final mission under a contract signed by the companies in 2021. Each launch carried five satellites, weighing 28 kilograms each, to complete a 25-satellite constellation.

Continuing to steadily increase cadence … For Rocket Lab, this was the second launch in a little more than 72 hours, after another Electron launched a radar imaging satellite for Japanese company iQPS March 14. It was the fourth launch so far this year for Rocket Lab, which previously stated it expects to perform more than 20 Electron launches, including the HASTE suborbital version, this year.

Pangea raises Series A funding. The Spanish startup announced this week that it has raised 23 million euros ($25 million) in Series A funding, European Spaceflight reports. This funding includes contributions from former ArianeGroup CEO André-Hubert Roussel. Founded in 2018, Pangea Aerospace initially aimed to develop Meso, a small rocket designed to deliver 400 kilograms to low-Earth orbit. The rocket was to be powered by a unique, in-house-developed methalox aerospike engine.

Twice the size … However, in early 2023, the company announced it had abandoned the development of Meso to focus on providing propulsion systems for rockets and in-orbit applications. Pangea is currently in the process of developing ARCOS, an aerospike engine designed for use aboard the booster and/or upper stage of a rocket. According to Pangea, the funding will be used to “accelerate its expansion in the European market,” aiming to grow its customer base. It will look to double its workforce and scale up its manufacturing, integration, and testing capabilities.

Relativity Space eyeing move to Texas. As he consolidates control over Relativity Space, new owner and chief executive Eric Schmidt is planning significant changes at the launch company, including a likely move to the Lone Star State, Ars reports. The company faces several major challenges as it seeks to bring the Terran R rocket to market, particularly in logistics. This is because Terran R is a large launch vehicle, too large to move across the country by highway.

Watching for Baytown … The company’s initial plan was to manufacture first stages at its massive factory in Long Beach, California, and ship them through the Panama Canal to a test site at the Stennis Space Center in southern Mississippi. From there, they would be moved by barge again to the launch site in Florida. But this was expensive and time-consuming. Two sources have indicated that Relativity Space will likely move a significant portion of its Terran R manufacturing to Baytown, Texas, which is near Houston. Such a location would provide water access on the right side of the Panama Canal. Relativity has not made a formal announcement.

Crew-10 launches to ISS. A Falcon 9 rocket launched four astronauts safely into orbit on Friday evening, marking the official beginning of the Crew-10 mission to the International Space Station. Friday’s launch came two days after an initial attempt was scrubbed on Wednesday evening, Ars reports. This was due to a hydraulic issue with the ground systems that handle the Falcon 9 rocket at Launch Complex 39A in Florida.

Smooth ride to orbit … There were no technical issues on Friday, and with clear skies, NASA astronauts Anne McClain and Nichole Ayers, Japanese astronaut Takuya Onishi, and Roscosmos cosmonaut Kirill Peskov rocketed smoothly into orbit. Although any crew launch into orbit is notable, this mission came with an added bit of importance as its success cleared the way for two NASA astronauts, Butch Wilmore and Suni Williams, to finally return home from space after a saga spanning nine months. They did so on Tuesday evening.

SpaceX pushes Falcon 9 booster reuse record. On March 12 a Falcon 9 rocket first stage made its third launch, lofting the SPHEREx and PUNCH missions into low-Earth orbit for NASA. Following the successful launch, the first stage landed near the launch site at Vandenberg Space Force Base in California. Now, this same stage could launch again on Thursday night from Vandenberg, carrying the NROL-57 mission for the US Space Force.

Rapid reuse is a thing … The launch is scheduled for 06:49 UTC, and if it takes place it would be just nine days and four hours since the SPHEREx mission. This would shatter the company’s previous booster turnaround, set in November, of a little more than 13 days. The fast turnaround was no doubt enabled by landing the booster back near the launch site, speeding the process of inspecting and refurbishing the rocket. It’s also impressive that the Space Force greenlit such a fast turnaround time for a national security payload.

And launch pad turnaround, too. SpaceX launched its latest batch of Starlink satellites from Cape Canaveral Space Force Station at sunrise Saturday morning. The mission marked a record-breaking turnaround for launch operations at Space Launch Complex 40, Spaceflight Now reports. The launch of 23 Starlink Version 2 Mini satellites came two days, eight hours, 59 minutes, and 40 seconds after the launch of the Starlink 12-21 mission. This beat SpaceX’s previous turnaround time at that pad by nearly six hours.

Ever pushing forward … Ars recently covered a string of issues with the Falcon 9 rocket, notably with its upper stage. The principal reason is that SpaceX continues to push the envelope even with its mature products like the Falcon 9 rocket, which is now nearly 15 years old. While we can take note of issues, it’s also worth celebrating the incredibly hard work that goes into pushing cadence and turnaround times. Moreover, success with the Falcon 9 rocket supports the notion that, one day, SpaceX will be able to reach a high cadence of operations with Starship.

The Jeff and the Donald. Over the past year, Amazon and Blue Origin founder Jeff Bezos has executed a sharp public reversal in his relationship with President Trump—whom he previously criticized as a “threat to democracy”—that has surprised even longtime associates. An article in the Financial Times explores this change and finds that it is likely due, at least in part, to Bezos’ interest in his space company. There are some spicy, and to my sense of things, accurate comments that explain why Bezos has sought to curry favor with Trump.

One longtime adviser cautions … “He cares most about Blue Origin. His chance of being the player he wants to become in space could be destroyed” if the world’s richest man (Elon Musk) and most powerful politician united against him. “The growth trajectory for the entire enterprise depends on the federal contract… otherwise Blue is dead in the water.” Another close associate says that any move by Trump to deprioritize lunar missions in favor of Musk’s aspirations to reach Mars would have a significant impact on the company’s viability and success. “It is what he has always dreamt of. Nothing will hurt Jeff financially—Blue is a money loser. It is more the opportunity to be involved.”

Next three launches

March 21: Falcon 9 | NROL-57 | Vandenberg Space Force Base, Calif. | 06:49 UTC

March 23: Spectrum | Demo flight | Andøya Rocket Range, Norway | 11:30 UTC

March 24: Falcon 9 | NROL-69 | Cape Canaveral Space Force Station, Florida | 17:42 UTC


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

Rocket Report: Falcon 9 may smash reuse record; Relativity roving to Texas? Read More »


Brains of parrots, unlike songbirds, use human-like vocal control

Due to past work, we’ve already identified the brain structure that controls the activity of the key vocal organ, the syrinx, located in the bird’s throat. The new study, done by Zetian Yang and Michael Long of New York University, managed to place fine electrodes into this area of the brain in both species and track the activity of neurons there while the birds were awake and going about normal activities. This allowed them to associate neural activity with any vocalizations made by the birds. For the budgerigars, they had an average of over 1,000 calls from each of the four birds carrying the implanted electrodes.

For the zebra finch, neural activity during song production showed a pattern that was based on timing; the same neurons tended to be most active at the same point in the song. You can think of this as a bit like a player piano, where timing serves as the central organizing principle that determines when different notes are played. “Different configurations [of neurons] are active at different moments, representing an evolving population ‘barcode,’” as Yang and Long describe this pattern.

That is not at all what was seen with the budgerigars. Here, instead, the researchers saw patterns where the same populations of neurons tended to be active whenever the bird was producing a similar sound. They broke the warbles down into parts that they characterized on a scale ranging from harmonic to noisy. Some groups of neurons tended to be more active whenever the warble was harmonic, while different groups tended to spike when it got noisy. Those observations also led them to identify a third population, which was active whenever the budgerigars produced a low-frequency sound.

In addition, Yang and Long analyzed the pitch of the vocalizations. Only about half of the neurons in the relevant region of the brain were linked to pitch. Within that half, however, small groups of neurons fired during the production of a relatively narrow range of pitches. Using the activity of as few as five individual neurons, the researchers could accurately predict the pitch of a vocalization at any given time.
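The study’s actual decoding method isn’t described here, so as a rough illustration only: predicting a continuous variable like pitch from a handful of pitch-tuned neurons can be sketched as a linear readout over firing rates. Everything below (the synthetic data, the tuning model, the least-squares fit) is an invented toy, not the researchers’ analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 vocalization frames, 5 neurons.
# Each neuron's firing rate is constructed to scale with pitch plus
# noise -- a toy version of the pitch-tuned groups described above.
n_frames, n_neurons = 200, 5
pitch = rng.uniform(500, 4000, n_frames)            # pitch per frame, in Hz
tuning = rng.normal(size=n_neurons)                 # per-neuron pitch sensitivity
rates = np.outer(pitch, tuning) + rng.normal(scale=100, size=(n_frames, n_neurons))

# Linear readout: least-squares fit from the 5 firing rates to pitch.
X = np.column_stack([rates, np.ones(n_frames)])     # add an intercept column
coef, *_ = np.linalg.lstsq(X, pitch, rcond=None)
pred = X @ coef

r = np.corrcoef(pred, pitch)[0, 1]
print(f"decoded-vs-true pitch correlation: {r:.2f}")
```

Because the toy rates are linear in pitch by construction, even this five-neuron readout recovers pitch almost perfectly; the notable finding in the real data is that so few biological neurons suffice.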



FCC to get Republican majority and plans to “delete” as many rules as possible

By contrast, then-President Joe Biden waited nine months to choose a Democratic nominee in 2021. His first nominee, Gigi Sohn, wasn’t confirmed despite Democrats having control of the Senate at the time. The Biden-era FCC didn’t gain a Democratic majority until Gomez was confirmed in September 2023.

Carr would have a 2-1 majority upon Starks’ departure, assuming there is no Senate vote on Trusty’s nomination before then. US law prevents either party from obtaining an FCC supermajority: “The maximum number of commissioners who may be members of the same political party shall be a number equal to the least number of commissioners which constitutes a majority of the full membership of the Commission,” the law says. With the commission’s five seats, a majority is three, so no more than three commissioners can belong to the same party.

Democratic leaders can be expected to recommend a replacement for Starks’ seat. The president nominates all FCC commissioners, but Trump has previously followed the tradition of using recommendations made by Democrats when nominating members from the opposing party.

The Senate sometimes pairs votes on nominations so that one Democrat and one Republican are added to the FCC at the same time. There’s no guarantee that Republicans will wait for a Democratic nominee.

“I think the Republicans will move ahead as quickly as possible with Trusty. While she could be paired with a Democrat, and in different times, would have been, I think in today’s climate, they are more likely to move ahead without a pair,” New Street Research Policy Advisor Blair Levin told Ars.

Schumer reportedly urged Starks to stay awhile

Starks would have been a possible candidate for FCC chair if Kamala Harris had won the presidency and if Rosenworcel decided not to serve a second term as chair.

Carr issued a statement praising Starks for “an impressive legacy of accomplishments in public service.” Gomez said that Starks’ “expertise on national security issues and his deep understanding of the FCC’s Enforcement Bureau have been instrumental in advancing the agency’s mission,” and that he “demonstrated unwavering commitment to protecting consumers and strengthening our communications networks.”

Starks’ departure has been anticipated since shortly after Trump’s election win. In December, Schumer reportedly urged Starks to stay at the FCC for a while to delay the Republicans gaining a majority.

There might be another Republican seat to fill sometime after Trusty’s nomination receives a Senate vote. Carr’s fellow Republican on the commission, Nathan Simington, “has also wanted to depart to take on different work,” a Bloomberg report said.



Google inks $32 billion deal to buy security firm Wiz even as DOJ seeks breakup

“While a tough regulatory climate in 2024 had hampered such large-scale deals, Wall Street is optimistic that a shift in antitrust policies under US President Donald Trump could reignite dealmaking momentum,” Reuters wrote today.

Google reportedly agreed to a $3.2 billion breakup fee that would be paid to Wiz if the deal collapses. A Financial Times report said the breakup fee is unusually large as it represents 10 percent of the total deal value, instead of the typical 2 or 3 percent. The large breakup fee “shows how technology companies are still bracing themselves for pushback from antitrust regulators, even under President Donald Trump and his new Federal Trade Commission chair Andrew Ferguson,” the article said.

Wiz co-founder and CEO Assaf Rappaport wrote today that although the plan is for Wiz to become part of Google Cloud, the companies both believe that “Wiz needs to remain a multicloud platform… We will still work closely with our great partners at AWS, Azure, Oracle, and across the entire industry.”

Google Cloud CEO Thomas Kurian wrote that Wiz’s platform would fill a gap in Google’s security offerings. Google products already “help customers detect and respond to attackers through both SaaS-based services and cybersecurity consulting,” but Wiz is different because it “connects to all major clouds and code environments to help prevent incidents from happening in the first place,” he wrote.

“Wiz’s solution rapidly scans the customer’s environment, constructing a comprehensive graph of code, cloud resources, services, and applications—along with the connections between them,” Kurian wrote. “It identifies potential attack paths, prioritizes the most critical risks based on their impact, and empowers enterprise developers to secure applications before deployment. It also helps security teams collaborate with developers to remediate risks in code or detect and block ongoing attacks.”
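The graph-and-attack-path idea Kurian describes can be sketched in miniature: model resources as nodes, access relationships as edges, and search for paths from an internet-exposed entry point to a sensitive asset. The node names, topology, and search below are entirely invented for illustration; Wiz’s real model covers code, identities, and configurations and is far richer.

```python
from collections import deque

# Toy cloud-resource graph: an edge means "can reach / has access to".
graph = {
    "internet":      ["load_balancer"],
    "load_balancer": ["web_vm"],
    "web_vm":        ["app_service"],
    "app_service":   ["db_admin_role"],
    "db_admin_role": ["customer_db"],
    "batch_vm":      ["customer_db"],   # touches the database, but isn't internet-reachable
}

def attack_paths(graph, source, target):
    """Enumerate simple paths from an exposed source to a sensitive target (BFS)."""
    paths, queue = [], deque([[source]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:          # skip revisits to avoid cycles
                queue.append(path + [nxt])
    return paths

for p in attack_paths(graph, "internet", "customer_db"):
    print(" -> ".join(p))
```

In this toy graph only one attack path reaches the database, and `batch_vm` never appears in it: a resource with database access but no route from the internet isn’t an attack path, which is roughly why prioritizing by reachability matters more than flagging every permission.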
