

Are standing desks good for you? The answer is getting clearer.


Whatever your office setup, the most important thing is to move.

Without question, inactivity is bad for us. Prolonged sitting is consistently linked to higher risks of cardiovascular disease and death. The obvious response to this frightful fate is to not sit—move. Even a few moments of exercise can have benefits, studies suggest. But in our modern times, sitting is hard to avoid, especially at the office. This has led to a range of strategies to get ourselves up, including the rise of standing desks. If you have to be tethered to a desk, at least you can do it while on your feet, the thinking goes.

However, studies on whether standing desks are beneficial have been sparse and sometimes inconclusive. Further, prolonged standing can have its own risks, and data on work-related sitting has also been mixed. While the final verdict on standing desks is still unclear, two studies out this year offer some of the most nuanced evidence yet about the potential benefits and risks of working on your feet.

Take a seat

For years, studies have pointed to standing desks improving markers for cardiovascular and metabolic health, such as lipid levels, insulin resistance, and arterial flow-mediated dilation (the ability of arteries to widen in response to increased blood flow). But it’s unclear how significant those improvements are to averting bad health outcomes, such as heart attacks. One 2018 analysis suggested the benefits might be minor.

And there are fair reasons to be skeptical about standing desks. For one, standing—like sitting—is not moving. If a lack of movement and exercise is the root problem, standing still wouldn’t be a solution.

Yet, while sitting and standing can arguably be combined into the single category of ‘stationary,’ some researchers have argued that not all sitting is the same. In a 2018 position paper published in the Journal of Occupational and Environmental Medicine, two health experts argued that the link between poor health and sitting could come down to the specific populations being examined and “the special contribution” of “sitting time at home, for example, the ‘couch potato effect.’”

The two researchers—emeritus professors David Rempel, formerly at the University of California, San Francisco, and Niklas Krause, formerly of UC Los Angeles—pointed to several studies looking specifically at occupational sitting time and poor health outcomes, which have arrived at mixed results. For instance, a 2013 analysis did not find a link between sitting at work and cardiovascular disease; it did suggest a link to mortality, but only among women. A 2015 study followed about 36,500 workers in Japan for an average of 10 years and found no link between mortality and sitting time among salaried workers, professionals, and people who worked at home businesses. There was, however, a link between mortality and sitting among people who worked in the farming, forestry, and fishing industries.

Still, despite some murkiness in the specifics, more recent studies continue to turn up a link between total prolonged sitting—wherever that sitting occurs—and poor health outcomes, particularly cardiovascular disease. This has kept up interest in standing desks in offices, where people don’t always have the luxury of frequent movement breaks. And this, in turn, has kept researchers on their toes to try to answer whether there is any benefit to standing desks.

One study published last month in the International Journal of Epidemiology offers a clearer picture of how standing desks may relate to cardiovascular health risks. The authors, an international team of researchers led by Matthew Ahmadi at the University of Sydney in Australia, found that standing desks don’t improve heart health—but they don’t harm it, either, whereas sitting desks do.

Mitigating risks

For the study, the researchers tracked the health data of a little over 83,000 people in the UK over an average of about seven years. During the study, participants wore a wrist-based accelerometer for at least four days, and the devices were calibrated to determine when the wearers were sitting, standing, walking, or running during waking hours. With that data, the researchers linked participants’ sitting, standing, and total stationary (combined sitting and standing) times to health outcomes in their medical records.

The researchers focused on two categories of health outcomes: cardiovascular events, covering coronary heart disease, heart failure, and stroke; and orthostatic circulatory disease events, including orthostatic hypotension (a drop in blood pressure upon standing or sitting up), varicose veins, chronic venous insufficiency (when leg veins don’t move blood back up to the heart effectively), and venous ulcers. The reasoning for the second category is that prolonged sitting and standing may each pose risks for developing circulatory diseases.

The researchers found that when participants’ total stationary time (sitting and standing) went over 12 hours per day, risk of orthostatic circulatory disease increased 22 percent per additional hour, while risk of cardiovascular disease went up 13 percent per hour.

For just sitting, risks increased every hour after 10 hours: for orthostatic circulatory disease, risk went up 26 percent every hour after 10 hours, and cardiovascular disease risk went up 15 percent. For standing, risk of orthostatic circulatory disease went up after just two hours, increasing 11 percent every 30 minutes after two hours of standing. But standing had no impact on cardiovascular disease at any time point.

“Contrary to sitting time, more time spent standing was not associated with a higher CVD [cardiovascular disease] risk. Overall, there was no association for higher or lower CVD risk throughout the range of standing duration,” the authors report.

On the other hand, keeping sitting time under 10 hours and standing time under two hours was linked to a weak protective effect against orthostatic circulatory disease: A day of nine hours of sitting and 1.5 hours of standing (for a total of 11.5 hours of stationary time) lowered the risk of orthostatic circulatory disease by a few percentage points, the study found.

In other words, as long as you can keep your total stationary time under 12 hours, you can use a little standing time to help keep your sitting time under 10 hours and avoid increasing both cardiovascular and orthostatic risks, according to the data.
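To make the thresholds concrete, here is a toy Python sketch that simply flags which of the study’s reported cutoffs a given day exceeds and echoes the associated per-hour figures. It is an illustration only; how the increases combine statistically is a detail of the study’s models, not something encoded here.

# Toy illustration of the thresholds reported above; not the study's actual model.
def flag_risks(sitting_hours, standing_hours):
    stationary = sitting_hours + standing_hours
    notes = []
    if stationary > 12:
        notes.append(f"{stationary - 12:.1f} h stationary beyond 12 h "
                     "(reported: +22%/h orthostatic, +13%/h cardiovascular)")
    if sitting_hours > 10:
        notes.append(f"{sitting_hours - 10:.1f} h sitting beyond 10 h "
                     "(reported: +26%/h orthostatic, +15%/h cardiovascular)")
    if standing_hours > 2:
        notes.append(f"{standing_hours - 2:.1f} h standing beyond 2 h "
                     "(reported: +11% per extra 30 min, orthostatic only)")
    return notes or ["below all reported risk thresholds"]

print(flag_risks(sitting_hours=9, standing_hours=1.5))   # the protective example from the study
print(flag_risks(sitting_hours=11, standing_hours=3))    # a long, mostly seated office day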

Consistent finding

It’s a very detailed formula for reducing the health risks of long days at the office, but is it set in stone? Probably not. For one thing, it’s a single study that needs to be replicated in other populations. The study also didn’t distinguish occupational from leisure standing and sitting time, let alone look at the use of standing desks specifically. And it based estimates of people’s sitting, standing, and total stationary time on as little as four days of activity monitoring, which may or may not have reflected their habits over the nearly seven-year average follow-up period.

Still, the study’s takeaway generally fits with a study published in January in JAMA Network Open. This study looked at the link between occupational sitting time, leisure physical activity, and death rates—both deaths from all causes and those specifically caused by cardiovascular disease. Researchers used a group of over 480,000 workers in Taiwan, who were followed for an average of nearly 13 years.

The workers who reported mostly sitting at work had a 16 percent increased risk of all-cause mortality and a 34 percent higher risk of dying from cardiovascular disease compared with workers who did not sit at work. The workers who reported alternating between sitting and standing, meanwhile, did not have an increased risk of all-cause or cardiovascular disease mortality. The findings held after adjusting for health factors and looking at subgroups, including by sex and age, among smokers and never-smokers, and among people with chronic conditions.

That said, being highly active in leisure time appeared to offset the mortality risks among those who mostly sit at work. At the highest reported leisure-time activity levels, participants who mostly sit at work had risks of all-cause mortality comparable to those of workers who alternated sitting and standing or who didn’t sit at work. Overall, the data suggested that keeping total stationary time as low as possible and alternating between sitting and standing at work to some extent can reduce risk.

The authors call for incorporating breaks in work settings and even specifically recommend allowing for standing and activity-permissive workstations.

The takeaway

While prolonged standing has its own risks, the use of standing desks at work can, to some extent, help lessen the risks of prolonged sitting. But, overall, it’s important to keep total stationary time as low as possible and exercise whenever possible.


Beth is Ars Technica’s Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.



IBM boosts the amount of computation you can get done on quantum hardware

By making small adjustments to the frequencies at which the qubits operate, it’s possible to avoid these problems. This can be done when the Heron chip is being calibrated, before it’s opened up for general use.

Separately, the company has done a rewrite of the software that controls the system during operations. “After learning from the community, seeing how to run larger circuits, [we were able to] almost better define what it should be and rewrite the whole stack towards that,” Gambetta said. The result is a dramatic speed-up. “Something that took 122 hours now is down to a couple of hours,” he told Ars.

Since people are paying for time on this hardware, that’s good for customers now. It could also pay off in the longer run: some errors occur randomly, so less time spent on a calculation can mean fewer errors.

Deeper computations

Despite all those improvements, errors are still likely during any significant calculations. While it continues to work toward developing error-corrected qubits, IBM is focusing on what it calls error mitigation, which it first detailed last year. As we described it then:

“The researchers turned to a method where they intentionally amplified and then measured the processor’s noise at different levels. These measurements are used to estimate a function that produces similar output to the actual measurements. That function can then have its noise set to zero to produce an estimate of what the processor would do without any noise at all.”
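The procedure described in that quote is what is commonly called zero-noise extrapolation. Here is a minimal Python sketch of the idea; the measurement values, noise scales, and choice of a quadratic fit are placeholders for illustration, not IBM’s implementation.

import numpy as np

# Expectation values of some observable, measured while the hardware noise is
# deliberately amplified by known factors (hypothetical numbers for illustration).
noise_scales = np.array([1.0, 1.5, 2.0, 3.0])
measured_values = np.array([0.71, 0.64, 0.58, 0.47])

# Fit a simple function (here a quadratic) to how the result degrades with noise...
coeffs = np.polyfit(noise_scales, measured_values, deg=2)

# ...then evaluate that function at zero noise to estimate the noiseless result.
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(f"Estimated noise-free expectation value: {zero_noise_estimate:.3f}")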

The problem here is that using the function is computationally difficult, and the difficulty increases with the qubit count. So, while it’s still easier to do error mitigation calculations than to simulate the quantum computer’s behavior on classical hardware, there’s still the risk of the process becoming computationally intractable. But IBM has taken the time to optimize that, too. “They’ve got algorithmic improvements, and the method that uses tensor methods [now] uses the GPU,” Gambetta told Ars. “So I think it’s a combination of both.”



What did the snowball Earth look like?

All of which raises questions about what the snowball Earth might have looked like in the continental interiors. A team of US-based geologists think they’ve found some glacial deposits in the form of what are called the Tavakaiv sandstones in Colorado. These sandstones are found along the Front Range of the Rockies, including areas just west of Colorado Springs. And, if the authors’ interpretations are correct, they formed underneath a massive sheet of glacial ice.

There are lots of ways to form sandstone deposits, and they can be difficult to date because they’re aggregates of the remains of much older rocks. But in this case, the Tavakaiv sandstone is interrupted by intrusions of dark-colored rock that contains quartz and large amounts of hematite, a form of iron oxide.

These intrusions tell us a remarkable number of things. For one, some process must have exerted enough force to drive material into small faults in the sandstone. Hematite only gets deposited under fairly specific conditions, which tells us a bit more. And, most critically, hematite can trap uranium and the lead it decays into, providing a way of dating when the deposits formed.
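For context, the dating step relies on the standard uranium-lead decay relation rather than anything specific to this study: if a hematite crystal has retained the lead produced by the uranium it trapped, its age follows from the measured daughter-to-parent ratio.

\[
t \;=\; \frac{1}{\lambda_{238}}\,\ln\!\left(1 + \frac{^{206}\mathrm{Pb}}{^{238}\mathrm{U}}\right),
\qquad
\lambda_{238} \;=\; \frac{\ln 2}{4.47\ \text{billion years}} \;\approx\; 1.55\times 10^{-10}\ \mathrm{yr^{-1}}
\]

A higher ratio of accumulated lead to remaining uranium therefore corresponds to an older formation age.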

Under the snowball

Depending on which site was being sampled, the hematite produced a range of dates, from as recent as 660 million years ago to as old as 700 million years. That means all of them were formed during what’s termed the Sturtian glaciation, which ran from 715 million to 660 million years ago. At the time, the core of what is now North America was in the equatorial region. So, the Tavakaiv sandstones can provide a window into what at least one continent experienced during the most severe global glaciation of the Cryogenian Period.



Firefly Aerospace rakes in more cash as competitors struggle for footing

More than just one thing

Firefly’s majority owner is the private equity firm AE Industrial Partners, and the Series D funding round was led by Michigan-based RPM Ventures.

“Few companies can say they’ve defined a new category in their industry—Firefly is one of those,” said Marc Weiser, a managing director at RPM Ventures. “They have captured their niche in the market as a full service provider for responsive space missions and have become the pinnacle of what a modern space and defense technology company looks like.”

This descriptor—a full service provider—is what differentiates Firefly from most other space companies. Firefly’s crosscutting work in small and medium launch vehicles, rocket engines, lunar landers, and in-space propulsion propels it into a club of wide-ranging commercial space companies that, arguably, only includes SpaceX, Blue Origin, and Rocket Lab.

NASA has awarded Firefly three task orders under the Commercial Lunar Payload Services (CLPS) program. Firefly will soon ship its first Blue Ghost lunar lander to Florida for final preparations to launch to the Moon and deliver 10 NASA-sponsored scientific instruments and tech demo experiments to the lunar surface. NASA has a contract with Firefly for a second Blue Ghost mission, plus an agreement for Firefly to transport a European data relay satellite to lunar orbit.

Firefly also boasts a healthy backlog of missions on its Alpha rocket. In June, Lockheed Martin announced a deal for as many as 25 Alpha launches through 2029. Two months later, L3Harris inked a contract with Firefly for up to 20 Alpha launches. Firefly has also signed Alpha launch contracts with NASA, the National Oceanic and Atmospheric Administration (NOAA), the Space Force, and the National Reconnaissance Office. One of these Alpha launches will deploy Firefly’s first orbital transfer vehicle, named Elytra, designed to host customer payloads and transport them to different orbits following separation from the launcher’s upper stage.

And there’s the Medium Launch Vehicle, a rocket Firefly and Northrop Grumman hope to launch as soon as 2026. But first, the companies will fly an MLV booster stage with seven kerosene-fueled Miranda engines on a new version of Northrop Grumman’s Antares rocket for cargo deliveries to the International Space Station. Northrop Grumman has retired the previous version of Antares after losing access to Russian rocket engines in the wake of Russia’s invasion of Ukraine.



Teen in critical condition with Canada’s first human case of H5 bird flu

A British Columbia teen who contracted Canada’s first known human case of H5 bird flu has deteriorated swiftly in recent days and is now in critical condition, health officials reported Tuesday.

The teen’s case was announced Saturday by provincial health officials, who noted that the teen had no obvious exposure to animals that could explain an infection with the highly pathogenic avian influenza. The teen tested positive for H5 bird flu at BC’s public health laboratory, and the result is currently being confirmed by the National Microbiology Laboratory in Winnipeg.

The teen’s case reportedly began with conjunctivitis, echoing the H5N1 human case reports in the US. The case then progressed to fever and cough, and the teen was admitted to BC Children’s Hospital late Friday. The teen’s condition varied throughout the weekend but had taken a turn for the worse by Tuesday, according to BC provincial health officer Bonnie Henry.

“This was a healthy teenager prior to this—so, no underlying conditions—and it just reminds us that in young people, this is a virus that can progress and cause quite severe illness,” Bonnie Henry said in a media briefing streamed by Global News on Tuesday.

Health officials in the province have opened an investigation to understand the source of the infection. Around three dozen contacts of the teen have been tested, and all have been negative. “The source of exposure is very likely to be an animal or bird and is being investigated by BC’s chief veterinarian and public health teams,” health officials noted in the announcement over the weekend. The teen was reportedly exposed to pets, including dogs, cats, and reptiles, but testing on the animals has so far come back negative.



This elephant figured out how to use a hose to shower

And the hose-showering behavior was “lateralized,” that is, Mary preferred targeting her left body side more than her right. (Yes, Mary is a “left-trunker.”) Mary even adapted her showering behavior depending on the diameter of the hose: she preferred showering with a 24-mm hose over a 13-mm hose and preferred to use her trunk to shower rather than a 32-mm hose.

It’s not known where Mary learned to use a hose, but the authors suggest that elephants might have an intuitive understanding of how hoses work because of the similarity to their trunks. “Bathing and spraying themselves with water, mud, or dust are very common behaviors in elephants and important for body temperature regulation as well as skin care,” they wrote. “Mary’s behavior fits with other instances of tool use in elephants related to body care.”

Perhaps even more intriguing was Anchali’s behavior. While Anchali did not use the hose to shower, she nonetheless exhibited complex behavior in manipulating the hose: lifting it, kinking the hose, regrasping the kink, and compressing the kink. The latter, in particular, often resulted in reduced water flow while Mary was showering. Anchali eventually figured out how to further disrupt the water flow by placing her trunk on the hose and lowering her body onto it. Control experiments were inconclusive about whether Anchali was deliberately sabotaging Mary’s shower; the two elephants had been at odds and behaved aggressively toward each other at shower times. But similar cognitively complex behavior has been observed in elephants.

“When Anchali came up with a second behavior that disrupted water flow to Mary, I became pretty convinced that she is trying to sabotage Mary,” Brecht said. “Do elephants play tricks on each other in the wild? When I saw Anchali’s kink and clamp for the first time, I broke out in laughter. So, I wonder, does Anchali also think this is funny, or is she just being mean?”

Current Biology, 2024. DOI: 10.1016/j.cub.2024.10.017  (About DOIs).



For the second time this year, NASA’s JPL center cuts its workforce

“This reduction is spread across essentially all areas of the Lab including our technical, project, business, and support areas,” Leshin wrote. “We have taken seriously the need to re-size our workforce, whether direct-funded (project) or funded on overhead (burden). With lower budgets and based on the forecasted work ahead, we had to tighten our belts across the board, and you will see that reflected in the layoff impacts.”

This year’s employee cuts came after NASA decided to consider alternatives to a multibillion-dollar plan to return samples from Mars to Earth, which had been led by JPL. In September 2023 an independent review team found that the JPL plan was unworkable and would cost $8 billion to $11 billion to be successful.

A changing environment

While NASA considers alternatives from other field centers, as well as private companies such as SpaceX and Rocket Lab, the budget for Mars Sample Return was slashed from nearly $1 billion for this fiscal year to less than $300 million. Additionally, there is no guarantee that JPL will be given leadership of a revamped Mars Sample Return mission.

The staffing cuts reflect the fact that after the recent launch of the $5 billion Europa Clipper mission, JPL is not managing another flagship deep-space mission at present. Another sizable mission, the NASA-ISRO Synthetic Aperture Radar, is almost ready for a launch next year from India. The California laboratory has smaller projects, but nothing on the order of a flagship mission to command a large budget and support a very large staff.

JPL has a long and storied history, including the management of most of NASA’s highest-profile planetary probes, such as the Voyagers, the Mars landers, and the Galileo and Cassini spacecraft. However, in recent years, other spaceflight centers, such as the Johns Hopkins Applied Physics Laboratory, and private companies such as Lockheed have competed for projects and delivered results.

The job of Leshin and others at NASA is to ensure that JPL has a bright future in a changing world of planetary exploration. This week’s cuts will ensure such a future, Leshin wrote, adding: “We are an incredibly strong organization—our dazzling history, current achievements, and relentless commitment to exploration and discovery position us well for the future.”



There are some things the Crew-8 astronauts aren’t ready to talk about


“I did not say I was uncomfortable talking about it. I said we’re not going to talk about it.”

NASA astronaut Michael Barratt works with a spacesuit inside the Quest airlock of the International Space Station on May 31. Credit: NASA

The astronauts who came home from the International Space Station last month experienced some drama on the high frontier, and some of it accompanied them back to Earth.

In orbit, the astronauts aborted two spacewalks, both under unusual circumstances. Then, on October 25, one of the astronauts was hospitalized due to what NASA called an unspecified “medical issue” after splashdown aboard a SpaceX Crew Dragon capsule that concluded the 235-day mission. After an overnight stay in a hospital in Florida, NASA said the astronaut was released “in good health” and returned to their home base in Houston to resume normal post-flight activities.

The space agency did not identify the astronaut or any details about their condition, citing medical privacy concerns. The three NASA astronauts on the Dragon spacecraft included commander Matthew Dominick, pilot Michael Barratt, and mission specialist Jeanette Epps. Russian cosmonaut Alexander Grebenkin accompanied the three NASA crew members. Russia’s space agency confirmed he was not hospitalized after returning to Earth.

Dominick, Barratt, and Epps answered media questions in a post-flight press conference Friday, but they did not offer more information on the medical issue or say who experienced it. NASA initially sent all four crew members to the hospital in Pensacola, Florida, for evaluation, but Grebenkin and two of the NASA astronauts were quickly released and cleared to return to Houston. One astronaut remained behind until the next day.

“Spaceflight is still something we don’t fully understand,” said Barratt, a medical doctor and flight surgeon. “We’re finding things that we don’t expect sometimes. This was one of those times, and we’re still piecing things together on this, and so to maintain medical privacy and to let our processes go forward in an orderly manner, this is all we’re going to say about that event at this time.”

NASA typically makes astronaut health data available to outside researchers, who regularly publish papers while withholding identifying information about crew members. NASA officials often tout gaining knowledge about the human body’s response to spaceflight as one of the main purposes of the International Space Station. The agency is subject to federal laws, including the Health Insurance Portability and Accountability Act (HIPAA) of 1996, restricting the release of private medical information.

“I did not say I was uncomfortable talking about it,” Barratt said. “I said we’re not going to talk about it. I’m a medical doctor. Space medicine is my passion … and how we adapt, how we experience human spaceflight is something that we all take very seriously.”

Maybe some day

Barratt said NASA will release more information about the astronaut’s post-flight medical issue “in the fullness of time.” This was Barratt’s third trip to space and the first spaceflight for Dominick and Epps.

One of the most famous incidents involving hospitalized astronauts was in 1975, before the passage of the HIPAA medical privacy law, when NASA astronauts Thomas Stafford, Deke Slayton, and Vance Brand stayed at a military hospital in Hawaii for nearly two weeks after inhaling toxic propellant fumes that accidentally entered their spacecraft’s internal cabin as it descended under parachutes. They were returning to Earth at the end of the Apollo-Soyuz mission, in which they docked their Apollo command module to a Soviet Soyuz spacecraft in orbit.

NASA’s view—and perhaps the public’s, too—of medical privacy has changed in the nearly 50 years since. On that occasion, NASA disclosed that the astronauts suffered from lung irritation, and officials said Brand briefly passed out from the fumes after splashdown, remaining unconscious until his crewmates fitted an oxygen mask tightly over his face. NASA and the military also made doctors available to answer media questions about their condition.

The medical concern after splashdown last month was not the only part of the Crew-8 mission that remains shrouded in mystery. Dominick and NASA astronaut Tracy Dyson were supposed to go outside the International Space Station for a spacewalk June 13, but NASA called off the excursion, citing a “spacesuit discomfort issue.” NASA replaced Dominick with Barratt and rescheduled the spacewalk for June 24 to retrieve a faulty electronics box and collect microbial samples from the exterior of the space station. But that excursion ended after just 31 minutes, when Dyson reported a water leak in the service and cooling umbilical unit of her spacesuit.

While Barratt discussed the water leak in some detail Friday, Dominick declined to answer a question from Ars regarding the suit discomfort issue. “We’re still reviewing and trying to figure all the details,” he said.

Aging suits

Regarding the water leak, Barratt said he and Dyson noticed her suit had a “spewing umbilical, which was quite dramatic, actually.” The decision to abandon the spacewalk was a “no-brainer,” he said.

“It was not a trivial leak, and we’ve got footage,” Barratt said. “Anybody who was watching NASA TV at the time could see there was basically a snowstorm, a blizzard, spewing from the airlock because we already had the hatch open. So we were seeing flakes of ice in the airlock, and Tracy was seeing a lot of them on her helmet, on her gloves, and whatnot. Dramatic is the right word, to be real honest.”

Dyson, who came back to Earth in September on a Russian Soyuz spacecraft, reconnected the leaking umbilical with her gloves and helmet covered with ice, with restricted vision. “Tracy’s actions were nowhere short of heroic,” Barratt said.

Once the leak stabilized, the astronauts closed the hatch and began repressurizing the airlock.

“Getting the airlock closed was kind of me grabbing her legs and using her as an end effector to lever that thing closed, and she just made it happen,” Barratt said. “So, yeah, there was this drama. Everything worked out fine. Again, normal processes and procedures saved our bacon.”

Barratt said the leak wasn’t caused by any procedural error as the astronauts prepared their suits for the spacewalk.

“It was definitely a hardware issue,” he said. “There was a little poppet valve on the interface that didn’t quite seat, so really, the question became why didn’t that seat? We solved that problem by changing out the whole umbilical.”

By then, NASA’s attention on the space station had turned to other tasks, such as experiments, the arrival of a new cargo ship, and testing of Boeing’s Starliner crew capsule docked at the complex, before it ultimately departed and left its crew behind. The spacewalk wasn’t urgent, so it had to wait. NASA now plans to attempt the spacewalk again as soon as January with a different set of astronauts.

Barratt thinks the spacesuits on the space station are good to go for the next spacewalk. However, the suits are decades old, and their original designs date back more than 40 years, when NASA developed the units for use on the space shuttle. Efforts to develop a replacement suit for use in low-Earth orbit have stalled. In June, Collins Aerospace dropped out of a NASA contract to build new spacesuits for servicing the International Space Station and future orbiting research outposts.

“None of our spacesuits are spring chickens, so we will expect to see some hardware issues with repeated use and not really upgrading,” Barratt said.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Air quality problems spur $200 million in funds to cut pollution at ports


Diesel equipment will be replaced with hydrogen- or electric-powered gear.

Raquel Garcia has been fighting for years to clean up the air in her neighborhood southwest of downtown Detroit.

Living a little over a mile from the Ambassador Bridge, which thousands of freight trucks cross every day en route to the Port of Detroit, Garcia said she and her neighbors are frequently cleaning soot off their homes.

“You can literally write your name in it,” she said. “My house is completely covered.”

Her neighborhood is part of Wayne County, which is home to heavy industry, including steel plants and major car manufacturers, and suffers from some of the worst air quality in Michigan. In its 2024 State of the Air report, the American Lung Association named Wayne County one of the “worst places to live” in terms of annual exposure to fine particulate matter pollution, or PM2.5.

But Detroit, and several other Midwest cities with major shipping ports, could soon see their air quality improve as port authorities receive hundreds of millions of dollars to replace diesel equipment with cleaner technologies like solar power and electric vehicles.

Last week, the Biden administration announced $3 billion in new grants from the US Environmental Protection Agency’s Clean Ports program, which aims to slash carbon emissions and reduce air pollution at US shipping ports. More than $200 million of that funding will go to four Midwestern states that host ports along the Great Lakes: Michigan, Illinois, Ohio, and Indiana.

The money, which comes from the Inflation Reduction Act, will not only be used to replace diesel-powered equipment and vehicles, but also to install clean energy systems and charging stations, take inventory of annual port emissions, and set plans for reducing them. It will also fund a feasibility study for establishing a green hydrogen fuel hub along the Great Lakes.

The EPA estimates that those changes will, nationwide, reduce carbon pollution in the first 10 years by more than 3 million metric tons, roughly the equivalent of taking 600,000 gasoline-powered cars off the road. The agency also projects reduced emissions of nitrogen oxides and PM2.5—both of which can cause serious, long-term health complications—by about 10,000 metric tons and about 180 metric tons, respectively, during that same time period.

“Our nation’s ports are critical to creating opportunity here in America, offering good-paying jobs, moving goods, and powering our economy,” EPA Administrator Michael Regan said in the agency’s press release announcing the funds. “Delivering cleaner technologies and resources to US ports will slash harmful air and climate pollution while protecting people who work in and live nearby ports communities.”

Garcia, who runs the community advocacy nonprofit Southwest Detroit Environmental Vision, said she’s “really excited” to see the Port of Detroit getting those funds, even though it’s just a small part of what’s needed to clean up the city’s air pollution.

“We care about the air,” she said. “There’s a lot of kids in the neighborhood where I live.”

Jumpstarting the transition to cleaner technology

Nationwide, port authorities in 27 states and territories tapped the Clean Ports funding, which they’ll use to buy more than 1,500 units of cargo-handling equipment, such as forklifts and cranes, 1,000 heavy-duty trucks, 10 locomotives, and 20 seafaring vessels, all of which will be powered by electricity or green hydrogen, which doesn’t emit CO2 when burned.

In the Midwest, the Illinois Environmental Protection Agency and the Cleveland-Cuyahoga County Port Authority in Ohio were awarded about $95 million each from the program, the Detroit-Wayne County Port Authority in Michigan was awarded $25 million, and the Ports of Indiana will receive $500,000.

Mark Schrupp, executive director of the Detroit-Wayne County Port Authority, said the funding for his agency will be used to help port operators at three terminals purchase new electric forklifts, cranes, and boat motors, among other zero-emission equipment. The money will also pay for a new solar array that will reduce energy consumption for port facilities, as well as 11 new electric vehicle charging stations.

“This money is helping those [port] businesses make the investment in this clean technology, which otherwise is sometimes five or six times the cost of a diesel-powered equipment,” he said, noting that the costs of clean technologies are expected to fall significantly in the coming years as manufacturers scale up production. “It also exposes them to the potential savings over time—full maintenance costs and other things that come from having the dirtier technology in place.”

Schrupp said that the new equipment will slash the Detroit-Wayne County Port Authority’s overall carbon emissions by more than 8,600 metric tons every year, roughly a 30 percent reduction.

Carly Beck, senior manager of planning, environment and information systems for the Cleveland-Cuyahoga County Port Authority, said its new equipment will reduce the Port of Cleveland’s annual carbon emissions by roughly 1,000 metric tons, or about 40 percent of the emissions tied to the port’s operations. The funding will also pay for two electric tug boats and the installation of solar panels and battery storage on the port’s largest warehouse, she added.

In 2022, Beck said, the Port of Cleveland took an emissions inventory, which found that cargo-handling equipment, building energy use, and idling ships were the port’s biggest sources of carbon emissions. Docked ships would run diesel generators for power as they unloaded, she said, but with the new infrastructure, the cargo-handling equipment and idling ships can draw power from a 2-megawatt solar power system with battery storage.

“We’re essentially creating a microgrid at the port,” she said.

Improving the air for disadvantaged communities

The Clean Ports funding will also be a boon for people like Garcia, who live near a US shipping port.

Shipping ports are notorious for their diesel pollution, which research has shown disproportionately affects poor communities of color. And most, if not all, of the census tracts surrounding the Midwest ports are deemed “disadvantaged communities” by the federal government. The EPA uses a number of factors, including income level and exposure to environmental harms, to determine whether a community is “disadvantaged.”

About 10,000 trucks pass through the Port of Detroit every day, Schrupp said, which helps to explain why residents of Southwest Detroit and the neighboring cities of Ecorse and River Rouge, which sit adjacent to Detroit ports, breathe the state’s dirtiest air.

“We have about 50,000 residents within a few miles of the port, so those communities will definitely benefit,” he said. “This is a very industrialized area.”

Burning diesel or any other fossil fuel produces nitrogen oxides and PM2.5, and research has shown that prolonged exposure to high levels of those pollutants can lead to serious health complications, including lung disease and premature death. The Detroit-Wayne County Port Authority estimates that the new port equipment will cut nearly 9 metric tons of PM2.5 emissions and about 120 metric tons of nitrogen oxide emissions each year.

Garcia said she’s also excited that some of the Detroit grants will be used to establish workforce training programs, which will show people how to use the new technologies and showcase career opportunities at the ports. Her area is gentrifying quickly, Garcia said, so it’s heartening to see the city and port authority taking steps to provide local employment opportunities.

Beck said that the Port of Cleveland is also surrounded by a lot of heavy industry and that the census tracts directly adjacent to the port are all deemed “disadvantaged” by federal standards.

“We’re trying to be good neighbors and play our part,” she said, “to make it a more pleasant environment.”

Kristoffer Tigue is a staff writer for Inside Climate News, covering climate issues in the Midwest. He previously wrote the twice-weekly newsletter Today’s Climate and helped lead ICN’s national coverage on environmental justice. His work has been published in Reuters, Scientific American, Mother Jones, HuffPost, and many more. Tigue holds a master’s degree in journalism from the Missouri School of Journalism.

This story originally appeared on Inside Climate News.




Russia: Fine, I guess we should have a Grasshopper rocket project, too

Like a lot of competitors in the global launch industry, Russia for a long time dismissed the prospects of a reusable first stage for a rocket.

As late as 2016, an official with the Russian agency that develops strategy for the country’s main space corporation, Roscosmos, concluded, “The economic feasibility of reusable launch systems is not obvious.” Russian officials were not alone in dismissing the landing prospects of SpaceX’s Falcon 9 rocket. Throughout the 2010s, competitors including space agencies in Europe and Japan, as well as US-based United Launch Alliance, all decided to develop expendable rockets.

However, by 2017, when SpaceX re-flew a Falcon 9 rocket for the first time, the writing was on the wall. “This is a very important step, we sincerely congratulate our colleague on this achievement,” then-Roscosmos CEO Igor Komarov said at the time. He even spoke of developing reusable components, such as rocket engines capable of multiple firings.

A Russian Grasshopper

That was more than seven years ago, however, and not much has happened in Russia since then to foster the development of a reusable rocket vehicle. Yes, Roscosmos unveiled plans for the “Amur” rocket in 2020, which was intended to have a reusable first stage, methane-fueled engines, and the ability to land like the Falcon 9. But its debut has slipped year after year—originally intended to fly in 2026, its first launch is now expected no earlier than 2030.

Now, however, there is some interesting news from Moscow about plans to develop a prototype vehicle to test the ability to land the Amur rocket’s first stage vertically.

According to the state-run news agency, TASS, construction of this test vehicle will enable the space corporation to solve key challenges. “Next year preparation of an experimental stage of the (Amur) rocket, which everyone is calling ‘Grasshopper,’ will begin,” said Igor Pshenichnikov, the Roscosmos deputy director of the department of future programs. The Russian news article was translated for Ars by Rob Mitchell.



How a stubborn computer scientist accidentally launched the deep learning boom


“You’ve taken this idea way too far,” a mentor told Prof. Fei-Fei Li.

Credit: Aurich Lawson | Getty Images


During my first semester as a computer science graduate student at Princeton, I took COS 402: Artificial Intelligence. Toward the end of the semester, there was a lecture about neural networks. This was in the fall of 2008, and I got the distinct impression—both from that lecture and the textbook—that neural networks had become a backwater.

Neural networks had delivered some impressive results in the late 1980s and early 1990s. But then progress stalled. By 2008, many researchers had moved on to mathematically elegant approaches such as support vector machines.

I didn’t know it at the time, but a team at Princeton—in the same computer science building where I was attending lectures—was working on a project that would upend the conventional wisdom and demonstrate the power of neural networks. That team, led by Prof. Fei-Fei Li, wasn’t working on a better version of neural networks. They were hardly thinking about neural networks at all.

Rather, they were creating a new image dataset that would be far larger than any that had come before: 14 million images, each labeled with one of nearly 22,000 categories.

Li tells the story of ImageNet in her recent memoir, The Worlds I See. As she worked on the project, she faced plenty of skepticism from friends and colleagues.

“I think you’ve taken this idea way too far,” a mentor told her a few months into the project in 2007. “The trick is to grow with your field. Not to leap so far ahead of it.”

It wasn’t just that building such a large dataset was a massive logistical challenge. People doubted that the machine learning algorithms of the day would benefit from such a vast collection of images.

“Pre-ImageNet, people did not believe in data,” Li said in a September interview at the Computer History Museum. “Everyone was working on completely different paradigms in AI with a tiny bit of data.”

Ignoring negative feedback, Li pursued the project for more than two years. It strained her research budget and the patience of her graduate students. When she took a new job at Stanford in 2009, she took several of those students—and the ImageNet project—with her to California.

ImageNet received little attention for the first couple of years after its release in 2009. But in 2012, a team from the University of Toronto trained a neural network on the ImageNet dataset, achieving unprecedented performance in image recognition. That groundbreaking AI model, dubbed AlexNet after lead author Alex Krizhevsky, kicked off the deep learning boom that has continued to the present day.

AlexNet would not have succeeded without the ImageNet dataset. AlexNet also would not have been possible without a platform called CUDA, which allowed Nvidia’s graphics processing units (GPUs) to be used in non-graphics applications. Many people were skeptical when Nvidia announced CUDA in 2006.

So the AI boom of the last 12 years was made possible by three visionaries who pursued unorthodox ideas in the face of widespread criticism. One was Geoffrey Hinton, a University of Toronto computer scientist who spent decades promoting neural networks despite near-universal skepticism. The second was Jensen Huang, the CEO of Nvidia, who recognized early that GPUs could be useful for more than just graphics.

The third was Fei-Fei Li. She created an image dataset that seemed ludicrously large to most of her colleagues. But it turned out to be essential for demonstrating the potential of neural networks trained on GPUs.

Geoffrey Hinton

A neural network is a network of thousands, millions, or even billions of neurons. Each neuron is a mathematical function that produces an output based on a weighted average of its inputs.

Suppose you want to create a network that can identify handwritten decimal digits, such as a scrawled number two. Such a network would take in an intensity value for each pixel in an image and output a probability distribution over the ten possible digits—0, 1, 2, and so forth.

To train such a network, you first initialize it with random weights. You then run it on a sequence of example images. For each image, you train the network by strengthening the connections that push the network toward the right answer (in this case, a high-probability value for the “2” output) and weakening connections that push toward a wrong answer (a low probability for “2” and high probabilities for other digits). If trained on enough example images, the model should start to predict a high probability for “2” when shown a two—and not otherwise.
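As a concrete (and heavily simplified) illustration of that loop, here is a single-layer version in Python. The 28-by-28 input size, the learning rate, and the softmax output are assumptions made for the sketch, not details from the article.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(784, 10))  # random initial weights: 28x28 pixels -> 10 digits
b = np.zeros(10)

def predict(pixels):
    # Weighted sums of the pixel intensities, squashed into a probability distribution (softmax).
    logits = pixels @ W + b
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def train_step(pixels, label, lr=0.1):
    global W, b
    probs = predict(pixels)
    target = np.zeros(10)
    target[label] = 1.0                # e.g., label=2 means "all probability on the '2' output"
    error = probs - target             # positive where the network is overconfident in a wrong digit
    W -= lr * np.outer(pixels, error)  # strengthen connections toward the right answer, weaken the rest
    b -= lr * error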

In the late 1950s, scientists started to experiment with basic networks that had a single layer of neurons. However, their initial enthusiasm cooled as they realized that such simple networks lacked the expressive power required for complex computations.

Deeper networks—those with multiple layers—had the potential to be more versatile. But in the 1960s, no one knew how to train them efficiently. This was because changing a parameter somewhere in the middle of a multi-layer network could have complex and unpredictable effects on the output.

So by the time Hinton began his career in the 1970s, neural networks had fallen out of favor. Hinton wanted to study them, but he struggled to find an academic home in which to do so. Between 1976 and 1986, Hinton spent time at four different research institutions: Sussex University, the University of California San Diego (UCSD), a branch of the UK Medical Research Council, and finally Carnegie Mellon, where he became a professor in 1982.

Geoffrey Hinton speaking in Toronto in June. Credit: Photo by Mert Alper Dervis/Anadolu via Getty Images

In a landmark 1986 paper, Hinton teamed up with two of his former colleagues at UCSD, David Rumelhart and Ronald Williams, to describe a technique called backpropagation for efficiently training deep neural networks.

Their idea was to start with the final layer of the network and work backward. For each connection in the final layer, the algorithm computes a gradient—a mathematical estimate of whether increasing the strength of that connection would push the network toward the right answer. Based on these gradients, the algorithm adjusts each parameter in the model’s final layer.

The algorithm then propagates these gradients backward to the second-to-last layer. A key innovation here is a formula—based on the chain rule from high school calculus—for computing the gradients in one layer based on gradients in the following layer. Using these new gradients, the algorithm updates each parameter in the second-to-last layer of the model. The gradients then get propagated backward to the third-to-last layer, and the whole process repeats once again.

The algorithm only makes small changes to the model in each round of training. But as the process is repeated over thousands, millions, billions, or even trillions of training examples, the model gradually becomes more accurate.
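Here is a minimal sketch of that backward pass for a tiny two-layer network, in plain Python/NumPy; the layer sizes and the ReLU hidden layer are illustrative choices rather than anything specified in the 1986 paper.

import numpy as np

rng = np.random.default_rng(0)
# A tiny two-layer network: 784 pixel inputs -> 32 hidden neurons -> 10 digit outputs.
W1, b1 = rng.normal(scale=0.01, size=(784, 32)), np.zeros(32)
W2, b2 = rng.normal(scale=0.01, size=(32, 10)), np.zeros(10)

def train_step(x, label, lr=0.1):
    global W1, b1, W2, b2
    # Forward pass.
    h = np.maximum(0, x @ W1 + b1)                  # hidden layer (ReLU)
    logits = h @ W2 + b2
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Backward pass: start at the final layer...
    target = np.zeros(10)
    target[label] = 1.0
    d_logits = probs - target                       # gradient at the output
    dW2, db2 = np.outer(h, d_logits), d_logits

    # ...then use the chain rule to carry gradients back to the earlier layer.
    d_h = W2 @ d_logits
    d_h[h <= 0] = 0                                 # ReLU only passes gradient where it was active
    dW1, db1 = np.outer(x, d_h), d_h

    # Small updates each round; accuracy improves over many examples.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2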

Hinton and his colleagues weren’t the first to discover the basic idea of backpropagation. But their paper popularized the method. As people realized it was now possible to train deeper networks, it triggered a new wave of enthusiasm for neural networks.

Hinton moved to the University of Toronto in 1987 and began attracting young researchers who wanted to study neural networks. One of the first was the French computer scientist Yann LeCun, who did a year-long postdoc with Hinton before moving to Bell Labs in 1988.

Hinton’s backpropagation algorithm allowed LeCun to train models deep enough to perform well on real-world tasks like handwriting recognition. By the mid-1990s, LeCun’s technology was working so well that banks started to use it for processing checks.

“At one point, LeCun’s creation read more than 10 percent of all checks deposited in the United States,” wrote Cade Metz in his 2022 book Genius Makers.

But when LeCun and other researchers tried to apply neural networks to larger and more complex images, it didn’t go well. Neural networks once again fell out of fashion, and some researchers who had focused on neural networks moved on to other projects.

Hinton never stopped believing that neural networks could outperform other machine learning methods. But it would be many years before he’d have access to enough data and computing power to prove his case.

Jensen Huang

Jensen Huang speaking in Denmark in October. Credit: Photo by MADS CLAUS RASMUSSEN/Ritzau Scanpix/AFP via Getty Images

The brain of every personal computer is a central processing unit (CPU). These chips are designed to perform calculations in order, one step at a time. This works fine for conventional software like Windows and Office. But some video games require so many calculations that they strain the capabilities of CPUs. This is especially true of games like Quake, Call of Duty, and Grand Theft Auto, which render three-dimensional worlds many times per second.

So gamers rely on GPUs to accelerate performance. Inside a GPU are many execution units—essentially tiny CPUs—packaged together on a single chip. During gameplay, different execution units draw different areas of the screen. This parallelism enables better image quality and higher frame rates than would be possible with a CPU alone.

Nvidia invented the GPU in 1999 and has dominated the market ever since. By the mid-2000s, Nvidia CEO Jensen Huang suspected that the massive computing power inside a GPU would be useful for applications beyond gaming. He hoped scientists could use it for compute-intensive tasks like weather simulation or oil exploration.

So in 2006, Nvidia announced the CUDA platform. CUDA allows programmers to write “kernels,” short programs designed to run on a single execution unit. Kernels allow a big computing task to be split up into bite-sized chunks that can be processed in parallel. This allows certain kinds of calculations to be completed far faster than with a CPU alone.
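To give a flavor of what a kernel looks like, here is a sketch that uses Numba’s Python bindings for CUDA rather than CUDA C, purely for brevity; the element-wise operation and launch parameters are made up for illustration.

import numpy as np
from numba import cuda  # requires an Nvidia GPU and the CUDA toolkit

@cuda.jit
def saxpy_kernel(a, x, y, out):
    # Each GPU thread handles one array element -- one bite-sized chunk of the job.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

# Split the million elements across blocks of 256 threads and run them in parallel.
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy_kernel[blocks, threads_per_block](np.float32(2.0), x, y, out)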

But there was little interest in CUDA when it was first introduced, wrote Steven Witt in The New Yorker last year:

When CUDA was released, in late 2006, Wall Street reacted with dismay. Huang was bringing supercomputing to the masses, but the masses had shown no indication that they wanted such a thing.

“They were spending a fortune on this new chip architecture,” Ben Gilbert, the co-host of “Acquired,” a popular Silicon Valley podcast, said. “They were spending many billions targeting an obscure corner of academic and scientific computing, which was not a large market at the time—certainly less than the billions they were pouring in.”

Huang argued that the simple existence of CUDA would enlarge the supercomputing sector. This view was not widely held, and by the end of 2008, Nvidia’s stock price had declined by seventy percent…

Downloads of CUDA hit a peak in 2009, then declined for three years. Board members worried that Nvidia’s depressed stock price would make it a target for corporate raiders.

Huang wasn’t specifically thinking about AI or neural networks when he created the CUDA platform. But it turned out that Hinton’s backpropagation algorithm could easily be split up into bite-sized chunks. So training neural networks turned out to be a killer app for CUDA.

According to Witt, Hinton was quick to recognize the potential of CUDA:

In 2009, Hinton’s research group used Nvidia’s CUDA platform to train a neural network to recognize human speech. He was surprised by the quality of the results, which he presented at a conference later that year. He then reached out to Nvidia. “I sent an e-mail saying, ‘Look, I just told a thousand machine-learning researchers they should go and buy Nvidia cards. Can you send me a free one?’ ” Hinton told me. “They said no.”

Despite the snub, Hinton and his graduate students, Alex Krizhevsky and Ilya Sutskever, obtained a pair of Nvidia GTX 580 GPUs for the AlexNet project. Each GPU had 512 execution units, allowing Krizhevsky and Sutskever to train a neural network hundreds of times faster than would be possible with a CPU. This speed allowed them to train a larger model—and to train it on many more training images. And they would need all that extra computing power to tackle the massive ImageNet dataset.

Fei-Fei Li

Fei-Fei Li at the SXSW conference in 2018. Credit: Photo by Hubert Vestil/Getty Images for SXSW

Fei-Fei Li wasn’t thinking about either neural networks or GPUs as she began a new job as a computer science professor at Princeton in January of 2007. While earning her PhD at Caltech, she had built a dataset called Caltech 101 that had 9,000 images across 101 categories.

That experience had taught her that computer vision algorithms tended to perform better with larger and more diverse training datasets. Not only had Li found her own algorithms performed better when trained on Caltech 101, but other researchers also started training their models using Li’s dataset and comparing their performance to one another. This turned Caltech 101 into a benchmark for the field of computer vision.

So when she got to Princeton, Li decided to go much bigger. She became obsessed with an estimate by vision scientist Irving Biederman that the average person recognizes roughly 30,000 different kinds of objects. Li started to wonder if it would be possible to build a truly comprehensive image dataset—one that included every kind of object people commonly encounter in the physical world.

A Princeton colleague told Li about WordNet, a massive database that attempted to catalog and organize 140,000 words. Li called her new dataset ImageNet, and she used WordNet as a starting point for choosing categories. She eliminated verbs and adjectives, as well as intangible nouns like “truth.” That left a list of 22,000 countable objects ranging from “ambulance” to “zucchini.”

She planned to take the same approach she’d taken with the Caltech 101 dataset: use Google’s image search to find candidate images, then have a human being verify them. For the Caltech 101 dataset, Li had done this herself over the course of a few months. This time she would need more help. She planned to hire dozens of Princeton undergraduates to help her choose and label images.

But even after heavily optimizing the labeling process—for example, pre-downloading candidate images so they were instantly available for students to review—Li and her graduate student Jia Deng calculated that it would take more than 18 years to select and label millions of images.

The project was saved when Li learned about Amazon Mechanical Turk, a crowdsourcing platform Amazon had launched a couple of years earlier. Not only was AMT’s international workforce more affordable than Princeton undergraduates, but the platform was also far more flexible and scalable. Li’s team could hire as many people as they needed, on demand, and pay them only for as long as there was work to do.

AMT cut the time needed to complete ImageNet from 18 years down to two. Li writes that her lab spent those two years “on the knife-edge of our finances” as the team struggled to complete the project. But they had enough funds to pay three people to look at each of the 14 million images in the final dataset.
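
The article's own numbers give a sense of the scale involved. With 14 million images and three independent judgments per image, the back-of-the-envelope arithmetic below shows the order of magnitude of human attention required; the seconds-per-judgment and hours-per-year figures are assumptions for illustration, not numbers from Li's lab.

```python
# Back-of-the-envelope arithmetic using the figures in the text (14 million
# images, three reviewers per image). The two seconds per judgment and the
# 2,000 working hours per person-year are illustrative assumptions.
images = 14_000_000
judgments = images * 3                # three people looked at each image
seconds_per_judgment = 2              # assumed average decision time
hours_per_person_year = 2_000         # assumed full-time work year

total_hours = judgments * seconds_per_judgment / 3600
person_years = total_hours / hours_per_person_year
print(f"{total_hours:,.0f} hours of labeling, roughly {person_years:.0f} person-years")
```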

ImageNet was ready for publication in 2009, and Li submitted it to the Conference on Computer Vision and Pattern Recognition, which was held in Miami that year. The paper was accepted, but it didn’t get the kind of recognition Li had hoped for.

“ImageNet was relegated to a poster session,” Li writes. “This meant that we wouldn’t be presenting our work in a lecture hall to an audience at a predetermined time but would instead be given space on the conference floor to prop up a large-format print summarizing the project in hopes that passersby might stop and ask questions… After so many years of effort, this just felt anticlimactic.”

To generate public interest, Li turned ImageNet into a competition. Realizing that the full dataset might be too unwieldy to distribute to dozens of contestants, she created a much smaller (but still massive) dataset with 1,000 categories and 1.4 million images.

The first year’s competition in 2010 generated a healthy amount of interest, with 11 teams participating. The winning entry was based on support vector machines. Unfortunately, Li writes, it was “only a slight improvement over cutting-edge work found elsewhere in our field.”

The second year of the ImageNet competition attracted fewer entries than the first. The winning entry in 2011 was another support vector machine, and it just barely improved on the performance of the 2010 winner. Li started to wonder if the critics had been right. Maybe “ImageNet was too much for most algorithms to handle.”

“For two years running, well-worn algorithms had exhibited only incremental gains in capabilities, while true progress seemed all but absent,” Li writes. “If ImageNet was a bet, it was time to start wondering if we’d lost.”

But when Li reluctantly staged the competition a third time in 2012, the results were totally different. Geoff Hinton’s team was the first to submit a model based on a deep neural network. And its top-5 accuracy was 85 percent—10 percentage points better than the 2011 winner.
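
Top-5 accuracy is the standard ImageNet metric: a model gets credit if the correct label appears anywhere among its five highest-scoring guesses out of the 1,000 categories. Here is a minimal sketch of how that number is computed, using random values as stand-ins for real model scores.

```python
# A minimal sketch of top-5 accuracy: a prediction counts as correct if the
# true label is among the model's five highest-scoring classes. The scores
# and labels here are random stand-ins, not real model output.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random((8, 1000))      # 8 images, one score per 1,000 categories
labels = rng.integers(0, 1000, 8)   # true category index for each image

top5 = np.argsort(scores, axis=1)[:, -5:]        # five best-scoring categories
hits = (top5 == labels[:, None]).any(axis=1)     # did the true label make the cut?
print(f"top-5 accuracy: {hits.mean():.2%}")
```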

Li’s initial reaction was incredulity: “Most of us saw the neural network as a dusty artifact encased in glass and protected by velvet ropes.”

“This is proof”

Yann LeCun testifies before the US Senate in September. Credit: Photo by Kevin Dietsch/Getty Images

The ImageNet winners were scheduled to be announced at the European Conference on Computer Vision in Florence, Italy. Li, who had a baby at home in California, was planning to skip the event. But when she saw how well AlexNet had done on her dataset, she realized this moment would be too important to miss: “I settled reluctantly on a twenty-hour slog of sleep deprivation and cramped elbow room.”

On an October day in Florence, Alex Krizhevsky presented his results to a standing-room-only crowd of computer vision researchers. Fei-Fei Li was in the audience. So was Yann LeCun.

Cade Metz reports that after the presentation, LeCun stood up and called AlexNet “an unequivocal turning point in the history of computer vision. This is proof.”

The success of AlexNet vindicated Hinton’s faith in neural networks, but it was arguably an even bigger vindication for LeCun.

AlexNet was a convolutional neural network, a type of neural network that LeCun had developed 20 years earlier to recognize handwritten digits on checks. (For more details on how CNNs work, see the in-depth explainer I wrote for Ars in 2018.) Indeed, there were few architectural differences between AlexNet and LeCun’s image recognition networks from the 1990s.

AlexNet was simply far larger. In a 1998 paper, LeCun described a document-recognition network with seven layers and 60,000 trainable parameters. AlexNet had eight layers, but those layers contained 60 million trainable parameters.
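
"Trainable parameters" simply means every weight and bias the learning algorithm is allowed to adjust. The toy network below, deliberately neither LeNet nor AlexNet, shows how such a count is tallied in a modern framework like PyTorch.

```python
# An illustrative toy CNN (neither LeNet nor AlexNet) showing what a
# "trainable parameter" count adds up to: every weight and bias the
# optimizer can adjust. Assumes 32x32 RGB input images.
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3*16*3*3 weights + 16 biases = 448
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32 feature maps become 16x16
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),                 # 40,960 weights + 10 biases
)

n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"{n_params:,} trainable parameters")      # 41,418 for this toy network
```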

LeCun could not have trained a model that large in the early 1990s because there were no computer chips with as much processing power as a 2012-era GPU. Even if LeCun had managed to build a big enough supercomputer, he would not have had enough images to train it properly. Collecting those images would have been hugely expensive in the years before Google and Amazon Mechanical Turk.

And this is why Fei-Fei Li’s work on ImageNet was so consequential. She didn’t invent convolutional networks or figure out how to make them run efficiently on GPUs. But she provided the training data that large neural networks needed to reach their full potential.

The technology world immediately recognized the importance of AlexNet. Hinton and his students formed a shell company with the goal of being “acquihired” by a big tech company. Within months, Google purchased the company for $44 million. Hinton worked at Google for the next decade while retaining his academic post in Toronto. Ilya Sutskever spent a few years at Google before becoming a cofounder of OpenAI.

AlexNet also made Nvidia GPUs the industry standard for training neural networks. In 2012, the market valued Nvidia at less than $10 billion. Today, Nvidia is one of the most valuable companies in the world, with a market capitalization north of $3 trillion. That high valuation is driven mainly by overwhelming demand for GPUs like the H100 that are optimized for training neural networks.

Sometimes the conventional wisdom is wrong

“That moment was pretty symbolic to the world of AI because three fundamental elements of modern AI converged for the first time,” Li said in a September interview at the Computer History Museum. “The first element was neural networks. The second element was big data, using ImageNet. And the third element was GPU computing.”

Today, leading AI labs believe the key to progress in AI is to train huge models on vast datasets. Big technology companies are in such a hurry to build the data centers required to train larger models that they’ve started to lease entire nuclear power plants to provide the necessary power.

You can view this as a straightforward application of the lessons of AlexNet. But I wonder if we ought to draw the opposite lesson from AlexNet: that it’s a mistake to become too wedded to conventional wisdom.

“Scaling laws” have had a remarkable run in the 12 years since AlexNet, and perhaps we’ll see another generation or two of impressive results as the leading labs scale up their foundation models even more.

But we should be careful not to let the lessons of AlexNet harden into dogma. I think there’s at least a chance that scaling laws will run out of steam in the next few years. And if that happens, we’ll need a new generation of stubborn nonconformists to notice that the old approach isn’t working and try something different.

Tim Lee was on staff at Ars from 2017 to 2021. Last year, he launched a newsletter, Understanding AI, that explores how AI works and how it’s changing our world. You can subscribe here.


Timothy is a senior reporter covering tech policy and the future of transportation. He lives in Washington DC.

How a stubborn computer scientist accidentally launched the deep learning boom Read More »

rocket-report:-australia-says-yes-to-the-launch;-russia-delivers-for-iran

Rocket Report: Australia says yes to the launch; Russia delivers for Iran


The world’s first wooden satellite arrived at the International Space Station this week.

A Falcon 9 booster fires its engines on SpaceX’s “tripod” test stand in McGregor, Texas. Credit: SpaceX

Welcome to Edition 7.19 of the Rocket Report! Okay, we get it. We received more submissions from our readers on Australia’s approval of a launch permit for Gilmour Space than we’ve received on any other news story in recent memory. Thank you for your submissions as global rocket activity continues apace. We’ll cover Gilmour in more detail as they get closer to launch. There will be no Rocket Report next week as Eric and I join the rest of the Ars team for our 2024 Technicon in New York.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Gilmour Space has a permit to fly. Gilmour Space Technologies has been granted a permit to launch its 82-foot-tall (25-meter) orbital rocket from a spaceport in Queensland, Australia. The space company, founded in 2012, had initially planned to lift off in March but was unable to do so without approval from the Australian Space Agency, the Australian Broadcasting Corporation reports. The government approved Gilmour’s launch permit Monday, although the company is still weeks away from flying its three-stage Eris rocket.

A first for Australia … Australia hosted a handful of satellite launches with US and British rockets from 1967 through 1971, but Gilmour’s Eris rocket would become the first all-Australian launch vehicle to reach orbit. The Eris rocket is capable of delivering about 670 pounds (305 kilograms) of payload mass into a Sun-synchronous orbit. Eris will be powered by hybrid rocket engines burning a solid fuel mixed with a liquid oxidizer, making it unique among orbital-class rockets. Gilmour completed a wet dress rehearsal, or practice countdown, with the Eris rocket on the launch pad in Queensland in September. The launch permit becomes active after 30 days, or the first week of December. “We do think we’ve got a good chance of launching at the end of the 30-day period, and we’re going to give it a red hot go,” said Adam Gilmour, the company’s co-founder and CEO. (submitted by Marzipan, mryall, ZygP, Ken the Bin, Spencer Willis, MarkW98, and EllPeaTea)

North Korea tests new missile. North Korea apparently completed a successful test of its most powerful intercontinental ballistic missile on October 31, lofting it nearly 4,800 miles (7,700 kilometers) into space before the projectile fell back to Earth, Ars reports. This solid-fueled, multi-stage missile, named the Hwasong-19, is a new tool in North Korea’s increasingly sophisticated arsenal of weapons. It has enough range—perhaps as much as 9,320 miles (15,000 kilometers), according to Japan’s government—to strike targets anywhere in the United States. It also happens to be one of the largest ICBMs in the world, rivaling the missiles fielded by the world’s more established nuclear powers.

Quid pro quo? … The Hwasong-19 missile test comes as North Korea deploys some 10,000 troops inside Russia to support the country’s war against Ukraine. The budding partnership between Russia and North Korea has evolved for several years. Russian President Vladimir Putin has met with North Korean leader Kim Jong Un on multiple occasions, most recently in Pyongyang in June. This has fueled speculation about what Russia is offering North Korea in exchange for the troops deployed on Russian soil. US and South Korean officials have some thoughts. They said North Korea is likely to ask for technology transfers in diverse areas related to tactical nuclear weapons, ICBMs, and reconnaissance satellites.

The easiest way to keep up with Eric Berger’s and Stephen Clark’s reporting on all things space is to sign up for our newsletter. We’ll collect their stories and deliver them straight to your inbox.


Virgin Galactic is on the hunt for cash. Virgin Galactic is proposing to raise $300 million in additional capital to accelerate production of suborbital spaceplanes and a mothership aircraft the company says can fuel its long-term growth, Space News reports. The company, founded by billionaire Richard Branson, suspended operations of its VSS Unity suborbital spaceplane earlier this year. VSS Unity hit a monthly flight cadence carrying small groups of space tourists and researchers to the edge of space, but it just wasn’t profitable. Now, Virgin Galactic is developing larger Delta-class spaceplanes it says will be easier and cheaper to turn around between flights.

All-in with Delta … Michael Colglazier, Virgin Galactic’s CEO, announced the company’s appetite for fundraising in a quarterly earnings call with investment analysts Wednesday. He said manufacturing of components for Virgin Galactic’s first two Delta-class ships, which the company says it can fund with existing cash, is proceeding on schedule at a factory in Arizona. Virgin Galactic previously said it would use revenue from paying passengers on its first two Delta-class ships to pay for development of future vehicles. Instead, Virgin Galactic now says it wants to raise money to speed up work on the third and fourth Delta-class vehicles, along with a second airplane mothership to carry the spaceplanes aloft before releasing them to rocket into space. (submitted by Ken the Bin and EllPeaTea)

ESA breaks its silence on Themis. The European Space Agency has provided a rare update on the progress of its Themis reusable booster demonstrator project, European Spaceflight reports. ESA is developing the Themis test vehicle for atmospheric flights to fine-tune technologies for a future European reusable rocket capable of vertical takeoffs and vertical landings. Themis started out as a project led by CNES, the French space agency, in 2018. ESA member states signed up to help fund the project in 2019, and the agency awarded ArianeGroup a contract to move forward with Themis in 2020. At the time, the first low-altitude hop test was expected to take place in 2022.

Some slow progress … Now, the first low-altitude hop is scheduled for 2025 from Esrange Space Centre in Sweden, a three-year delay. This week, ESA said engineers have completed testing of the Themis vehicle’s main systems, and assembly of the demonstrator is underway in France. A single methane-fueled Prometheus engine, also developed by ArianeGroup, has been installed on the rocket. Teams are currently adding avionics, computers, electrical systems, and cable harnesses. Themis’ stainless steel propellant tanks have been manufactured, tested, and cleaned and are now ready to be installed on the Themis demonstrator. Then, the rocket will travel by road from France to the test site in Sweden for its initial low-altitude hops. After those flights are complete, officials plan to add two more Prometheus engines to the rocket and ship it to French Guiana for high-altitude test flights. (submitted by Ken the Bin and EllPeaTea)

SpaceX will give the ISS a boost. A Cargo Dragon spacecraft docked to the International Space Station on Tuesday morning, less than a day after lifting off from Florida. As space missions go, this one is fairly routine, ferrying about 6,000 pounds (2,700 kilograms) of cargo and science experiments to the space station. One thing that’s different about this mission is that it delivered to the station a tiny 2 lb (900 g) satellite named LignoSat, the first spacecraft made of wood, for later release outside the research complex. There is one more characteristic of this flight that may prove significant for NASA and the future of the space station, Ars reports. As early as Friday, NASA and SpaceX have scheduled a “reboost and attitude control demonstration,” during which the Dragon spacecraft will use some of the thrusters at the base of the capsule. This is the first time the Dragon spacecraft will be used to move the space station.

Dragon’s breath … Dragon will fire a subset of its 16 Draco thrusters, each with about 90 pounds of thrust, for approximately 12.5 minutes to make a slight adjustment to the orbital trajectory of the roughly 450-ton space station. SpaceX and NASA engineers will analyze the results from the demonstration to determine if Dragon could be used for future space station reboost opportunities. The data will also inform the design of the US Deorbit Vehicle, which SpaceX is developing to perform the maneuvers required to bring the space station back to Earth for a controlled, destructive reentry in the early 2030s. For NASA, demonstrating Dragon’s ability to move the space station will be another step toward breaking free of reliance on Russia, which is currently responsible for providing propulsion to maneuver the orbiting outpost. Northrop Grumman’s Cygnus supply ship also previously demonstrated a reboost capability. (submitted by Ken the Bin and N35t0r)
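
For a rough sense of what such a burn buys, the impulse from the thrusters divided by the station's mass gives the change in velocity. The calculation below is purely illustrative; in particular, the number of Dracos in the "subset" is an assumption, not a figure from NASA or SpaceX.

```python
# A rough, illustrative estimate using the figures in the text. The number of
# thrusters actually fired is an assumption, not a NASA or SpaceX figure.
n_thrusters = 2                     # assumed subset of Dragon's 16 Dracos
thrust_newtons = 90 * 4.448         # about 90 pounds of thrust per thruster
burn_seconds = 12.5 * 60            # roughly 12.5-minute burn
station_kg = 450_000                # roughly 450-ton space station

delta_v = n_thrusters * thrust_newtons * burn_seconds / station_kg
print(f"velocity change of roughly {delta_v:.1f} m/s")   # on the order of 1 m/s
```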

Russia launches Soyuz in service of Iran. Russia launched a Soyuz rocket Monday carrying two satellites designed to monitor the space weather around Earth and 53 small satellites, including two Iranian ones, Reuters reports. The primary payloads aboard the Soyuz-2.1b rocket were two Ionosfera-M satellites to probe the ionosphere, an outer layer of the atmosphere near the edge of space. Solar activity can alter conditions in the ionosphere, impacting communications and navigation. The two Iranian satellites on this mission were named Kowsar and Hodhod. They will collect high-resolution reconnaissance imagery and support communications for Iran.

A distant third … This was only the 13th orbital launch by Russia this year, trailing far behind the United States and China. We know of two more Soyuz flights planned for later this month, but no more, barring a surprise military launch (which is possible). The projected launch rate puts Russia on pace for its quietest year of launch activity since 1961, the year Yuri Gagarin became the first person to fly in space. A major reason for this decline in launches is the decisions of Western governments and companies to move their payloads off of Russian rockets after the invasion of Ukraine. For example, OneWeb stopped launching on Soyuz in 2022, and the European Space Agency suspended its partnership with Russia to launch Soyuz rockets from French Guiana. (submitted by Ken the Bin)

H3 deploys Japanese national security satellite. Japan launched a defense satellite Monday aimed at speedier military operations and communication on an H3 rocket and successfully placed it into orbit, the Associated Press reports. The Kirameki 3 satellite will use high-speed X-band communication to support Japan’s defense ministry with information and data sharing, and command and control services. The satellite will serve Japanese land, air, and naval forces from its perch in geostationary orbit alongside two other Kirameki communications satellites.

Gaining trust … The H3 is Japan’s new flagship rocket, developed by Mitsubishi Heavy Industries (MHI) and funded by the Japan Aerospace Exploration Agency (JAXA). The launch of Kirameki 3 marked the third consecutive successful launch of the H3 rocket, following a debut flight in March 2023 that failed to reach orbit. This was the first time Japan’s defense ministry put one of its satellites on the H3 rocket. The first two Kirameki satellites launched on a European Ariane 5 and a Japanese H-IIA rocket, which the H3 will replace. (submitted by Ken the Bin, tsunam, and EllPeaTea)

Rocket Lab enters the race for military contracts. Rocket Lab is aiming to chip away at SpaceX’s dominance in military space launch, confirming its bid to compete for Pentagon contracts with its new medium-lift rocket, Neutron, Space News reports. Last month, the Space Force released a request for proposals from launch companies seeking to join the military’s roster of launch providers in the National Security Space Launch (NSSL) program. The Space Force will accept bids for launch providers to “on-ramp” to the NSSL Phase 3 Lane 1 contract, which doles out task orders to launch companies for individual missions. In order to win a task order, a launch provider must be on the Phase 3 Lane 1 contract. Currently, SpaceX, United Launch Alliance, and Blue Origin are the only rocket companies eligible. SpaceX won all of the first round of Lane 1 task orders last month.

Joining the club … The Space Force is accepting additional risk for Lane 1 missions, which largely comprise repeat launches deploying a constellation of missile-tracking and data-relay satellites for the Space Development Agency. A separate class of heavy-lift missions, known as Lane 2, will require rockets to undergo a thorough certification by the Space Force to ensure their reliability. In order for a launch company to join the Lane 1 roster, the Space Force requires bidders to be ready for a first launch by December 2025. Peter Beck, Rocket Lab’s founder and CEO, said he thinks the Neutron rocket will be ready for its first launch by then. Other new medium-lift rockets, such as Firefly Aerospace’s MLV and Relativity’s Terran-R, almost certainly won’t be ready to launch by the end of next year, leaving Rocket Lab as the only company that will potentially join incumbents SpaceX, ULA, and Blue Origin. (submitted by Ken the Bin)

Next Starship flight is just around the corner. Less than a month has passed since the historic fifth flight of SpaceX’s Starship, during which the company caught the booster with mechanical arms back at the launch pad in Texas. Now, another test flight could come as soon as November 18, Ars reports. The improbable but successful recovery of the Starship first stage with “chopsticks” last month, and the on-target splashdown of the Starship upper stage halfway around the world, allowed SpaceX to avoid an anomaly investigation by the Federal Aviation Administration. Thus, the company was able to press ahead on a sixth test flight if it flew a similar profile. And that’s what SpaceX plans to do, albeit with some notable additions to the flight plan.

Around the edges … Perhaps the most significant change to the profile for Flight 6 will be an attempt to reignite a Raptor engine on Starship while it is in space. SpaceX tried to do this on a test flight in March but aborted the burn because the ship’s rolling motion exceeded limits. A successful demonstration of a Raptor engine relight could pave the way for SpaceX to launch Starship into a higher stable orbit around Earth on future test flights. This is required for SpaceX to begin using Starship to launch Starlink Internet satellites and perform in-orbit refueling experiments with two ships docked together. (submitted by EllPeaTea)

China’s version of Starship. China has updated the design of its next-generation heavy-lift rocket, the Long March 9, and it looks almost exactly like a clone of SpaceX’s Starship rocket, Ars reports. The Long March 9 started out as a conventional-looking expendable rocket, then morphed into a launcher with a reusable first stage. Now, the rocket will have a reusable booster and upper stage. The booster will have 30 methane-fueled engines, similar to the number of engines on SpaceX’s Super Heavy booster. The upper stage looks remarkably like Starship, with flaps in similar locations. China intends to fly this vehicle for the first time in 2033, nearly a decade from now.

A vehicle for the Moon … The reusable Long March 9 is intended to unlock robust lunar operations for China, similar to the way Starship, and to some extent Blue Origin’s Blue Moon lander, promises to support sustained astronaut stays on the Moon’s surface. China says it plans to land its astronauts on the Moon by 2030, initially using a more conventional architecture with an expendable rocket named the Long March 10, and a lander reminiscent of NASA’s Apollo lunar lander. These will allow Chinese astronauts to remain on the Moon for a matter of days. With Long March 9, China could deliver massive loads of cargo and life support resources to sustain astronauts for much longer stays.

Ta-ta to the tripod. The large three-legged vertical test stand at SpaceX’s engine test site in McGregor, Texas, is being decommissioned, NASA Spaceflight reports. Cranes have started removing propellant tanks from the test stand, nicknamed the tripod, towering above the Central Texas prairie. McGregor is home to SpaceX’s propulsion test team and has 16 test cells to support firings of Merlin, Raptor, and Draco engines multiple times per day for the Falcon 9 rocket, Starship, and Dragon spacecraft.

Some history … The tripod might have been one of SpaceX’s most important assets in the company’s early years. It was built by Beal Aerospace for liquid-fueled rocket engine tests in the late 1990s. Beal Aerospace folded, and SpaceX took over the site in 2003. After some modifications, SpaceX installed the first qualification version of its Falcon 9 rocket on the tripod for a series of nine-engine test-firings leading up to the rocket’s inaugural flight in 2010. SpaceX test-fired numerous new Falcon 9 boosters on the tripod before shipping them to launch sites in Florida or California. Most recently, the tripod was used for testing of Raptor engines destined to fly on Starship and the Super Heavy booster.

Next three launches

Nov. 9: Long March 2C | Unknown Payload | Jiuquan Satellite Launch Center, China | 03:40 UTC

Nov. 9: Falcon 9 | Starlink 9-10 | Vandenberg Space Force Base, California | 06:14 UTC

Nov. 10: Falcon 9 | Starlink 6-69 | Cape Canaveral Space Force Station, Florida | 21:28 UTC


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Rocket Report: Australia says yes to the launch; Russia delivers for Iran Read More »