Firefly Aerospace rakes in more cash as competitors struggle for footing

More than just one thing

Firefly’s majority owner is the private equity firm AE Industrial Partners, and the Series D funding round was led by Michigan-based RPM Ventures.

“Few companies can say they’ve defined a new category in their industry—Firefly is one of those,” said Marc Weiser, a managing director at RPM Ventures. “They have captured their niche in the market as a full service provider for responsive space missions and have become the pinnacle of what a modern space and defense technology company looks like.”

This descriptor—a full service provider—is what differentiates Firefly from most other space companies. Firefly’s crosscutting work in small and medium launch vehicles, rocket engines, lunar landers, and in-space propulsion propels it into a club of wide-ranging commercial space companies that, arguably, only includes SpaceX, Blue Origin, and Rocket Lab.

NASA has awarded Firefly three task orders under the Commercial Lunar Payload Services (CLPS) program. Firefly will soon ship its first Blue Ghost lunar lander to Florida for final preparations to launch to the Moon and deliver 10 NASA-sponsored scientific instruments and tech demo experiments to the lunar surface. NASA has a contract with Firefly for a second Blue Ghost mission, plus an agreement for Firefly to transport a European data relay satellite to lunar orbit.

Firefly also boasts a healthy backlog of missions on its Alpha rocket. In June, Lockheed Martin announced a deal for as many as 25 Alpha launches through 2029. Two months later, L3Harris inked a contract with Firefly for up to 20 Alpha launches. Firefly has also signed Alpha launch contracts with NASA, the National Oceanic and Atmospheric Administration (NOAA), the Space Force, and the National Reconnaissance Office. One of these Alpha launches will deploy Firefly’s first orbital transfer vehicle, named Elytra, designed to host customer payloads and transport them to different orbits following separation from the launcher’s upper stage.

And there’s the Medium Launch Vehicle, a rocket Firefly and Northrop Grumman hope to launch as soon as 2026. But first, the companies will fly an MLV booster stage with seven kerosene-fueled Miranda engines on a new version of Northrop Grumman’s Antares rocket for cargo deliveries to the International Space Station. Northrop Grumman has retired the previous version of Antares after losing access to Russian rocket engines in the wake of Russia’s invasion of Ukraine.

Record labels unhappy with court win, say ISP should pay more for user piracy


Music companies appeal, demanding payment for each song instead of each album.

Credit: Getty Images | digicomphoto

The big three record labels notched another court victory against a broadband provider last month, but the music publishing firms aren’t happy that an appeals court only awarded per-album damages instead of damages for each song.

Universal, Warner, and Sony are seeking an en banc rehearing of the copyright infringement case, claiming that Internet service provider Grande Communications should have to pay per-song damages over its failure to terminate the accounts of Internet users accused of piracy. The decision to make Grande pay for each album instead of each song “threatens copyright owners’ ability to obtain fair damages,” said the record labels’ petition filed last week.

The case is in the conservative-leaning US Court of Appeals for the 5th Circuit. A three-judge panel unanimously ruled last month that Grande, a subsidiary of Astound Broadband, violated the law by failing to terminate subscribers accused of being repeat infringers. Subscribers were flagged for infringement based on their IP addresses being connected to torrent downloads monitored by Rightscorp, a copyright-enforcement company used by the music labels.

The one good part of the ruling for Grande is that the 5th Circuit ordered a new trial on damages because it said a $46.8 million award was too high. Appeals court judges found that the district court “erred in granting JMOL [judgment as a matter of law] that each of the 1,403 songs in suit was eligible for a separate award of statutory damages.” The damages were $33,333 per song.

Record labels want the per-album portion of the ruling reversed while leaving the rest of it intact.

All parts of album “constitute one work”

The Copyright Act says that “all the parts of a compilation or derivative work constitute one work,” the 5th Circuit panel noted. The panel concluded that “the statute unambiguously instructs that a compilation is eligible for only one statutory damage award, whether or not its constituent works are separately copyrightable.”

When there is a choice “between policy arguments and the statutory text—no matter how sympathetic the plight of the copyright owners—the text must prevail,” the ruling said. “So, the strong policy arguments made by Plaintiffs and their amicus are best directed at Congress.”

Record labels say the panel got it wrong, arguing that the “one work” portion of the law “serves to prevent a plaintiff from alleging and proving infringement of the original authorship in a compilation (e.g., the particular selection, coordination, or arrangement of preexisting materials) and later arguing that it should be entitled to collect separate statutory damages awards for each of the compilation’s constituent parts. That rule should have no bearing on this case, where Plaintiffs alleged and proved the infringement of individual sound recordings, not compilations.”

Record labels say that six other US appeals courts “held that Section 504(c)(1) authorizes a separate statutory damages award for each infringed copyrightable unit of expression that was individually commercialized by its copyright owner,” though several of those cases involved non-musical works such as clip-art images, photos, and TV episodes.

Music companies say the per-album decision prevents them from receiving “fair damages” because “sound recordings are primarily commercialized (and generate revenue for copyright owners) as individual tracks, not as parts of albums.” The labels also complained of what they call “a certain irony to the panel’s decision,” because “the kind of rampant peer-to-peer infringement at issue in this case was a primary reason that record companies had to shift their business models from selling physical copies of compilations (albums) to making digital copies of recordings available on an individual basis (streaming/downloading).”

Record labels claim the panel “inverted the meaning” of the statutory text “and turned a rule designed to ensure that compilation copyright owners do not obtain statutory damages windfalls into a rule that prevents copyright owners of individual works from obtaining just compensation.” The petition continued:

The practical implications of the panel’s rule are stark. For example, if an infringer separately downloads the recordings of four individual songs that so happened at any point in time to have been separately selected for and included among the ten tracks on a particular album, the panel’s decision would permit the copyright owner to collect only one award of statutory damages for the four recordings collectively. That would be so even if there were unrebutted trial evidence that the four recordings were commercialized individually by the copyright owner. This outcome is wholly unsupported by the text of the Copyright Act.

ISP wants to overturn underlying ruling

Grande also filed a petition for rehearing because it wants to escape liability, whether for each song or each album. A rehearing would be in front of all the court’s judges.

“Providing Internet service is not actionable conduct,” Grande argued. “The Panel’s decision erroneously permits contributory liability to be based on passive, equivocal commercial activity: the provision of Internet access.”

Grande cited Supreme Court decisions in MGM Studios v. Grokster and Twitter v. Taamneh. “Nothing in Grokster permits inferring culpability from a defendant’s failure to stop infringement,” Grande wrote. “And Twitter makes clear that providing online platforms or services for the exchange of information, even if the provider knows of misuse, is not sufficiently culpable to support secondary liability. This is because supplying the ‘infrastructure’ for communication in a way that is ‘agnostic as to the nature of the content’ is not ‘active, substantial assistance’ for any unlawful use.”

This isn’t the only important case in the ongoing battle between copyright owners and broadband providers, which could have dramatic effects on Internet access for individuals accused of piracy.

ISPs, labels want Supreme Court to weigh in

ISPs don’t want to be held liable when their subscribers violate copyright law and argue that they shouldn’t have to conduct mass terminations of Internet users based on mere accusations of piracy. ISPs say that copyright-infringement notices sent on behalf of record labels aren’t accurate enough to justify such terminations.

Digital rights groups have supported ISPs in these cases, arguing that turning ISPs into copyright cops would be bad for society and disconnect people who were falsely accused or were just using the same Internet connection as an infringer.

The broadband and music publishing industries are waiting to learn whether the Supreme Court will take up a challenge by cable firm Cox Communications, which wants to overturn a ruling in a copyright infringement lawsuit brought by Sony. In that case, the US Court of Appeals for the 4th Circuit affirmed a jury’s finding that Cox was guilty of willful contributory infringement, but vacated a $1 billion damages award and ordered a new damages trial. Record labels also petitioned the Supreme Court because they want the $1 billion verdict reinstated.

Cox has said that the 4th Circuit ruling “would force ISPs to terminate Internet service to households or businesses based on unproven allegations of infringing activity, and put them in a position of having to police their networks… Terminating Internet service would not just impact the individual accused of unlawfully downloading content, it would kick an entire household off the Internet.”

Four other large ISPs told the Supreme Court that the legal question presented by the case “is exceptionally important to the future of the Internet.” They called the copyright-infringement notices “famously flawed” and said mass terminations of Internet users who are subject to those notices “would harm innocent people by depriving households, schools, hospitals, and businesses of Internet access.”

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Air quality problems spur $200 million in funds to cut pollution at ports


Diesel equipment will be replaced with hydrogen- or electric-powered gear.

Raquel Garcia has been fighting for years to clean up the air in her neighborhood southwest of downtown Detroit.

Living a little over a mile from the Ambassador Bridge, which thousands of freight trucks cross every day en route to the Port of Detroit, Garcia said she and her neighbors are frequently cleaning soot off their homes.

“You can literally write your name in it,” she said. “My house is completely covered.”

Her neighborhood is part of Wayne County, which is home to heavy industry, including steel plants and major car manufacturers, and suffers from some of the worst air quality in Michigan. In its 2024 State of the Air report, the American Lung Association named Wayne County one of the “worst places to live” in terms of annual exposure to fine particulate matter pollution, or PM2.5.

But Detroit, and several other Midwest cities with major shipping ports, could soon see their air quality improve as port authorities receive hundreds of millions of dollars to replace diesel equipment with cleaner technologies like solar power and electric vehicles.

Last week, the Biden administration announced $3 billion in new grants from the US Environmental Protection Agency’s Clean Ports program, which aims to slash carbon emissions and reduce air pollution at US shipping ports. More than $200 million of that funding will go to four Midwestern states that host ports along the Great Lakes: Michigan, Illinois, Ohio, and Indiana.

The money, which comes from the Inflation Reduction Act, will not only be used to replace diesel-powered equipment and vehicles, but also to install clean energy systems and charging stations, take inventory of annual port emissions, and set plans for reducing them. It will also fund a feasibility study for establishing a green hydrogen fuel hub along the Great Lakes.

The EPA estimates that those changes will, nationwide, reduce carbon pollution in the first 10 years by more than 3 million metric tons, roughly the equivalent of taking 600,000 gasoline-powered cars off the road. The agency also projects reduced emissions of nitrogen oxides and PM2.5—both of which can cause serious, long-term health complications—by about 10,000 metric tons and about 180 metric tons, respectively, during that same time period.

“Our nation’s ports are critical to creating opportunity here in America, offering good-paying jobs, moving goods, and powering our economy,” EPA Administrator Michael Regan said in the agency’s press release announcing the funds. “Delivering cleaner technologies and resources to US ports will slash harmful air and climate pollution while protecting people who work in and live nearby ports communities.”

Garcia, who runs the community advocacy nonprofit Southwest Detroit Environmental Vision, said she’s “really excited” to see the Port of Detroit getting those funds, even though it’s just a small part of what’s needed to clean up the city’s air pollution.

“We care about the air,” she said. “There’s a lot of kids in the neighborhood where I live.”

Jumpstarting the transition to cleaner technology

Nationwide, port authorities in 27 states and territories tapped the Clean Ports funding. They will use it to buy more than 1,500 units of cargo-handling equipment, such as forklifts and cranes, 1,000 heavy-duty trucks, 10 locomotives, and 20 seafaring vessels, all powered by electricity or by green hydrogen, which emits no CO2 when burned.

In the Midwest, the Illinois Environmental Protection Agency and the Cleveland-Cuyahoga County Port Authority in Ohio were awarded about $95 million each from the program, the Detroit-Wayne County Port Authority in Michigan was awarded $25 million, and the Ports of Indiana will receive $500,000.

Mark Schrupp, executive director of the Detroit-Wayne County Port Authority, said the funding for his agency will be used to help port operators at three terminals purchase new electric forklifts, cranes, and boat motors, among other zero-emission equipment. The money will also pay for a new solar array that will reduce energy consumption for port facilities, as well as 11 new electric vehicle charging stations.

“This money is helping those [port] businesses make the investment in this clean technology, which otherwise is sometimes five or six times the cost of a diesel-powered equipment,” he said, noting that the costs of clean technologies are expected to fall significantly in the coming years as manufacturers scale up production. “It also exposes them to the potential savings over time—full maintenance costs and other things that come from having the dirtier technology in place.”

Schrupp said that the new equipment will slash the Detroit-Wayne County Port Authority’s overall carbon emissions by more than 8,600 metric tons every year, roughly a 30 percent reduction.

Carly Beck, senior manager of planning, environment and information systems for the Cleveland-Cuyahoga County Port Authority, said its new equipment will reduce the Port of Cleveland’s annual carbon emissions by roughly 1,000 metric tons, or about 40 percent of the emissions tied to the port’s operations. The funding will also pay for two electric tug boats and the installation of solar panels and battery storage on the port’s largest warehouse, she added.

In 2022, Beck said, the Port of Cleveland took an emissions inventory, which found that cargo-handling equipment, building energy use, and idling ships were the port’s biggest sources of carbon emissions. Docked ships would run diesel generators for power as they unloaded, she said, but with the new infrastructure, the cargo-handling equipment and idling ships can draw power from a 2-megawatt solar power system with battery storage.

“We’re essentially creating a microgrid at the port,” she said.

Improving the air for disadvantaged communities

The Clean Ports funding will also be a boon for people like Garcia, who live near a US shipping port.

Shipping ports are notorious for their diesel pollution, which research has shown disproportionately affects poor communities of color. And most, if not all, of the census tracts surrounding the Midwest ports are deemed “disadvantaged communities” by the federal government. The EPA uses a number of factors, including income level and exposure to environmental harms, to determine whether a community is “disadvantaged.”

About 10,000 trucks pass through the Port of Detroit every day, Schrupp said, which helps to explain why residents of Southwest Detroit and the neighboring cities of Ecorse and River Rouge, which sit adjacent to Detroit ports, breathe the state’s dirtiest air.

“We have about 50,000 residents within a few miles of the port, so those communities will definitely benefit,” he said. “This is a very industrialized area.”

Burning diesel or any other fossil fuel produces nitrogen oxides and PM2.5, and research has shown that prolonged exposure to high levels of those pollutants can lead to serious health complications, including lung disease and premature death. The Detroit-Wayne County Port Authority estimates that the new port equipment will cut nearly 9 metric tons of PM2.5 emissions and about 120 metric tons of nitrogen oxide emissions each year.

Garcia said she’s also excited that some of the Detroit grants will be used to establish workforce training programs, which will show people how to use the new technologies and showcase career opportunities at the ports. Her area is gentrifying quickly, Garcia said, so it’s heartening to see the city and port authority taking steps to provide local employment opportunities.

Beck said that the Port of Cleveland is also surrounded by a lot of heavy industry and that the census tracts directly adjacent to the port are all deemed “disadvantaged” by federal standards.

“We’re trying to be good neighbors and play our part,” she said, “to make it a more pleasant environment.”

Kristoffer Tigue is a staff writer for Inside Climate News, covering climate issues in the Midwest. He previously wrote the twice-weekly newsletter Today’s Climate and helped lead ICN’s national coverage on environmental justice. His work has been published in Reuters, Scientific American, Mother Jones, HuffPost, and many more. Tigue holds a master’s degree in journalism from the Missouri School of Journalism.

This story originally appeared on Inside Climate News.

How a stubborn computer scientist accidentally launched the deep learning boom


“You’ve taken this idea way too far,” a mentor told Prof. Fei-Fei Li.

Credit: Aurich Lawson | Getty Images

During my first semester as a computer science graduate student at Princeton, I took COS 402: Artificial Intelligence. Toward the end of the semester, there was a lecture about neural networks. This was in the fall of 2008, and I got the distinct impression—both from that lecture and the textbook—that neural networks had become a backwater.

Neural networks had delivered some impressive results in the late 1980s and early 1990s. But then progress stalled. By 2008, many researchers had moved on to mathematically elegant approaches such as support vector machines.

I didn’t know it at the time, but a team at Princeton—in the same computer science building where I was attending lectures—was working on a project that would upend the conventional wisdom and demonstrate the power of neural networks. That team, led by Prof. Fei-Fei Li, wasn’t working on a better version of neural networks. They were hardly thinking about neural networks at all.

Rather, they were creating a new image dataset that would be far larger than any that had come before: 14 million images, each labeled with one of nearly 22,000 categories.

Li tells the story of ImageNet in her recent memoir, The Worlds I See. As she worked on the project, she faced plenty of skepticism from friends and colleagues.

“I think you’ve taken this idea way too far,” a mentor told her a few months into the project in 2007. “The trick is to grow with your field. Not to leap so far ahead of it.”

It wasn’t just that building such a large dataset was a massive logistical challenge. People doubted that the machine learning algorithms of the day would benefit from such a vast collection of images.

“Pre-ImageNet, people did not believe in data,” Li said in a September interview at the Computer History Museum. “Everyone was working on completely different paradigms in AI with a tiny bit of data.”

Ignoring negative feedback, Li pursued the project for more than two years. It strained her research budget and the patience of her graduate students. When she took a new job at Stanford in 2009, she took several of those students—and the ImageNet project—with her to California.

ImageNet received little attention for the first couple of years after its release in 2009. But in 2012, a team from the University of Toronto trained a neural network on the ImageNet dataset, achieving unprecedented performance in image recognition. That groundbreaking AI model, dubbed AlexNet after lead author Alex Krizhevsky, kicked off the deep learning boom that has continued to the present day.

AlexNet would not have succeeded without the ImageNet dataset. AlexNet also would not have been possible without a platform called CUDA, which allowed Nvidia’s graphics processing units (GPUs) to be used in non-graphics applications. Many people were skeptical when Nvidia announced CUDA in 2006.

So the AI boom of the last 12 years was made possible by three visionaries who pursued unorthodox ideas in the face of widespread criticism. One was Geoffrey Hinton, a University of Toronto computer scientist who spent decades promoting neural networks despite near-universal skepticism. The second was Jensen Huang, the CEO of Nvidia, who recognized early that GPUs could be useful for more than just graphics.

The third was Fei-Fei Li. She created an image dataset that seemed ludicrously large to most of her colleagues. But it turned out to be essential for demonstrating the potential of neural networks trained on GPUs.

Geoffrey Hinton

A neural network is a network of thousands, millions, or even billions of neurons. Each neuron is a mathematical function that produces an output based on a weighted average of its inputs.

Suppose you want to create a network that can identify handwritten decimal digits, such as a handwritten "2." Such a network would take in an intensity value for each pixel in an image and output a probability distribution over the ten possible digits—0, 1, 2, and so forth.

To train such a network, you first initialize it with random weights. You then run it on a sequence of example images. For each image, you train the network by strengthening the connections that push the network toward the right answer (in this case, a high-probability value for the “2” output) and weakening connections that push toward a wrong answer (a low probability for “2” and high probabilities for other digits). If trained on enough example images, the model should start to predict a high probability for “2” when shown a two—and not otherwise.
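To make that training loop concrete, here is a minimal sketch in Python (my own illustration, not code from any of the projects described here). It assumes 28×28 grayscale images flattened into 784 pixel values and uses a single layer of weights with a softmax output; the random "image," the label, and the learning rate are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# One layer of weights: 784 pixel inputs -> 10 digit outputs (a deliberately tiny "network").
W = rng.normal(scale=0.01, size=(784, 10))
b = np.zeros(10)

def predict(pixels):
    """Return a probability distribution over the digits 0-9 for one image."""
    logits = pixels @ W + b
    exp = np.exp(logits - logits.max())   # softmax, shifted for numerical stability
    return exp / exp.sum()

def train_step(pixels, label, lr=0.1):
    """Strengthen connections that push toward the right digit, weaken the rest."""
    global W, b
    probs = predict(pixels)
    target = np.zeros(10)
    target[label] = 1.0
    error = probs - target                # positive where the network over-predicts a digit
    W -= lr * np.outer(pixels, error)     # adjust each pixel-to-digit connection accordingly
    b -= lr * error

# Hypothetical usage: one image of a handwritten "2," flattened to 784 values in [0, 1].
image = rng.random(784)
for _ in range(20):
    train_step(image, label=2)
print(predict(image)[2])                  # the probability assigned to "2" rises with training
```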

In the late 1950s, scientists started to experiment with basic networks that had a single layer of neurons. However, their initial enthusiasm cooled as they realized that such simple networks lacked the expressive power required for complex computations.

Deeper networks—those with multiple layers—had the potential to be more versatile. But in the 1960s, no one knew how to train them efficiently. This was because changing a parameter somewhere in the middle of a multi-layer network could have complex and unpredictable effects on the output.

So by the time Hinton began his career in the 1970s, neural networks had fallen out of favor. Hinton wanted to study them, but he struggled to find an academic home in which to do so. Between 1976 and 1986, Hinton spent time at four different research institutions: Sussex University, the University of California San Diego (UCSD), a branch of the UK Medical Research Council, and finally Carnegie Mellon, where he became a professor in 1982.

Geoffrey Hinton speaking in Toronto in June.

Credit: Photo by Mert Alper Dervis/Anadolu via Getty Images

In a landmark 1986 paper, Hinton teamed up with two of his former colleagues at UCSD, David Rumelhart and Ronald Williams, to describe a technique called backpropagation for efficiently training deep neural networks.

Their idea was to start with the final layer of the network and work backward. For each connection in the final layer, the algorithm computes a gradient—a mathematical estimate of whether increasing the strength of that connection would push the network toward the right answer. Based on these gradients, the algorithm adjusts each parameter in the model’s final layer.

The algorithm then propagates these gradients backward to the second-to-last layer. A key innovation here is a formula—based on the chain rule from high school calculus—for computing the gradients in one layer based on gradients in the following layer. Using these new gradients, the algorithm updates each parameter in the second-to-last layer of the model. The gradients then get propagated backward to the third-to-last layer, and the whole process repeats once again.

The algorithm only makes small changes to the model in each round of training. But as the process is repeated over thousands, millions, billions, or even trillions of training examples, the model gradually becomes more accurate.
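Here is a toy version of that backward pass, a sketch of the general idea rather than the 1986 paper's formulation. The two-layer network, tanh activation, squared-error loss, and learning rate are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny two-layer network: 4 inputs -> 3 hidden units -> 1 output.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 1))

x = rng.random((1, 4))     # one training example
y = np.array([[1.0]])      # its target output

for step in range(1000):
    # Forward pass
    h = np.tanh(x @ W1)                       # hidden-layer activations
    y_hat = h @ W2                            # network output
    loss = 0.5 * ((y_hat - y) ** 2).sum()

    # Backward pass: start at the output layer and work backward.
    grad_output = y_hat - y                   # gradient of the loss at the output
    grad_W2 = h.T @ grad_output               # adjust the final layer's weights
    grad_h = grad_output @ W2.T               # propagate the gradient to the hidden layer
    grad_W1 = x.T @ (grad_h * (1 - h ** 2))   # chain rule through tanh, then back to W1

    W2 -= 0.1 * grad_W2                       # small updates each round...
    W1 -= 0.1 * grad_W1                       # ...gradually make the model more accurate

print(loss)   # approaches zero as training proceeds
```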

Hinton and his colleagues weren’t the first to discover the basic idea of backpropagation. But their paper popularized the method. As people realized it was now possible to train deeper networks, it triggered a new wave of enthusiasm for neural networks.

Hinton moved to the University of Toronto in 1987 and began attracting young researchers who wanted to study neural networks. One of the first was the French computer scientist Yann LeCun, who did a year-long postdoc with Hinton before moving to Bell Labs in 1988.

Hinton’s backpropagation algorithm allowed LeCun to train models deep enough to perform well on real-world tasks like handwriting recognition. By the mid-1990s, LeCun’s technology was working so well that banks started to use it for processing checks.

“At one point, LeCun’s creation read more than 10 percent of all checks deposited in the United States,” wrote Cade Metz in his 2021 book Genius Makers.

But when LeCun and other researchers tried to apply neural networks to larger and more complex images, it didn’t go well. Neural networks once again fell out of fashion, and some researchers who had focused on neural networks moved on to other projects.

Hinton never stopped believing that neural networks could outperform other machine learning methods. But it would be many years before he’d have access to enough data and computing power to prove his case.

Jensen Huang

Jensen Huang speaking in Denmark in October.

Credit: Photo by MADS CLAUS RASMUSSEN/Ritzau Scanpix/AFP via Getty Images

The brain of every personal computer is a central processing unit (CPU). These chips are designed to perform calculations in order, one step at a time. This works fine for conventional software like Windows and Office. But some video games require so many calculations that they strain the capabilities of CPUs. This is especially true of games like Quake, Call of Duty, and Grand Theft Auto, which render three-dimensional worlds many times per second.

So gamers rely on GPUs to accelerate performance. Inside a GPU are many execution units—essentially tiny CPUs—packaged together on a single chip. During gameplay, different execution units draw different areas of the screen. This parallelism enables better image quality and higher frame rates than would be possible with a CPU alone.

Nvidia invented the GPU in 1999 and has dominated the market ever since. By the mid-2000s, Nvidia CEO Jensen Huang suspected that the massive computing power inside a GPU would be useful for applications beyond gaming. He hoped scientists could use it for compute-intensive tasks like weather simulation or oil exploration.

So in 2006, Nvidia announced the CUDA platform. CUDA allows programmers to write “kernels,” short programs designed to run on a single execution unit. Kernels allow a big computing task to be split up into bite-sized chunks that can be processed in parallel. This allows certain kinds of calculations to be completed far faster than with a CPU alone.
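As a rough illustration of what a kernel looks like, here is a sketch using Numba's Python bindings for CUDA rather than the C/C++ toolchain Nvidia shipped (a substitution for readability; it assumes a CUDA-capable GPU and the numba and numpy packages). Each GPU thread handles one element of the array, so the overall job is split into bite-sized chunks that run in parallel.

```python
import numpy as np
from numba import cuda

@cuda.jit
def square_kernel(values, out):
    i = cuda.grid(1)              # this thread's global index
    if i < values.shape[0]:       # guard against threads past the end of the array
        out[i] = values[i] * values[i]

n = 1_000_000
values = np.arange(n, dtype=np.float32)

d_values = cuda.to_device(values)            # copy input to GPU memory
d_out = cuda.device_array_like(d_values)     # allocate output on the GPU

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
square_kernel[blocks, threads_per_block](d_values, d_out)   # launch n threads in parallel

out = d_out.copy_to_host()                   # copy the result back to the CPU
```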

But there was little interest in CUDA when it was first introduced, wrote Steven Witt in The New Yorker last year:

When CUDA was released, in late 2006, Wall Street reacted with dismay. Huang was bringing supercomputing to the masses, but the masses had shown no indication that they wanted such a thing.

“They were spending a fortune on this new chip architecture,” Ben Gilbert, the co-host of “Acquired,” a popular Silicon Valley podcast, said. “They were spending many billions targeting an obscure corner of academic and scientific computing, which was not a large market at the time—certainly less than the billions they were pouring in.”

Huang argued that the simple existence of CUDA would enlarge the supercomputing sector. This view was not widely held, and by the end of 2008, Nvidia’s stock price had declined by seventy percent…

Downloads of CUDA hit a peak in 2009, then declined for three years. Board members worried that Nvidia’s depressed stock price would make it a target for corporate raiders.

Huang wasn’t specifically thinking about AI or neural networks when he created the CUDA platform. But it turned out that Hinton’s backpropagation algorithm could easily be split up into bite-sized chunks. So training neural networks turned out to be a killer app for CUDA.

According to Witt, Hinton was quick to recognize the potential of CUDA:

In 2009, Hinton’s research group used Nvidia’s CUDA platform to train a neural network to recognize human speech. He was surprised by the quality of the results, which he presented at a conference later that year. He then reached out to Nvidia. “I sent an e-mail saying, ‘Look, I just told a thousand machine-learning researchers they should go and buy Nvidia cards. Can you send me a free one?’ ” Hinton told me. “They said no.”

Despite the snub, Hinton and his graduate students, Alex Krizhevsky and Ilya Sutskever, obtained a pair of Nvidia GTX 580 GPUs for the AlexNet project. Each GPU had 512 execution units, allowing Krizhevsky and Sutskever to train a neural network hundreds of times faster than would be possible with a CPU. This speed allowed them to train a larger model—and to train it on many more training images. And they would need all that extra computing power to tackle the massive ImageNet dataset.

Fei-Fei Li

Fei-Fei Li at the SXSW conference in 2018.

Credit: Photo by Hubert Vestil/Getty Images for SXSW

Fei-Fei Li wasn’t thinking about either neural networks or GPUs as she began a new job as a computer science professor at Princeton in January of 2007. While earning her PhD at Caltech, she had built a dataset called Caltech 101 that had 9,000 images across 101 categories.

That experience had taught her that computer vision algorithms tended to perform better with larger and more diverse training datasets. Not only had Li found her own algorithms performed better when trained on Caltech 101, but other researchers also started training their models using Li’s dataset and comparing their performance to one another. This turned Caltech 101 into a benchmark for the field of computer vision.

So when she got to Princeton, Li decided to go much bigger. She became obsessed with an estimate by vision scientist Irving Biederman that the average person recognizes roughly 30,000 different kinds of objects. Li started to wonder if it would be possible to build a truly comprehensive image dataset—one that included every kind of object people commonly encounter in the physical world.

A Princeton colleague told Li about WordNet, a massive database that attempted to catalog and organize 140,000 words. Li called her new dataset ImageNet, and she used WordNet as a starting point for choosing categories. She eliminated verbs and adjectives, as well as intangible nouns like “truth.” That left a list of 22,000 countable objects ranging from “ambulance” to “zucchini.”

She planned to take the same approach she’d taken with the Caltech 101 dataset: use Google’s image search to find candidate images, then have a human being verify them. For the Caltech 101 dataset, Li had done this herself over the course of a few months. This time she would need more help. She planned to hire dozens of Princeton undergraduates to help her choose and label images.

But even after heavily optimizing the labeling process—for example, pre-downloading candidate images so they’re instantly available for students to review—Li and her graduate student Jia Deng calculated that it would take more than 18 years to select and label millions of images.

The project was saved when Li learned about Amazon Mechanical Turk, a crowdsourcing platform Amazon had launched a couple of years earlier. Not only was AMT’s international workforce more affordable than Princeton undergraduates, but the platform was also far more flexible and scalable. Li’s team could hire as many people as they needed, on demand, and pay them only as long as they had work available.

AMT cut the time needed to complete ImageNet from 18 years down to two. Li writes that her lab spent two years “on the knife-edge of our finances” as the team struggled to complete the ImageNet project. But they had enough funds to pay three people to look at each of the 14 million images in the final dataset.

ImageNet was ready for publication in 2009, and Li submitted it to the Conference on Computer Vision and Pattern Recognition, which was held in Miami that year. Their paper was accepted, but it didn’t get the kind of recognition Li hoped for.

“ImageNet was relegated to a poster session,” Li writes. “This meant that we wouldn’t be presenting our work in a lecture hall to an audience at a predetermined time but would instead be given space on the conference floor to prop up a large-format print summarizing the project in hopes that passersby might stop and ask questions… After so many years of effort, this just felt anticlimactic.”

To generate public interest, Li turned ImageNet into a competition. Realizing that the full dataset might be too unwieldy to distribute to dozens of contestants, she created a much smaller (but still massive) dataset with 1,000 categories and 1.4 million images.

The first year’s competition in 2010 generated a healthy amount of interest, with 11 teams participating. The winning entry was based on support vector machines. Unfortunately, Li writes, it was “only a slight improvement over cutting-edge work found elsewhere in our field.”

The second year of the ImageNet competition attracted fewer entries than the first. The winning entry in 2011 was another support vector machine, and it just barely improved on the performance of the 2010 winner. Li started to wonder if the critics had been right. Maybe “ImageNet was too much for most algorithms to handle.”

“For two years running, well-worn algorithms had exhibited only incremental gains in capabilities, while true progress seemed all but absent,” Li writes. “If ImageNet was a bet, it was time to start wondering if we’d lost.”

But when Li reluctantly staged the competition a third time in 2012, the results were totally different. Geoff Hinton’s team was the first to submit a model based on a deep neural network. And its top-5 accuracy was 85 percent—10 percentage points better than the 2011 winner.

Li’s initial reaction was incredulity: “Most of us saw the neural network as a dusty artifact encased in glass and protected by velvet ropes.”

“This is proof”

Yann LeCun testifies before the US Senate in September.

Credit: Photo by Kevin Dietsch/Getty Images

The ImageNet winners were scheduled to be announced at the European Conference on Computer Vision in Florence, Italy. Li, who had a baby at home in California, was planning to skip the event. But when she saw how well AlexNet had done on her dataset, she realized this moment would be too important to miss: “I settled reluctantly on a twenty-hour slog of sleep deprivation and cramped elbow room.”

On an October day in Florence, Alex Krizhevsky presented his results to a standing-room-only crowd of computer vision researchers. Fei-Fei Li was in the audience. So was Yann LeCun.

Cade Metz reports that after the presentation, LeCun stood up and called AlexNet “an unequivocal turning point in the history of computer vision. This is proof.”

The success of AlexNet vindicated Hinton’s faith in neural networks, but it was arguably an even bigger vindication for LeCun.

AlexNet was a convolutional neural network, a type of neural network that LeCun had developed 20 years earlier to recognize handwritten digits on checks. (For more details on how CNNs work, see the in-depth explainer I wrote for Ars in 2018.) Indeed, there were few architectural differences between AlexNet and LeCun’s image recognition networks from the 1990s.

AlexNet was simply far larger. In a 1998 paper, LeCun described a document-recognition network with seven layers and 60,000 trainable parameters. AlexNet had eight layers, but these layers had 60 million trainable parameters.

LeCun could not have trained a model that large in the early 1990s because there were no computer chips with as much processing power as a 2012-era GPU. Even if LeCun had managed to build a big enough supercomputer, he would not have had enough images to train it properly. Collecting those images would have been hugely expensive in the years before Google and Amazon Mechanical Turk.

And this is why Fei-Fei Li’s work on ImageNet was so consequential. She didn’t invent convolutional networks or figure out how to make them run efficiently on GPUs. But she provided the training data that large neural networks needed to reach their full potential.

The technology world immediately recognized the importance of AlexNet. Hinton and his students formed a shell company with the goal of being “acquihired” by a big tech company. Within months, Google purchased the company for $44 million. Hinton worked at Google for the next decade while retaining his academic post in Toronto. Ilya Sutskever spent a few years at Google before becoming a cofounder of OpenAI.

AlexNet also made Nvidia GPUs the industry standard for training neural networks. In 2012, the market valued Nvidia at less than $10 billion. Today, Nvidia is one of the most valuable companies in the world, with a market capitalization north of $3 trillion. That high valuation is driven mainly by overwhelming demand for GPUs like the H100 that are optimized for training neural networks.

Sometimes the conventional wisdom is wrong

“That moment was pretty symbolic to the world of AI because three fundamental elements of modern AI converged for the first time,” Li said in a September interview at the Computer History Museum. “The first element was neural networks. The second element was big data, using ImageNet. And the third element was GPU computing.”

Today, leading AI labs believe the key to progress in AI is to train huge models on vast data sets. Big technology companies are in such a hurry to build the data centers required to train larger models that they’ve started to lease entire nuclear power plants to provide the necessary power.

You can view this as a straightforward application of the lessons of AlexNet. But I wonder if we ought to draw the opposite lesson from AlexNet: that it’s a mistake to become too wedded to conventional wisdom.

“Scaling laws” have had a remarkable run in the 12 years since AlexNet, and perhaps we’ll see another generation or two of impressive results as the leading labs scale up their foundation models even more.

But we should be careful not to let the lessons of AlexNet harden into dogma. I think there’s at least a chance that scaling laws will run out of steam in the next few years. And if that happens, we’ll need a new generation of stubborn nonconformists to notice that the old approach isn’t working and try something different.

Tim Lee was on staff at Ars from 2017 to 2021. Last year, he launched a newsletter, Understanding AI, that explores how AI works and how it’s changing our world. You can subscribe here.

Timothy is a senior reporter covering tech policy and the future of transportation. He lives in Washington DC.

Rocket Report: Australia says yes to the launch; Russia delivers for Iran


The world’s first wooden satellite arrived at the International Space Station this week.

A Falcon 9 booster fires its engines on SpaceX’s “tripod” test stand in McGregor, Texas. Credit: SpaceX

Welcome to Edition 7.19 of the Rocket Report! Okay, we get it. We received more submissions from our readers on Australia’s approval of a launch permit for Gilmour Space than we’ve received on any other news story in recent memory. Thank you for your submissions as global rocket activity continues apace. We’ll cover Gilmour in more detail as they get closer to launch. There will be no Rocket Report next week as Eric and I join the rest of the Ars team for our 2024 Technicon in New York.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Gilmour Space has a permit to fly. Gilmour Space Technologies has been granted a permit to launch its 82-foot-tall (25-meter) orbital rocket from a spaceport in Queensland, Australia. The space company, founded in 2012, had initially planned to lift off in March but was unable to do so without approval from the Australian Space Agency, the Australian Broadcasting Corporation reports. The government approved Gilmour’s launch permit Monday, although the company is still weeks away from flying its three-stage Eris rocket.

A first for Australia … Australia hosted a handful of satellite launches with US and British rockets from 1967 through 1971, but Gilmour’s Eris rocket would become the first all-Australian launch vehicle to reach orbit. The Eris rocket is capable of delivering about 670 pounds (305 kilograms) of payload mass into a Sun-synchronous orbit. Eris will be powered by hybrid rocket engines burning a solid fuel mixed with a liquid oxidizer, making it unique among orbital-class rockets. Gilmour completed a wet dress rehearsal, or practice countdown, with the Eris rocket on the launch pad in Queensland in September. The launch permit becomes active after 30 days, or the first week of December. “We do think we’ve got a good chance of launching at the end of the 30-day period, and we’re going to give it a red hot go,” said Adam Gilmour, the company’s co-founder and CEO. (submitted by Marzipan, mryall, ZygP, Ken the Bin, Spencer Willis, MarkW98, and EllPeaTea)

North Korea tests new missile. North Korea apparently completed a successful test of its most powerful intercontinental ballistic missile on October 31, lofting it nearly 4,800 miles (7,700 kilometers) into space before the projectile fell back to Earth, Ars reports. This solid-fueled, multi-stage missile, named the Hwasong-19, is a new tool in North Korea’s increasingly sophisticated arsenal of weapons. It has enough range—perhaps as much as 9,320 miles (15,000 kilometers), according to Japan’s government—to strike targets anywhere in the United States. It also happens to be one of the largest ICBMs in the world, rivaling the missiles fielded by the world’s more established nuclear powers.

Quid pro quo? … The Hwasong-19 missile test comes as North Korea deploys some 10,000 troops inside Russia to support the country’s war against Ukraine. The budding partnership between Russia and North Korea has evolved for several years. Russian President Vladimir Putin has met with North Korean leader Kim Jong Un on multiple occasions, most recently in Pyongyang in June. This has fueled speculation about what Russia is offering North Korea in exchange for the troops deployed on Russian soil. US and South Korean officials have some thoughts. They said North Korea is likely to ask for technology transfers in diverse areas related to tactical nuclear weapons, ICBMs, and reconnaissance satellites.

Virgin Galactic is on the hunt for cash. Virgin Galactic is proposing to raise $300 million in additional capital to accelerate production of suborbital spaceplanes and a mothership aircraft the company says can fuel its long-term growth, Space News reports. The company, founded by billionaire Richard Branson, suspended operations of its VSS Unity suborbital spaceplane earlier this year. VSS Unity hit a monthly flight cadence carrying small groups of space tourists and researchers to the edge of space, but it just wasn’t profitable. Now, Virgin Galactic is developing larger Delta-class spaceplanes it says will be easier and cheaper to turn around between flights.

All-in with Delta … Michael Colglazier, Virgin Galactic’s CEO, announced the company’s appetite for fundraising in a quarterly earnings call with investment analysts Wednesday. He said manufacturing of components for Virgin Galactic’s first two Delta-class ships, which the company says it can fund with existing cash, is proceeding on schedule at a factory in Arizona. Virgin Galactic previously said it would use revenue from paying passengers on its first two Delta-class ships to pay for development of future vehicles. Instead, Virgin Galactic now says it wants to raise money to speed up work on the third and fourth Delta-class vehicles, along with a second airplane mothership to carry the spaceplanes aloft before they release and fire into space. (submitted by Ken the Bin and EllPeaTea)

ESA breaks its silence on Themis. The European Space Agency has provided a rare update on the progress of its Themis reusable booster demonstrator project, European Spaceflight reports. ESA is developing the Themis test vehicle for atmospheric flights to fine-tune technologies for a future European reusable rocket capable of vertical takeoffs and vertical landings. Themis started out as a project led by CNES, the French space agency, in 2018. ESA member states signed up to help fund the project in 2019, and the agency awarded ArianeGroup a contract to move forward with Themis in 2020. At the time, the first low-altitude hop test was expected to take place in 2022.

Some slow progress … Now, the first low-altitude hop is scheduled for 2025 from Esrange Space Centre in Sweden, a three-year delay. This week, ESA said engineers have completed testing of the Themis vehicle’s main systems, and assembly of the demonstrator is underway in France. A single methane-fueled Prometheus engine, also developed by ArianeGroup, has been installed on the rocket. Teams are currently adding avionics, computers, electrical systems, and cable harnesses. Themis’ stainless steel propellant tanks have been manufactured, tested, and cleaned and are now ready to be installed on the Themis demonstrator. Then, the rocket will travel by road from France to the test site in Sweden for its initial low-altitude hops. After those flights are complete, officials plan to add two more Prometheus engines to the rocket and ship it to French Guiana for high-altitude test flights. (submitted by Ken the Bin and EllPeaTea)

SpaceX will give the ISS a boost. A Cargo Dragon spacecraft docked to the International Space Station on Tuesday morning, less than a day after lifting off from Florida. As space missions go, this one is fairly routine, ferrying about 6,000 pounds (2,700 kilograms) of cargo and science experiments to the space station. One thing that’s different about this mission is that it delivered to the station a tiny 2 lb (900 g) satellite named LignoSat, the first spacecraft made of wood, for later release outside the research complex. There is one more characteristic of this flight that may prove significant for NASA and the future of the space station, Ars reports. As early as Friday, NASA and SpaceX have scheduled a “reboost and attitude control demonstration,” during which the Dragon spacecraft will use some of the thrusters at the base of the capsule. This is the first time the Dragon spacecraft will be used to move the space station.

Dragon’s breath … Dragon will fire a subset of its 16 Draco thrusters, each with about 90 pounds of thrust, for approximately 12.5 minutes to make a slight adjustment to the orbital trajectory of the roughly 450-ton space station. SpaceX and NASA engineers will analyze the results from the demonstration to determine if Dragon could be used for future space station reboost opportunities. The data will also inform the design of the US Deorbit Vehicle, which SpaceX is developing to perform the maneuvers required to bring the space station back to Earth for a controlled, destructive reentry in the early 2030s. For NASA, demonstrating Dragon’s ability to move the space station will be another step toward breaking free of reliance on Russia, which is currently responsible for providing propulsion to maneuver the orbiting outpost. Northrop Grumman’s Cygnus supply ship also previously demonstrated a reboost capability. (submitted by Ken the Bin and N35t0r)
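For a rough sense of what that burn amounts to, here is a back-of-the-envelope estimate from the figures above. How many of the 16 Dracos will actually fire hasn't been stated, so the two used below is purely an assumption for illustration.

```python
# Rough delta-v estimate: thrust x burn time / station mass.
thrusters = 2                      # assumed; NASA and SpaceX haven't specified the subset
thrust_newtons = 90 * 4.448        # ~90 pounds of thrust per Draco, converted to newtons
burn_seconds = 12.5 * 60           # ~12.5-minute burn
station_kg = 450_000               # roughly 450 metric tons

delta_v = thrusters * thrust_newtons * burn_seconds / station_kg
print(f"~{delta_v:.2f} m/s")       # on the order of 1 m/s, a slight trajectory adjustment
```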

Russia launches Soyuz in service of Iran. Russia launched a Soyuz rocket Monday carrying two satellites designed to monitor the space weather around Earth and 53 small satellites, including two Iranian ones, Reuters reports. The primary payloads aboard the Soyuz-2.1b rocket were two Ionosfera-M satellites to probe the ionosphere, an outer layer of the atmosphere near the edge of space. Solar activity can alter conditions in the ionosphere, impacting communications and navigation. The two Iranian satellites on this mission were named Kowsar and Hodhod. They will collect high-resolution reconnaissance imagery and support communications for Iran.

A distant third … This was only the 13th orbital launch by Russia this year, trailing far behind the United States and China. We know of two more Soyuz flights planned for later this month, but no more, barring a surprise military launch (which is possible). The projected launch rate puts Russia on pace for its quietest year of launch activity since 1961, the year Yuri Gagarin became the first person to fly in space. A major reason for this decline in launches is the decisions of Western governments and companies to move their payloads off of Russian rockets after the invasion of Ukraine. For example, OneWeb stopped launching on Soyuz in 2022, and the European Space Agency suspended its partnership with Russia to launch Soyuz rockets from French Guiana. (submitted by Ken the Bin)

H3 deploys Japanese national security satellite. Japan launched a defense satellite Monday aimed at speedier military operations and communication on an H3 rocket and successfully placed it into orbit, the Associated Press reports. The Kirameki 3 satellite will use high-speed X-band communication to support Japan’s defense ministry with information and data sharing, and command and control services. The satellite will serve Japanese land, air, and naval forces from its perch in geostationary orbit alongside two other Kirameki communications satellites.

Gaining trust … The H3 is Japan’s new flagship rocket, developed by Mitsubishi Heavy Industries (MHI) and funded by the Japan Aerospace Exploration Agency (JAXA). The launch of Kirameki 3 marked the third consecutive successful launch of the H3 rocket, following a debut flight in March 2023 that failed to reach orbit. This was the first time Japan’s defense ministry put one of its satellites on the H3 rocket. The first two Kirameki satellites launched on a European Ariane 5 and a Japanese H-IIA rocket, which the H3 will replace. (submitted by Ken the Bin, tsunam, and EllPeaTea)

Rocket Lab enters the race for military contracts. Rocket Lab is aiming to chip away at SpaceX’s dominance in military space launch, confirming its bid to compete for Pentagon contracts with its new medium-lift rocket, Neutron, Space News reports. Last month, the Space Force released a request for proposals from launch companies seeking to join the military’s roster of launch providers in the National Security Space Launch (NSSL) program. The Space Force will accept bids for launch providers to “on-ramp” to the NSSL Phase 3 Lane 1 contract, which doles out task orders to launch companies for individual missions. In order to win a task order, a launch provider must be on the Phase 3 Lane 1 contract. Currently, SpaceX, United Launch Alliance, and Blue Origin are the only rocket companies eligible. SpaceX won all of the first round of Lane 1 task orders last month.

Joining the club … The Space Force is accepting additional risk for Lane 1 missions, which largely comprise repeat launches deploying a constellation of missile-tracking and data-relay satellites for the Space Development Agency. A separate class of heavy-lift missions, known as Lane 2, will require rockets to undergo a thorough certification by the Space Force to ensure their reliability. In order for a launch company to join the Lane 1 roster, the Space Force requires bidders to be ready for a first launch by December 2025. Peter Beck, Rocket Lab’s founder and CEO, said he thinks the Neutron rocket will be ready for its first launch by then. Other new medium-lift rockets, such as Firefly Aerospace’s MLV and Relativity’s Terran-R, almost certainly won’t be ready to launch by the end of next year, leaving Rocket Lab as the only company that will potentially join incumbents SpaceX, ULA, and Blue Origin. (submitted by Ken the Bin)

Next Starship flight is just around the corner. Less than a month has passed since the historic fifth flight of SpaceX’s Starship, during which the company caught the booster with mechanical arms back at the launch pad in Texas. Now, another test flight could come as soon as November 18, Ars reports. The improbable but successful recovery of the Starship first stage with “chopsticks” last month, and the on-target splashdown of the Starship upper stage halfway around the world, allowed SpaceX to avoid an anomaly investigation by the Federal Aviation Administration. Thus, the company was able to press ahead on a sixth test flight if it flew a similar profile. And that’s what SpaceX plans to do, albeit with some notable additions to the flight plan.

Around the edges … Perhaps the most significant change to the profile for Flight 6 will be an attempt to reignite a Raptor engine on Starship while it is in space. SpaceX tried to do this on a test flight in March but aborted the burn because the ship’s rolling motion exceeded limits. A successful demonstration of a Raptor engine relight could pave the way for SpaceX to launch Starship into a higher stable orbit around Earth on future test flights. This is required for SpaceX to begin using Starship to launch Starlink Internet satellites and perform in-orbit refueling experiments with two ships docked together. (submitted by EllPeaTea)

China’s version of Starship. China has updated the design of its next-generation heavy-lift rocket, the Long March 9, and it looks almost exactly like a clone of SpaceX’s Starship rocket, Ars reports. The Long March 9 started out as a conventional-looking expendable rocket, then morphed into a launcher with a reusable first stage. Now, the rocket will have a reusable booster and upper stage. The booster will have 30 methane-fueled engines, similar to the number of engines on SpaceX’s Super Heavy booster. The upper stage looks remarkably like Starship, with flaps in similar locations. China intends to fly this vehicle for the first time in 2033, nearly a decade from now.

A vehicle for the Moon … The reusable Long March 9 is intended to unlock robust lunar operations for China, similar to the way Starship, and to some extent Blue Origin’s Blue Moon lander, promises to support sustained astronaut stays on the Moon’s surface. China says it plans to land its astronauts on the Moon by 2030, initially using a more conventional architecture with an expendable rocket named the Long March 10, and a lander reminiscent of NASA’s Apollo lunar lander. These will allow Chinese astronauts to remain on the Moon for a matter of days. With Long March 9, China could deliver massive loads of cargo and life support resources to sustain astronauts for much longer stays.

Ta-ta to the tripod. The large three-legged vertical test stand at SpaceX’s engine test site in McGregor, Texas, is being decommissioned, NASA Spaceflight reports. Cranes have started removing propellant tanks from the test stand, nicknamed the tripod, towering above the Central Texas prairie. McGregor is home to SpaceX’s propulsion test team and has 16 test cells to support firings of Merlin, Raptor, and Draco engines multiple times per day for the Falcon 9 rocket, Starship, and Dragon spacecraft.

Some history … The tripod might have been one of SpaceX’s most important assets in the company’s early years. It was built by Beal Aerospace for liquid-fueled rocket engine tests in the late 1990s. Beal Aerospace folded, and SpaceX took over the site in 2003. After some modifications, SpaceX installed the first qualification version of its Falcon 9 rocket on the tripod for a series of nine-engine test-firings leading up to the rocket’s inaugural flight in 2010. SpaceX test-fired numerous new Falcon 9 boosters on the tripod before shipping them to launch sites in Florida or California. Most recently, the tripod was used for testing of Raptor engines destined to fly on Starship and the Super Heavy booster.

Next three launches

Nov. 9: Long March 2C | Unknown Payload | Jiuquan Satellite Launch Center, China | 03:40 UTC

Nov. 9: Falcon 9 | Starlink 9-10 | Vandenberg Space Force Base, California | 06:14 UTC

Nov. 10: Falcon 9 | Starlink 6-69 | Cape Canaveral Space Force Station, Florida | 21:28 UTC


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Rocket Report: Australia says yes to the launch; Russia delivers for Iran Read More »

matter-1.4-has-some-solid-ideas-for-the-future-home—now-let’s-see-the-support

Matter 1.4 has some solid ideas for the future home—now let’s see the support

With Matter 1.4 and improved Thread support, you shouldn’t need to blanket your home in HomePod Minis to have adequate Thread coverage. Then again, they do brighten up the place. Credit: Apple

Routers are joining the Thread/Matter melee

With Matter 1.4, a whole category of networking gear, known as Home Routers and Access Points (HRAP), can now support Matter while also extending Thread networks.

“Matter-certified HRAP devices provide the foundational infrastructure of smart homes by combining both a Wi-Fi access point and a Thread Border Router, ensuring these ubiquitous devices have the necessary infrastructure for Matter products using either of these technologies,” the CSA writes in its announcement.

Prior to wireless networking gear officially getting in on the game, the devices that have served as Thread Border Routers, accepting and re-transmitting traffic for endpoint devices, have been a hodgepodge of gear. Maybe you had HomePod Minis, newer Nest Hub or Echo devices from Google or Amazon, or Nanoleaf lights around your home, but probably not. Routers, and particularly mesh networking gear, should already be set up to reach most corners of your home with wireless signal, so it makes a lot more sense to have that gear handle Matter authentication and Thread broadcasting.

Freeing home energy gear from vendor lock-in

Matter 1.4 adds some big, expensive gear to its list of device types and control powers. Solar inverters and arrays, battery storage systems, heat pumps, and water heaters join the list, while thermostats and Electric Vehicle Supply Equipment (EVSE), i.e. EV charging devices, also get some enhancements. For that last category, the update comes not a moment too soon, as chargers that support Matter can keep up their scheduled charging without cloud support from manufacturers.

More broadly, Matter 1.4 bakes a lot of timing, energy cost, and other automation triggers into the spec, which—again, when supported by device manufacturers, at some future date—should allow for better home energy savings and customization, without tying it all to one particular app or platform.

CSA says that, with “nearly two years of real-world deployment in millions of households,” the companies and trade groups and developers tending to Matter are “refining software development kits, streamlining certification processes, and optimizing individual device implementations.” Everything they’ve got lined up seems neat, but it has to end up inside more boxes to be truly impressive.

Matter 1.4 has some solid ideas for the future home—now let’s see the support Read More »

verizon,-at&t-tell-courts:-fcc-can’t-punish-us-for-selling-user-location-data

Verizon, AT&T tell courts: FCC can’t punish us for selling user location data

Supreme Court ruling could hurt FCC case

Both AT&T and Verizon cite the Supreme Court’s June 2024 ruling in Securities and Exchange Commission v. Jarkesy, which held that “when the SEC seeks civil penalties against a defendant for securities fraud, the Seventh Amendment entitles the defendant to a jury trial.”

The Supreme Court ruling, which affirmed a 5th Circuit order, had not been issued yet when the FCC finalized its fines. The FCC disputed the 5th Circuit ruling, saying among other things that Supreme Court precedent made clear that “Congress can assign matters involving public rights to adjudication by an administrative agency ‘even if the Seventh Amendment would have required a jury where the adjudication of those rights is assigned to a federal court of law instead.'”

Of course, the FCC will have a tougher time disputing the Jarkesy ruling now that the Supreme Court affirmed the 5th Circuit. Verizon pointed out that in the high court’s Jarkesy decision, “Justice Sotomayor, in dissent, recognized that Jarkesy was not limited to the SEC, identifying many agencies, including the FCC, whose practice of ‘impos[ing] civil penalties in administrative proceedings’ would be ‘upend[ed].'”

Verizon further argued: “As in Jarkesy, the fact that the FCC seeks ‘civil penalties… designed to punish’ is ‘all but dispositive’ of Verizon’s entitlement to an Article III court and a jury, rather than an agency prosecutor and adjudicator.”

Carriers: We didn’t get fair notice

Both carriers said the FCC did not provide “fair notice” that its section 222 authority over customer proprietary network information (CPNI) would apply to the data in question.

When it issued the fines, the FCC said carriers had fair notice. “CPNI is defined by statute, in relevant part, to include ‘information that relates to… the location… of a telecommunications service,'” the FCC said.

Verizon, AT&T tell courts: FCC can’t punish us for selling user location data Read More »

discord-terrorist-known-as-“rabid”-gets-30-years-for-preying-on-kids

Discord terrorist known as “Rabid” gets 30 years for preying on kids

Densmore likely motivated by fame

Online, Densmore was known in so-called “Sewer” communities under the alias “Rabid.” During their investigation, the FBI found that Densmore kept a collection of “child pornography and bloody images of ‘Rabid,’ ‘Sewer,’ and ‘764’ carved into victims’ limbs, in some cases with razor blades and boxcutters nearby.” He also sexually exploited children, the DOJ said, including paying another 764 member to coerce a young girl to send a nude video with “Rabid” written on her chest. Gaining attention for his livestreams, he would threaten to release the coerced abusive images if kids did not participate “on cam,” the DOJ said.

“I have all your information,” Densmore threatened one victim. “I own you …. You do what I say now, kitten.”

In a speech Thursday, Assistant Attorney General Matthew G. Olsen described 764 as a terrorist network working “to normalize and weaponize the possession, production, and distribution of child sexual abuse material and other types of graphic and violent material” online. Ultimately, by attacking children, the group wants to “destroy civil society” and “collapse the US government,” Olsen said.

People like Densmore, Olsen said, join 764 to inflate their “own sense of fame,” with many having “an end-goal of forcing their victims to commit suicide on livestream for the 764 network’s entertainment.”

In the DOJ’s press release, the FBI warned parents and caregivers to pay attention to their kids’ activity both online and off. In addition to watching out for behavioral shifts or signs of self-harm, caregivers should also take note of any suspicious packages arriving, as 764 sometimes ships kids “razor blades, sexual devices, gifts, and other materials to use in creating online content.” Parents should also encourage kids to discuss online activity, especially if they feel threatened.

“If you are worried about someone who might be self-harming or is at risk of suicide, please consult a health care professional or call 911 in the event of an immediate threat,” the DOJ said.

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

Discord terrorist known as “Rabid” gets 30 years for preying on kids Read More »

the-next-starship-launch-may-occur-in-less-than-two-weeks

The next Starship launch may occur in less than two weeks

The company will also use Starship’s next flight to assess new tiles and other elements of the vehicle’s heat shield.

“Several thermal protection experiments and operational changes will test the limits of Starship’s capabilities and generate flight data to inform plans for ship catch and reuse,” the company’s statement said. “The flight test will assess new secondary thermal protection materials and will have entire sections of heat shield tiles removed on either side of the ship in locations being studied for catch-enabling hardware on future vehicles. The ship also will intentionally fly at a higher angle of attack in the final phase of descent, purposefully stressing the limits of flap control to gain data on future landing profiles.”

Final flight of the first Starship

The five previous flights of Starship, dating back to April 2023, have all launched near dawn from South Texas. For the upcoming mission, the company will look for a late-afternoon launch window, which will allow the vehicle to reenter in daylight over the Indian Ocean.

SpaceX’s update also confirms that this will be the last flight of the initial version of the Starship vehicle, with the next generation including redesigned forward flaps, larger propellant tanks, and newer tiles and secondary thermal protection layers.

Reaching a near-monthly cadence of Starship flights during only the second year of the vehicle’s operation is impressive, but it’s also essential if SpaceX wants to unlock the full potential of a rocket that needs multiple refueling launches to support Starship missions to the Moon or Mars.

Wednesday’s announcement comes the day after the US presidential election, in which American voters gave Donald Trump a second term, an outcome notably aided by an all-out campaign effort from SpaceX founder Elon Musk.

Musk’s interventions in politics were highly controversial and alienated a significant segment of the US population and political class. Nevertheless, Musk’s gambit paid off, as Trump’s election will now likely accelerate Starship’s development and increase its centrality to the nation’s space exploration endeavors.

However, the timing of this launch announcement is likely coincidental, as SpaceX did not need formal regulatory approval to move ahead with this sixth attempt—it was almost entirely dependent on the readiness of the company’s hardware, software, and ground systems.

The next Starship launch may occur in less than two weeks Read More »

corning-faces-antitrust-actions-for-its-gorilla-glass-dominance

Corning faces antitrust actions for its Gorilla Glass dominance

The European Commission (EC) has opened an antitrust investigation into US-based glass-maker Corning, claiming that its Gorilla Glass has dominated the mobile phone screen market due to restrictive deals and licensing.

Corning’s shatter-resistant alkali-aluminosilicate glass keeps its place atop the market, according to the EC’s announcement, because it both demands, and rewards with rebates, device makers that agree to “source all or nearly all of their (Gorilla Glass) demand from Corning.” Corning also allegedly required device makers to report competitive offers to the glass maker. The company is accused of exerting a similar pressure on “finishers,” or those firms that turn raw glass into finished phone screen protectors, as well as demanding finishers not pursue patent challenges against Corning.

“[T]he agreements that Corning put in place with OEMs and finishers may have excluded rival glass producers from large segments of the market, thereby reducing customer choice, increasing prices, and stifling innovation to the detriment of consumers worldwide,” the Commission wrote.

Ars has reached out to Corning for comment and will update this post with any response.

Gorilla Glass does approach Xerox or Kleenex levels of brand name association with its function. Each new iteration of its thin, durable glass reaches a bit further than the last and routinely picks up press coverage. Gorilla Glass 4 was pitched as being “up to two times stronger” than any “competitive” alternative. Gorilla Glass 5 could survive a 1.6-meter drop 80 percent of the time, and Gorilla Glass 6 added resistance to repeated drops.

Apple considers Corning’s glass so essential to its products, such as the Ceramic Shield on the iPhone 12, that it has invested $45 million in the company to expand its US manufacturing. The first iPhone was changed to use Gorilla Glass instead of a plastic screen very shortly before launch, at Steve Jobs’ insistence.

Corning faces antitrust actions for its Gorilla Glass dominance Read More »

the-ps5-pro’s-biggest-problem-is-that-the-ps5-is-already-very-good

The PS5 Pro’s biggest problem is that the PS5 is already very good


For $700, I was hoping for a much larger leap in visual impact.

The racing stripe makes it faster. Credit: Kyle Orland

In many ways, the timing of Sony’s 2016 launch of the PS4 Pro couldn’t have been better. The slightly upgraded version of 2013’s PlayStation 4 came at a time when a wave of 4K TVs was just beginning to crest in the form of tens of millions of annual sales in the US.

Purchasing Sony’s first-ever “mid-generation” console upgrade in 2016 didn’t give original PS4 owners access to any new games, a fact that contributed to us calling the PS4 Pro “a questionable value proposition” when it launched. Still, many graphics-conscious console gamers were looking for an excuse to use the extra pixels and HDR colors on their new 4K TVs, and spending hundreds of dollars on a stopgap console years before the PS5 served that purpose well enough.

Fast-forward to today and the PS5 Pro faces an even weaker value proposition. The PS5, after all, has proven more than capable of creating excellent-looking games that take full advantage of the 4K TVs that are now practically standard in American homes. With 8K TVs still an extremely small market niche, there isn’t anything akin to what Sony’s Mike Somerset called “the most significant picture-quality increase probably since black and white went to color” when talking about 4K TV in 2016.

Front view of the PS5 Pro. Note the complete lack of a built-in disc drive on the only model available. Kyle Orland

Instead, Sony says that spending $700 on a PS5 Pro has a decidedly more marginal impact—namely, helping current PS5 gamers avoid having to choose between the smooth, 60 fps visuals of “Performance” mode and the resolution-maximizing, graphical effects-laden “Fidelity” mode in many games. The extra power of the PS5 Pro, Sony says, will let you have the best of both worlds: full 4K, ray-traced graphics and 60 fps at the same time.

While there’s nothing precisely wrong with this value proposition, there’s a severe case of diminishing returns that comes into play here. The graphical improvements between a “Performance mode” PS5 game and a “Performance Pro mode” PS5 game are small enough, in fact, that I often found it hard to reliably tell at a glance which was which.

Is it just me, or does the PS5 Pro look like a goofy puppet from this angle? The sloped mouth, the PS logo eye… you see it, right? Kyle Orland

The biggest problem with the PS5 Pro, in other words, is that the original PS5 is already too good.

Smooth operator

In announcing the PS5 Pro in September, Sony’s Mark Cerny mentioned that roughly three-quarters of PS5 owners opt for Performance mode over Fidelity mode when offered the choice on a stock PS5. It’s not hard to see why. Research shows that the vast majority of people can detect a distinct decrease in flickering or juddery animation when the frames-per-second counter is cranked up from (Fidelity mode’s) 30 fps to (Performance mode’s) 60 fps.

The extra visual smoothness is especially important in any reflex-heavy game, where every millisecond of reaction time between your eyes and your thumbs can have a dramatic impact. That reaction advantage can extend well past 60 fps, as PC gamers know all too well.

But the other reason that Performance mode is so overwhelmingly popular among PS5 players, I’d argue, is that you don’t really have to give up too much to get that frame rate-doubling boost. In most games, hopping from Fidelity mode to Performance means giving up a steady 4K image for either a (nicely upscaled) 1440p image or “Dynamic 4K” resolution (i.e., 4K that sometimes temporarily drops down lower to maintain frame rates). While some gamers swear that this difference is important to a game’s overall visual impact, most players will likely struggle to even notice that resolution dip unless they’re sitting incredibly close to a very large screen.

For the PS5 Pro, Sony is marketing “PlayStation Spectral Super Resolution,” its buzzword for an AI-driven upscaling feature that adds further clarity and detail to scenes. Sony’s original announcement of “Super Resolution” heavily used zoomed-in footage to highlight the impact of this feature on distant details. That’s likely because without that level of zoom, the effect of this resolution bump is practically unnoticeable.

Tracing those rays

The other visual upgrade often inherent in a PS5 game’s Fidelity mode is support for ray-tracing, wherein the system tracks individual light rays for more accurate reflections and light scattering off of simulated objects. Having ray-tracing enabled can sometimes lead to striking visual moments, such as when you see Spider-Man’s every move reflected in the mirrored windows of a nearby skyscraper. But as we noted in our initial PS5 review, the effect is usually a much subtler tweak to the overall “realism” of how objects come across in a scene.

Having those kinds of ray-traced images at a full 60 fps is definitely nice, but the impact tends to be muted unless a scene has a lot of highly reflective objects. Even the “Fidelity Pro” mode in some PS5 Pro games—which scales the frame rate back to 30 fps to allow the ray-tracing algorithm to model more reflections and more accurate occlusion and shadows—doesn’t create very many “wow” moments over a standard PS5 in moment-to-moment gameplay.

On the original PS5, I never hesitated to give up the (often marginal) fidelity improvements in favor of a much smoother frame rate. Getting that slightly improved fidelity on the PS5 Pro—without having to give up my beloved 60 fps—is definitely nice, but it’s far from an exciting new frontier in graphical impact.

Which is which?

When testing the PS5 Pro for this review, I had my original PS5 plugged into a secondary input on the same TV, running the same games consecutively. I’d play a section of a game in Pro mode on the PS5 Pro, then immediately switch to the PS5 running the same game in Performance mode (or vice versa). Sitting roughly six feet away from a 60-inch 4K TV, I was struggling to notice any subjective difference in overall visual quality.

I also took comparative screenshots on an original PS5 and a PS5 Pro in as close to identical circumstances as possible, some of which you can see shared in this review (be sure to blow them up to full screen on a good monitor). Flipping back and forth between two screenshots, I could occasionally make out small tangible differences—more natural shine coming off the skin of Aloy’s face in Horizon Zero Dawn Remastered, for instance, or a slight increase in detail on Ratchet’s Lombax fur. More often than not, though, I had legitimate trouble telling which screenshot came from which console without double-checking which TV input was currently active.

I’m a single reviewer with a single pair of eyes, of course. Your impression of the relative visual improvement might be very different. Luckily, if you have access to a PS5 already, you can run your own visual test just by switching between Fidelity and Performance modes on any of your current games. If you find the individual screens in Performance mode look noticeably worse than those in Fidelity mode (putting frame rate aside), then you might be in the market for a PS5 Pro. If you don’t, you can probably stop reading this review right here.

Barely a bang for your buck

Even if you’re the kind of person who appreciates the visual impact of Fidelity mode on the PS5, upgrading to the PS5 Pro isn’t exactly an instant purchase. At $700, getting a PS5 Pro is akin to a PC gamer purchasing a top-of-the-line graphics card, even though the lack of modular components means replacing your entire PS5 console rather than a single part. But while a GeForce RTX 4070 Ti could conceivably keep running new PC games for a decade or more, the PS5 Pro should be thought of as more of a stopgap until the PlayStation 6 (and its inevitable exclusive games) arrives around 2028 or so (based on past PlayStation launch spacing).

If you already have a PS5, that $700 could instead go toward the purchase of 10 full, big-budget games at launch pricing or even more intriguing indie releases. That money could also go toward more than four years of PlayStation Plus Premium and access to its library of hundreds of streaming and downloadable modern and classic PlayStation titles. Both strike me as a better use of a limited gaming budget than the slight visual upgrade you’d get from a PS5 Pro.

Even if you’re in the market for your first PS5, I’m not sure the Pro is the version I’d recommend. The $250 difference between a stock PS5 and the PS5 Pro similarly feels like it could be put to better use than the slight visual improvements on offer here. And while the addition of an extra terabyte of high-speed game storage on the PS5 Pro is very welcome, the need to buy an external disc drive peripheral for physical games on the new console may understandably rub some players the wrong way.

Back when the PlayStation 2 launched, I distinctly remember thinking that video game graphics had reached a “good enough” plateau, past which future hardware improvements would be mostly superfluous. That memory feels incredibly quaint now from the perspective of nearly two-and-a-half decades of improvements in console graphics and TV displays. Yet the PS5 Pro has me similarly feeling that the original PS5 was something of a graphical plateau, with this next half-step in graphical horsepower struggling to prove its worth.

Maybe I’ll look back in two decades and consider that feeling similarly naive, seeing the PS5 Pro as a halting first step toward yet unimagined frontiers of graphical realism. Right now, though, I’m comfortable recommending that the vast majority of console gamers spend their money elsewhere.


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

The PS5 Pro’s biggest problem is that the PS5 is already very good Read More »

“havard”-trained-spa-owner-injected-clients-with-bogus-botox,-prosecutors-say

“Havard”-trained spa owner injected clients with bogus Botox, prosecutors say

Mounting evidence

Multiple clients and employees told investigators that Fadanelli also said she is a registered nurse, which is false. She is a registered aesthetician, but aestheticians are not permitted to administer injections or prescription drugs.

Investigators set up an undercover operation where an agent went in for a consultation, and Fadanelli provided a quote for a $450 Botox treatment. Investigators also obtained videos and images of Fadanelli performing injections. And the evidence points to those injections being counterfeit, prosecutors allege. Sales records from the spa indicate that Fadanelli performed 1,631 “Botox” injections, 95 “Sculptra” injections, and 990 injections of unspecified “filler,” all totaling over $933,000. But sales records from the manufacturers of the brand name drugs failed to turn up any record of Fadanelli or anyone else from her spa ever purchasing legitimate versions of the drugs.

Despite the mounting evidence against her, Fadanelli reportedly stuck to her story, denying that she ever told anyone she was a nurse and denying ever administering any injections. “When agents asked Fadanelli if she would like to retract or modify that claim if she knew there was evidence showing that she was in fact administering such products, she reiterated that she does not administer injections.”

Ars has reached out to Fadanelli’s spa for comment and will update this story if we get a response. According to the affidavit, clients who received the allegedly bogus injections complained of bumps, tingling, and poor cosmetic results, but no infections or other adverse health outcomes.

In a press release announcing her arrest, Acting United States Attorney for Massachusetts Joshua Levy said: “For years, Ms. Fadanelli allegedly put unsuspecting patients at risk by representing herself to be a nurse and then administering thousands of illegal, counterfeit injections. … The type of deception alleged here is illegal, reckless, and potentially life-threatening.”

For a charge of illegal importation, Fadanelli faces up to 20 years in prison and a $250,000 fine. For each of two charges of knowingly selling or dispensing a counterfeit drug or counterfeit device, she faces up to 10 years in prison and a fine of $250,000.

“Havard”-trained spa owner injected clients with bogus Botox, prosecutors say Read More »