How Should the FOSS Movement Respond to Proprietary Software?

Long-time FOSS-watcher Bruce Byfield writes that while people “still dream of a completely free alternative, increasingly the emphasis in FOSS seems to be on accepting coexistence with proprietary software.”
Many, too, have always preferred the permissive BSD licenses, which permit combining FOSS and proprietary software. From some perspectives, Debian’s newest [non-free firmware] repository or the popularity of Nobara [a Fedora-based distro that adds proprietary drivers and gaming applications] is simply an admission of the true state of affairs…

On the other hand, the FOSS philosophy may be weakened because it no longer has a strong advocate. Sixteen years ago, the FSF reached a peak of authority in the 2006-2007 discussions about the structure of GPLv3, then immediately lost that authority by failing to reach a consensus. That was followed by the cancellation of Richard Stallman in 2019, which, deserved or not, had the side effect of silencing free software’s most influential representative. Today the FSF that Stallman led continues to function, with Stallman returned to the board of directors, but its actions go unreported, and it seems to speak to a much smaller group of loyalists. The Linux Foundation, with its corporate emphasis, is not an adequate substitute. In these circumstances, there is reason to wonder whether FOSS has lost its way.

While the issue has yet to reach the mainstream, Bruce Perens, who co-founded the Open Source Initiative in 1998, is already trying to describe what he calls the Post-Open Source era. Perens believes not only that FOSS licenses no longer fulfill their original purpose, but also that they no longer inform or benefit the average user. According to Perens,

“Open Source has completely failed to serve the common person. For the most part, if they use us at all they do so through a proprietary software company’s systems, like Apple iOS or Google Android, both of which use Open Source for infrastructure but the apps are mostly proprietary. The common person doesn’t know about Open Source, they don’t know about the freedoms we promote which are increasingly in their interest. Indeed, Open Source is used today to surveil and even oppress them.”

As a remedy, Perens proposes replacing licenses with contracts. He envisions companies paying for the benefits they receive from using FOSS. Compliance with each contract would be checked, renewed, and paid for yearly, and the payments would go towards funding FOSS development. Individuals and nonprofits would continue to use FOSS for free. In March 2024, Perens posted a draft Post-Open license. The draft includes a description of the contract-related files to be shipped with FOSS software, a description of the status of derivative works, how revenue is collected, and the conditions of termination. The draft has yet to be reviewed by a lawyer, but what is immediately noticeable is how it draws on both contract language and FOSS licenses to produce something different.

Byfield concludes that “free licenses are straining to respond to loopholes, and a discussion needs to be had about whether they are adequate to modern pressures.”


Mike McQuaid on 15 Years of Homebrew and Protecting Open-Source Maintainers

Despite multiple methods available across major operating systems for installing and updating applications, there remains “no real clear answer to ‘which is best,’” reports The Next Web. Each system faces unique challenges, such as outdated packages, high fees, and policy restrictions.

Enter Homebrew.

“Initially created as an option for developers to keep the dependencies they often need for developing, testing, and running their work, Homebrew has grown to be so much more in its 15-year history.” Created in 2009, Homebrew has become a leading solution for macOS, integrating with MDM tools through its enterprise-focused extension, Workbrew, to balance user freedom with corporate security needs, while maintaining its open-source roots under the guidance of Mike McQuaid.

In an interview with The Next Web’s Chris Chinchilla, McQuaid talks about the challenges and responsibilities of maintaining one of the world’s largest open-source projects. As with anything that attracts plenty of use and attention, Homebrew attracts a lot of mixed and extreme opinions, and processing and filtering those requires a tough outlook, something Mike has spoken about in numerous interviews and at conferences.

“As a large project, you get a lot of hate from people. Either people are just frustrated because they hit a bug, or because you changed something and they didn’t read the release notes, and now something’s broken,” Mike says when I ask him how he copes with the constant influx of communication. “There are a lot of entitled, noisy users in open source who contribute very little and like to shout at people and make them feel bad. One of my strengths is that I have very little time for those people, and I just insta-block them or close their issues.”

More crucially, an open-source project is often managed and maintained by a group of people. Homebrew has several dozen maintainers and nearly one thousand total contributors. Mike explains that all of these people also deserve to be treated with respect by users: “I’m also super protective of my maintainers, and I don’t want them to be treated that way either.”

But despite these features and its widespread use, one thing Homebrew has always lacked is the ability to work well with teams of users. This is where Workbrew, a company Mike founded with two other Homebrew maintainers, steps in. […] Workbrew ties together various Homebrew features with custom glue to create a workflow for setting up and maintaining Mac machines. It adds new features that core Homebrew maintainers had no interest in adding, such as admin and reporting dashboards for a computing fleet, while bringing more general improvements to the core project.
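The interview doesn’t detail Workbrew’s “custom glue,” but core Homebrew already ships a team-oriented primitive that fleet setups commonly build on: brew bundle, which installs everything declared in a Brewfile manifest. A minimal sketch (the entries are arbitrary illustrations, not anything Workbrew prescribes):

```ruby
# Brewfile -- a declarative manifest consumed by `brew bundle install`.
# Checking one of these into a repo lets every machine converge on the
# same toolset with a single command. Entries below are examples only.
brew "git"        # a formula: a command-line package
brew "node"       # another formula, e.g. a runtime dependency
cask "firefox"    # a cask: a macOS GUI application
```

Running `brew bundle install` in the directory containing the Brewfile installs anything missing, and `brew bundle check` reports whether a machine already satisfies the manifest.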

Bearing in mind Mike’s motivation to keep Homebrew in the “traditional open source” model, I asked him how he intended to keep the needs of the project and the business separated and satisfied. “We’ve seen a lot of churn in the last few years from companies that made licensing decisions five or ten years ago, which have now changed quite dramatically and have generated quite a lot of community backlash,” Mike said. “I’m very sensitive to that, and I am a little bit of an open-source purist in that I still consider the Open Source Initiative’s definition of open source to be what open source means. If you don’t comply with that, then you can be another thing, but I think you’re probably not open source.”

And regarding keeping his and his co-founders’ dual roles separated, Mike states, “I’m the CTO and co-founder of Workbrew, and I’m the project leader of Homebrew. The project leader of Homebrew is an elected position.” Every year, the maintainers and the community elect a candidate. “But then, with the Homebrew maintainers working with us on Workbrew, one of the things I say is that when we’re working on Workbrew, I’m your boss now, but when we work on Homebrew, I’m not your boss,” Mike adds. “If you think I’m saying something and it’s a bad idea, you tell me it’s a bad idea, right?” The company is keeping its early progress in a private beta for now, but you can expect an announcement soon. As for what’s happening with Homebrew? Well, in the best “open source” way, that’s up to the community and always will be.


Nvidia’s Open-Source Linux Kernel Driver Performing At Parity To Proprietary Driver

With its new R555 Linux driver series, Nvidia has significantly improved its open-source GPU kernel modules, achieving near parity with its proprietary drivers. Phoronix’s Michael Larabel reports: The NVIDIA open-source kernel driver modules shipped by their driver installer and also available via their GitHub repository are in great shape. With the R555 series the support and performance is basically at parity of their open-source kernel modules compared to their proprietary kernel drivers. […] Across a range of different GPU-accelerated creator workloads, the performance of the open-source NVIDIA kernel modules matched that of the proprietary driver. No loss in performance going the open-source kernel driver route. Across various professional graphics workloads, both the NVIDIA RTX A2000 and A4000 graphics cards were also achieving the same performance whether on the open-source MIT/GPLv2 driver or using NVIDIA’s classic proprietary driver.

Across all of the tests I carried out using the NVIDIA 555 stable series Linux driver, the open-source NVIDIA kernel modules were able to achieve the same performance as the classic proprietary driver. Also important is that there was no increased power use or other difference in power management when switching over to the open-source NVIDIA kernel modules.

It’s great seeing how far the NVIDIA open-source kernel modules have evolved, and with the upcoming NVIDIA 560 Linux driver series they will become the default on supported GPUs. Moving forward with Blackwell and beyond, NVIDIA will enable new GPU support only in its open-source kernel drivers, leaving the proprietary kernel drivers to older hardware. Tests I have done using NVIDIA GeForce RTX 40 graphics cards with Linux gaming workloads between the MIT/GPL and proprietary kernel drivers have yielded similar (boring but good) results: the same performance, with no loss going the open-source route. Phoronix’s article includes the full performance charts.
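For anyone curious which flavor their own machine is running, the difference shows up in the kernel module’s metadata: the open modules carry a dual MIT/GPL license string, while the classic proprietary modules report a proprietary NVIDIA license. A small sketch (assuming the standard modinfo utility and an installed NVIDIA driver):

```python
# Report whether the installed "nvidia" kernel module is the open
# MIT/GPL flavor or the classic proprietary one, based on its license
# field. Assumes `modinfo` is available and an NVIDIA driver is installed.
import subprocess

license_str = subprocess.run(
    ["modinfo", "--field", "license", "nvidia"],
    capture_output=True, text=True, check=True,
).stdout.strip()

flavor = "open" if "GPL" in license_str else "proprietary"
print(f"nvidia module license: {license_str!r} -> {flavor} kernel driver")
```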


Developer Successfully Boots Up Linux on Google Drive

It’s FOSS writes:
When it comes to Linux, we get to see some really cool, and sometimes quirky projects (read Hannah Montana Linux) that try to show off what’s possible, and that’s not a bad thing. One such quirky undertaking has recently surfaced, which sees a sophomore trying to one-up their friend, who had booted Linux off NFS. With their work, they have been able to run Arch Linux on Google Drive.

Their approach centered on FUSE (which allows running filesystem code in userspace). The developer’s blog post explains that when Linux boots, “the kernel unpacks a temporary filesystem into RAM which has the tools to mount the real filesystem… it’s very helpful! We can mount a FUSE filesystem in that step and boot normally…. ”

“Thankfully, Dracut makes it easy enough to build a custom initramfs… I decide to build this on top of Arch Linux because it’s relatively lightweight and I’m familiar with how it works.”
Doing testing in an Amazon S3 container, they built an EFI image — then spent days trying to enable networking… And the adventure continues. (“Would it be possible to manually switch the root without a specialized system call? What if I just chroot?”) After they’d made a few more tweaks, “I sit there, in front of my computer, staring. It can’t have been that easy, can it? Surely, this is a profane act, and the spirit of Dennis Ritchie ought’ve stopped me, right? Nobody stopped me, so I kept going…”

I build the unified EFI file, throw it on a USB drive under /BOOT/EFI, and stick it in my old server… This is my magnum opus. My Great Work. This is the mark I will leave on this planet long after I am gone: The Cloud Native Computer.
Despite how silly this project is, there are a few less-silly uses I can think of, like booting Linux off of SSH, or perhaps booting Linux off of a Git repository and tracking every change in Git using gitfs. The possibilities are endless, despite the middling usefulness.

If there is anything I know about technology, it’s that moving everything to The Cloud is the current trend. As such, I am prepared to commercialize this for any company wishing to leave their unreliable hardware storage behind and move entirely to The Cloud. Please request a quote if you are interested in True Cloud Native Computing.

Unfortunately, I don’t know what to do next with this. Maybe I should install Nix?
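For readers unfamiliar with the mechanism the whole stunt rests on: FUSE lets an ordinary userspace process answer the kernel’s filesystem calls, which is what makes “root filesystem on Google Drive” even conceivable. Below is a minimal read-only sketch in Python using the third-party fusepy package; it illustrates FUSE generally and is not the developer’s actual Google Drive driver:

```python
# Minimal read-only FUSE filesystem using fusepy (pip install fusepy).
# The kernel forwards filesystem calls to this process, which answers
# them from memory here; the blog post answers them from Google Drive.
import errno
import stat
import time

from fuse import FUSE, FuseOSError, Operations


class HelloFS(Operations):
    """Serves a single in-memory file, /hello.txt."""

    DATA = b"served from userspace\n"

    def getattr(self, path, fh=None):
        now = time.time()
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2,
                    "st_ctime": now, "st_mtime": now, "st_atime": now}
        if path == "/hello.txt":
            return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                    "st_size": len(self.DATA),
                    "st_ctime": now, "st_mtime": now, "st_atime": now}
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", "..", "hello.txt"]

    def read(self, path, size, offset, fh):
        return self.DATA[offset:offset + size]


if __name__ == "__main__":
    # The mount point is a placeholder; foreground=True keeps the process
    # attached, and ro=True mounts the filesystem read-only.
    FUSE(HelloFS(), "/mnt/hellofs", foreground=True, ro=True)
```

Mounting this and running `cat /mnt/hellofs/hello.txt` exercises the same kernel-to-userspace round trip the post’s initramfs performs before switching root.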


OIN Expands Linux Patent Protection Yet Again (But Not To AI)

Steven Vaughan-Nichols reports via ZDNet: While Linux and open-source software (OSS) are no longer constantly under intellectual property (IP) attacks, the Open Invention Network (OIN) patent consortium still stands guard over its patents. Now, OIN, the largest patent non-aggression community, has expanded its protection once again by updating its Linux System definition. Covering more than just Linux, the Linux System definition also protects adjacent open-source technologies. In the past, protection was expanded to Android, Kubernetes, and OpenStack. The OIN accomplishes this by providing a shared defensive patent pool of over 3 million patents from over 3,900 community members. OIN members include Amazon, Google, Microsoft, and essentially all Linux-based companies.

This latest update extends OIN’s existing patent risk mitigation efforts to cloud-native computing and enterprise software. In the cloud computing realm, OIN has added patent coverage for projects such as Istio, Falco, Argo, Grafana, and Spire. For enterprise computing, packages such as Apache Atlas and Apache Solr — used for data management and search at scale, respectively — are now protected. The update also enhances patent protection for the Internet of Things (IoT), networking, and automotive technologies. OpenThread and packages such as agl-compositor and kuksa.val have been added to the Linux System definition. In the embedded systems space, OIN has supplemented its coverage of technologies like OpenEmbedded by adding OpenAMP and Matter, the home IoT standard. OIN has also included open hardware development tools such as Edalize, cocotb, Amaranth, and Migen, building upon its existing coverage of hardware design tools like Verilator and FuseSoC.

Keith Bergelt, OIN’s CEO, emphasized the importance of this update, stating, “Linux and other open-source software projects continue to accelerate the pace of innovation across a growing number of industries. By design, periodic expansion of OIN’s Linux System definition enables OIN to keep pace with OSS’s growth.” […] Looking ahead, Bergelt said, “We made this conscious decision not to include AI. It’s so dynamic. We wait until we see what AI programs have significant usage and adoption levels.” This is how the OIN has always worked. The consortium takes its time to ensure it extends its protection to projects that will be around for the long haul. The OIN practices patent non-aggression in core Linux and adjacent open-source technologies by cross-licensing Linux System patents among members on a royalty-free basis. When OIN signees come under patent attack, the OIN can spring into action.


Why a ‘Frozen’ Distribution Linux Kernel Isn’t the Safest Choice for Security

Jeremy Allison — Sam (Slashdot reader #8,157) is a Distinguished Engineer at Rocky Linux creator CIQ. This week he published a blog post responding to promises of Linux distros “carefully selecting only the most polished and pristine open source patches from the raw upstream open source Linux kernel in order to create the secure distribution kernel you depend on in your business.”

But do carefully curated software patches (applied to a known “frozen” Linux kernel) really bring greater security? “After a lot of hard work and data analysis by my CIQ kernel engineering colleagues Ronnie Sahlberg and Jonathan Maple, we finally have an answer to this question. It’s no.”

The data shows that “frozen” vendor Linux kernels, created by branching off a release point and then using a team of engineers to select specific patches to back-port to that branch, are buggier than the upstream “stable” Linux kernel created by Greg Kroah-Hartman. How can this be? The full details are in the linked white paper, but the results of the analysis couldn’t be clearer:
– A “frozen” vendor kernel is an insecure kernel. A vendor kernel released later in the release schedule is doubly so.
– The number of known bugs in a “frozen” vendor kernel grows over time. The growth in the number of bugs even accelerates over time.
– There are too many open bugs in these kernels for it to be feasible to analyze or even classify them….

[T]hinking that you’re making a more secure choice by using a “frozen” vendor kernel isn’t a luxury we can still afford to believe. As Greg Kroah-Hartman explicitly said in his talk “Demystifying the Linux Kernel Security Process”: “If you are not using the latest stable / longterm kernel, your system is insecure.”

CIQ describes its report as “a count of all the known bugs from an upstream kernel that were introduced, but never fixed in RHEL 8.”
For the most recent RHEL 8 kernels, at the time of writing, these counts are:
– RHEL 8.6: 5034
– RHEL 8.7: 4767
– RHEL 8.8: 4594

In RHEL 8.8 we have a total of 4594 known bugs with fixes that exist upstream, but for which known fixes have not been back-ported to RHEL 8.8. The situation is worse for RHEL 8.6 and RHEL 8.7, as they cut off back-porting earlier than RHEL 8.8, but of course that did not prevent new bugs from being discovered and fixed upstream….
This whitepaper is not meant as a criticism of the engineers working at any Linux vendors who are dedicated to producing high quality work in their products on behalf of their customers. This problem is extremely difficult to solve. We know this is an open secret amongst many in the industry and would like to put concrete numbers describing the problem to encourage discussion. Our hope is for Linux vendors and the community as a whole to rally behind the kernel.org stable kernels as the best long term supported solution. As engineers, we would prefer this to allow us to spend more time fixing customer specific bugs and submitting feature improvements upstream, rather than the endless grind of backporting upstream changes into vendor kernels, a practice which can introduce more bugs than it fixes.
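CIQ’s actual tooling isn’t shown in the excerpt, but the shape of the analysis, finding upstream commits that carry a “Fixes:” tag yet have no equivalent patch on a vendor branch, can be approximated with ordinary git commands. A rough sketch (branch names are placeholders, and this is not the white paper’s methodology):

```python
# Approximate count of upstream fixes never back-ported to a vendor branch.
# `git cherry A B` lists commits on B, prefixing "+" when no patch with an
# equivalent diff exists on A; intersecting those with commits whose
# messages carry a "Fixes:" tag yields the missing-fix set.
import subprocess

def run_git(repo, *args):
    """Run a git command inside `repo` and return its stdout."""
    return subprocess.run(["git", "-C", repo, *args],
                          capture_output=True, text=True, check=True).stdout

def missing_fixes(repo, vendor_branch, stable_branch):
    not_backported = {line[2:] for line in
                      run_git(repo, "cherry", vendor_branch,
                              stable_branch).splitlines()
                      if line.startswith("+ ")}
    tagged_fixes = set(run_git(repo, "log", "--format=%H",
                               "--grep=^Fixes:", stable_branch).split())
    return not_backported & tagged_fixes

if __name__ == "__main__":
    # Placeholder repo and branch names -- substitute real local refs.
    count = len(missing_fixes("linux", "vendor/frozen-8.8", "linux-5.4.y"))
    print(f"{count} upstream fixes have no equivalent on the vendor branch")
```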
ZDNet calls it “an open secret in the Linux community.”

It’s not enough to use a long-term support release. You must use the most up-to-date release to be as secure as possible. Unfortunately, almost no one does that. Nevertheless, as Google Linux kernel engineer Kees Cook explained, “So what is a vendor to do? The answer is simple, if painful: continuously update to the latest kernel release, either major or stable.” Why? As Kroah-Hartman explained, “Any bug has the potential of being a security issue at the kernel level….”
Although [CIQ’s] programmers examined RHEL 8.8 specifically, this is a general problem. They would have found the same results if they had examined SUSE, Ubuntu, or Debian Linux. Rolling-release Linux distros such as Arch, Gentoo, and OpenSUSE Tumbleweed constantly release the latest updates, but they’re not used in businesses.

Jeremy Allison’s post points out that “the Linux kernel used by Android devices is based on the upstream kernel and also has a stable internal kernel ABI, so this isn’t an insurmountable problem…”


Google Removes RISC-V Support From Android Common Kernel, Denies Abandoning Its Efforts

Mishaal Rahman reports via Android Authority: Earlier today, a Senior Staff Software Engineer at Google who, according to their LinkedIn, leads the Android Systems Team and works on Android’s Linux kernel fork, submitted a series of patches to AOSP that “remove ACK’s support for riscv64.” The description of these patches states that “support for riscv64 GKI kernels is discontinued.”

ACK stands for Android Common Kernel and refers to the downstream branches of the official kernel.org Linux kernels that Google maintains. The ACK is basically Linux plus some “patches of interest to the Android community that haven’t been merged into mainline or Long Term Supported (LTS) kernels.” There are multiple ACK branches, including android-mainline, which is the primary development branch that is forked into “GKI” kernel branches that correspond to a particular combination of supported Linux kernel and Android OS version. GKI stands for Generic Kernel Image and refers to a kernel that’s built from one of these branches. Every certified Android device ships with a kernel based on one of these GKI branches, as Google currently does not certify Android devices that ship with a mainline Linux kernel build.

Since these patches remove RISC-V kernel support, RISC-V kernel build support, and RISC-V emulator support, any companies looking to compile a RISC-V build of Android right now would need to create and maintain their own fork of Linux with the requisite ACK and RISC-V patches. Given that Google currently only certifies Android builds that ship with a GKI kernel built from an ACK branch, that means we likely won’t see certified builds of Android on RISC-V hardware anytime soon. Our initial interpretation of these patches was that Google was preparing to kill off RISC-V support in Android since that was the most obvious conclusion. However, a spokesperson for Google told us this: “Android will continue to support RISC-V. Due to the rapid rate of iteration, we are not ready to provide a single supported image for all vendors. This particular series of patches removes RISC-V support from the Android Generic Kernel Image (GKI).” Based on Google’s statement, Rahman suggests that “there’s still a ton of work that needs to be done before Android is ready for RISC-V.”

“Even once it’s ready, Google will need to redo the work to add RISC-V support in the kernel anyway. At the very least, Google’s decision likely means that we might need to wait even longer than expected to see commercial Android devices running on a RISC-V chip.”


Redis To Adopt ‘Source-Available Licensing’ Starting With Next Version

Longtime Slashdot reader jgulla shares an announcement from Redis: Beginning today, all future versions of Redis will be released with source-available licenses. Starting with Redis 7.4, Redis will be dual-licensed under the Redis Source Available License (RSALv2) and Server Side Public License (SSPLv1). Consequently, Redis will no longer be distributed under the three-clause Berkeley Software Distribution (BSD) license. The new source-available licenses allow us to sustainably provide permissive use of our source code.

We’re leading Redis into its next phase of development as a real-time data platform with a unified set of clients, tools, and core Redis product offerings. The Redis source code will continue to be freely available to developers, customers, and partners through Redis Community Edition. Future Redis source-available releases will unify core Redis with Redis Stack, including search, JSON, vector, probabilistic, and time-series data models in one free, easy-to-use package as downloadable software. This will allow anyone to easily use Redis across a variety of contexts, including as a high-performance key/value and document store, a powerful query engine, and a low-latency vector database powering generative AI applications. […]

Under the new license, cloud service providers hosting Redis offerings will no longer be permitted to use the source code of Redis free of charge. For example, cloud service providers will be able to deliver Redis 7.4 only after agreeing to licensing terms with Redis, the maintainers of the Redis code. These agreements will underpin support for existing integrated solutions and provide full access to forthcoming Redis innovations. In practice, nothing changes for the Redis developer community who will continue to enjoy permissive licensing under the dual license. At the same time, all the Redis client libraries under the responsibility of Redis will remain open source licensed. Redis will continue to support its vast partner ecosystem — including managed service providers and system integrators — with exclusive access to all future releases, updates, and features developed and delivered by Redis through its Partner Program. There is no change for existing Redis Enterprise customers.
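The announcement’s pitch, one package serving as key/value store, document store, and query engine, maps onto commands that existing clients already expose. A minimal sketch using the redis-py client (assumes a local Redis Stack server with the JSON module loaded; names and values are arbitrary examples):

```python
# Redis as key/value store and JSON document store via redis-py.
# Assumes a Redis Stack server listening on localhost:6379.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Classic key/value usage: set a key with a 60-second expiry, read it back.
r.set("session:42", "alice", ex=60)
print(r.get("session:42"))  # -> "alice"

# Document usage via the JSON data model bundled in Redis Stack.
r.json().set("user:42", "$", {"name": "alice", "plan": "free", "logins": 3})
print(r.json().get("user:42", "$.name"))  # -> ["alice"]

# Atomic in-document counter update.
r.json().numincrby("user:42", "$.logins", 1)
```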


OpenTTD (Unofficial Remake of ‘Transport Tycoon Deluxe’ Game) Turns 20

In 1995 Scottish video game designer Chris Sawyer created the business simulator game Transport Tycoon Deluxe — and within four years, Wikipedia notes, work had begun on an open-source remake that’s still being actively developed. “According to a study of the 61,154 open-source projects on SourceForge in the period between 1999 and 2005, OpenTTD ranked as the 8th most active open-source project to receive patches and contributions. In 2004, development moved to their own server.”

Long-time Slashdot reader orudge says he’s been involved for almost 25 years. “Exactly 21 years ago, I received an ICQ message (look it up, kids) out of the blue from a guy named Ludvig Strigeus (nicknamed Ludde).”

“Hello, you probably don’t know me, but I’ve been working on a project to clone Transport Tycoon Deluxe for a while,” he said, more or less… Ludde made more progress with the project [written in C] over the coming year, and it looks like we even attempted some multiplayer games (not too reliable, especially over my dial-up connection at the time). Eventually, when he was happy with what he had created, he agreed to allow me to release the game as open source. Coincidentally, this happened exactly a year after I’d first spoken to him, on the 6th March 2004…

Things really got going after this, and a community started to form with enthusiastic developers fixing bugs, adding in new features, and smoothing off the rough edges. Ludde was, I think, a bit taken aback by how popular it proved, and even rejoined the development effort for a while. A read through the old changelogs reveals just how many features were added over a very short period of time. Quick wins like higher vehicle limits came in very quickly, and support for TTDPatch’s NewGRF format started to be functional just four months later. Large maps, improved multiplayer, better pathfinders, improved TTDPatch compatibility, and of course, ports to a great many different operating systems, such as Mac OS X, BeOS, MorphOS and OS/2. It was a very exciting time to be a TTD fan!

Within six years, ambitious projects to create free replacements for the original TTD graphics, sounds, and music sets were complete, and OpenTTD finally had its 1.0 release. And while we may not have the same frantic addition of new features we had in 2004, there have still been massive improvements to the code and plenty of exciting new features over the years, with major releases every year since 2008. The move to GitHub in 2018 and the release of OpenTTD on Steam in 2021 have also re-energised development efforts, with thousands of people now enjoying playing the game regularly. And development shows no signs of slowing down, with the upcoming OpenTTD 14.0 release including over 40 new features!
“Personally, I would like to say thank you to everyone who has supported OpenTTD development over the past two decades…” they write, adding “Finally, of course, I’d like to thank you, the players! None of us would be here if people weren’t still playing the game.

“Seeing how the first twenty years have gone, I can’t wait to see what the next twenty years have in store. :)”
