Nvidia’s Open-Source Linux Kernel Driver Performing At Parity With Proprietary Driver

Nvidia’s new R555 Linux driver series has significantly improved its open-source GPU kernel driver modules, achieving near parity with its proprietary drivers. Phoronix’s Michael Larabel reports: The NVIDIA open-source kernel driver modules shipped by their driver installer and also available via their GitHub repository are in great shape. With the R555 series, the support and performance of their open-source kernel modules are basically at parity with their proprietary kernel drivers. […] Across a range of different GPU-accelerated creator workloads, the performance of the open-source NVIDIA kernel modules matched that of the proprietary driver. There was no loss in performance going the open-source kernel driver route. Across various professional graphics workloads, both the NVIDIA RTX A2000 and A4000 graphics cards also achieved the same performance whether on the open-source MIT/GPLv2 driver or on NVIDIA’s classic proprietary driver.

Across all of the tests I carried out using the NVIDIA 555 stable series Linux driver, the open-source NVIDIA kernel modules were able to achieve the same performance as the classic proprietary driver. Also important is that there was no increased power use or other difference in power management when switching over to the open-source NVIDIA kernel modules.

It’s great seeing how far the NVIDIA open-source kernel modules have evolved, and with the upcoming NVIDIA 560 Linux driver series they will become the default on supported GPUs. Moving forward with Blackwell and beyond, NVIDIA will enable new GPU support only in its open-source kernel drivers, leaving the proprietary kernel drivers to older hardware. Tests I have done using NVIDIA GeForce RTX 40 graphics cards with Linux gaming workloads between the MIT/GPLv2 and proprietary kernel drivers have yielded similar (boring but good) results: the same performance, with no loss going the open-source route. You can view Phoronix’s performance results in charts here, here, and here.

Read more of this story at Slashdot.

Developer Successfully Boots Up Linux on Google Drive

It’s FOSS writes:
When it comes to Linux, we get to see some really cool, and sometimes quirky projects (read Hannah Montana Linux) that try to show off what’s possible, and that’s not a bad thing. One such quirky undertaking has recently surfaced, which sees a sophomore trying to one-up their friend, who had booted Linux off NFS. With their work, they have been able to run Arch Linux on Google Drive.

Their solution hinged on FUSE (which allows running filesystem code in userspace). The developer’s blog post explains that when Linux boots, “the kernel unpacks a temporary filesystem into RAM which has the tools to mount the real filesystem… it’s very helpful! We can mount a FUSE filesystem in that step and boot normally….”
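The excerpt doesn’t include the project’s actual code, but the mechanism the quote describes is straightforward to sketch. Below is a minimal, hypothetical read-only FUSE filesystem in Python using the third-party fusepy package, with an in-memory dict standing in for the Google Drive backend; once filesystem code like this runs in userspace, the kernel doesn’t care whether the bytes come from RAM or the cloud:

    #!/usr/bin/env python3
    # Minimal sketch of a userspace (FUSE) filesystem -- not the blog's actual
    # code. Requires the third-party fusepy package: pip install fusepy
    import errno
    import stat
    import sys
    import time

    from fuse import FUSE, FuseOSError, Operations

    # Stand-in for a cloud backend such as the Google Drive API.
    FILES = {"/hello.txt": b"Hello from userspace!\n"}

    class DictFS(Operations):
        def getattr(self, path, fh=None):
            now = time.time()
            if path == "/":
                return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2,
                        "st_ctime": now, "st_mtime": now, "st_atime": now}
            if path in FILES:
                return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                        "st_size": len(FILES[path]),
                        "st_ctime": now, "st_mtime": now, "st_atime": now}
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            return [".", ".."] + [name.lstrip("/") for name in FILES]

        def read(self, path, size, offset, fh):
            return FILES[path][offset:offset + size]

    if __name__ == "__main__":
        # Usage: python3 dictfs.py /mnt/point
        FUSE(DictFS(), sys.argv[1], foreground=True, ro=True)

Mounted from the initramfs (say, at a hypothetical /sysroot), a filesystem like this can then be booted into like any other root filesystem.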

“Thankfully, Dracut makes it easy enough to build a custom initramfs… I decide to build this on top of Arch Linux because it’s relatively lightweight and I’m familiar with how it works.”
Doing testing in an Amazon S3 bucket, they built an EFI image, then spent days trying to enable networking… And the adventure continues. (“Would it be possible to manually switch the root without a specialized system call? What if I just chroot?”) After they’d made a few more tweaks, “I sit there, in front of my computer, staring. It can’t have been that easy, can it? Surely, this is a profane act, and the spirit of Dennis Ritchie ought’t’ve stopped me, right? Nobody stopped me, so I kept going…”
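The blog answers that question by experiment, and while the excerpt doesn’t show the commands, the “just chroot” idea can be sketched in a few hypothetical lines (a conventional init handoff would instead use switch_root or pivot_root, which also free the initramfs; plain chroot skips that cleanup, which is part of what makes the question so irreverent):

    # Hypothetical sketch of the "what if I just chroot?" musing -- not the
    # author's actual code. Must run as root in the initramfs, after the FUSE
    # root filesystem has been mounted at /sysroot.
    import os

    os.chroot("/sysroot")                    # make the FUSE mount the new root
    os.chdir("/")                            # drop any handle on the old root
    os.execv("/sbin/init", ["/sbin/init"])   # replace this process with the real init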

I build the unified EFI file, throw it on a USB drive under /BOOT/EFI, and stick it in my old server… This is my magnum opus. My Great Work. This is the mark I will leave on this planet long after I am gone: The Cloud Native Computer.
Despite how silly this project is, there are a few less-silly uses I can think of, like booting Linux off of SSH, or perhaps booting Linux off of a Git repository and tracking every change in Git using gitfs. The possibilities are endless, despite the middling usefulness.

If there is anything I know about technology, it’s that moving everything to The Cloud is the current trend. As such, I am prepared to commercialize this for any company wishing to leave their unreliable hardware storage behind and move entirely to The Cloud. Please request a quote if you are interested in True Cloud Native Computing.

Unfortunately, I don’t know what to do next with this. Maybe I should install Nix?

Read more of this story at Slashdot.

OIN Expands Linux Patent Protection Yet Again (But Not To AI)

Steven Vaughan-Nichols reports via ZDNet: While Linux and open-source software (OSS) are no longer constantly under intellectual property (IP) attacks, the Open Invention Network (OIN) patent consortium still stands guard over its patents. Now, OIN, the largest patent non-aggression community, has expanded its protection once again by updating its Linux System definition. Covering more than just Linux, the Linux System definition also protects adjacent open-source technologies. In the past, protection was expanded to Android, Kubernetes, and OpenStack. The OIN accomplishes this by providing a shared defensive patent pool of over 3 million patents from over 3,900 community members. OIN members include Amazon, Google, Microsoft, and essentially all Linux-based companies.

This latest update extends OIN’s existing patent risk mitigation efforts to cloud-native computing and enterprise software. In the cloud computing realm, OIN has added patent coverage for projects such as Istio, Falco, Argo, Grafana, and Spire. For enterprise computing, packages such as Apache Atlas and Apache Solr — used for data management and search at scale, respectively — are now protected. The update also enhances patent protection for the Internet of Things (IoT), networking, and automotive technologies. OpenThread and packages such as agl-compositor and kuksa.val have been added to the Linux System definition. In the embedded systems space, OIN has supplemented its coverage of technologies like OpenEmbedded by adding OpenAMP and Matter, the home IoT standard. OIN has also included open hardware development tools such as Edalize, cocotb, Amaranth, and Migen, building upon its existing coverage of hardware design tools like Verilator and FuseSoC.

Keith Bergelt, OIN’s CEO, emphasized the importance of this update, stating, “Linux and other open-source software projects continue to accelerate the pace of innovation across a growing number of industries. By design, periodic expansion of OIN’s Linux System definition enables OIN to keep pace with OSS’s growth.” […] Looking ahead, Bergelt said, “We made this conscious decision not to include AI. It’s so dynamic. We wait until we see what AI programs have significant usage and adoption levels.” This is how the OIN has always worked. The consortium takes its time to ensure it extends its protection to projects that will be around for the long haul. The OIN practices patent non-aggression in core Linux and adjacent open-source technologies by cross-licensing their Linux System patents to one another on a royalty-free basis. When OIN signees are attacked because of their patents, the OIN can spring into action.

Read more of this story at Slashdot.

Why a ‘Frozen’ Distribution Linux Kernel Isn’t the Safest Choice for Security

Jeremy Allison — Sam (Slashdot reader #8,157) is a Distinguished Engineer at Rocky Linux creator CIQ. This week he published a blog post responding to promises of Linux distros “carefully selecting only the most polished and pristine open source patches from the raw upstream open source Linux kernel in order to create the secure distribution kernel you depend on in your business.”

But do carefully curated software patches (applied to a known “frozen” Linux kernel) really bring greater security? “After a lot of hard work and data analysis by my CIQ kernel engineering colleagues Ronnie Sahlberg and Jonathan Maple, we finally have an answer to this question. It’s no.”

The data shows that “frozen” vendor Linux kernels, created by branching off a release point and then using a team of engineers to select specific patches to back-port to that branch, are buggier than the upstream “stable” Linux kernel created by Greg Kroah-Hartman. How can this be? If you want the full details, the link to the white paper is here. But the results of the analysis couldn’t be clearer.
– A “frozen” vendor kernel is an insecure kernel. A vendor kernel released later in the release schedule is doubly so.
– The number of known bugs in a “frozen” vendor kernel grows over time. The growth in the number of bugs even accelerates over time.
– There are too many open bugs in these kernels for it to be feasible to analyze or even classify them….

[T]hinking that you’re making a more secure choice by using a “frozen” vendor kernel isn’t a luxury we can still afford to believe. As Greg Kroah-Hartman explicitly said in his talk “Demystifying the Linux Kernel Security Process”: “If you are not using the latest stable / longterm kernel, your system is insecure.”

CIQ describes its report as “a count of all the known bugs from an upstream kernel that were introduced, but never fixed in RHEL 8.”
For the most recent RHEL 8 kernels, at the time of writing, these counts are:
– RHEL 8.6: 5034
– RHEL 8.7: 4767
– RHEL 8.8: 4594

In RHEL 8.8 we have a total of 4594 known bugs with fixes that exist upstream, but for which known fixes have not been back-ported to RHEL 8.8. The situation is worse for RHEL 8.6 and RHEL 8.7 as they cut off back-porting earlier than RHEL 8.8 but of course that did not prevent new bugs from being discovered and fixed upstream….
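The arithmetic behind those counts is simple to illustrate. A toy sketch (hypothetical commit IDs, not CIQ’s actual tooling or data): collect the bug-fix commits that landed in the upstream stable kernel after the vendor’s branch point, subtract the fixes the vendor back-ported, and what remains is the count of bugs known to be fixed upstream but still open in the “frozen” kernel.

    # Toy illustration of the whitepaper's counting approach -- hypothetical
    # commit IDs, not CIQ's tooling or data.
    def missing_fixes(upstream_fixes: set[str], backported: set[str]) -> set[str]:
        """Fixes that exist upstream but were never back-ported to the vendor kernel."""
        return upstream_fixes - backported

    upstream = {"abc123", "def456", "789aaa", "bbb012"}  # fixes in upstream stable
    vendor = {"def456"}                                  # fixes back-ported by the vendor

    print(len(missing_fixes(upstream, vendor)))  # -> 3 known bugs left unfixed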
This whitepaper is not meant as a criticism of the engineers working at any Linux vendors who are dedicated to producing high quality work in their products on behalf of their customers. This problem is extremely difficult to solve. We know this is an open secret amongst many in the industry and would like to put concrete numbers describing the problem to encourage discussion. Our hope is for Linux vendors and the community as a whole to rally behind the kernel.org stable kernels as the best long term supported solution. As engineers, we would prefer this to allow us to spend more time fixing customer specific bugs and submitting feature improvements upstream, rather than the endless grind of backporting upstream changes into vendor kernels, a practice which can introduce more bugs than it fixes.
ZDNet calls it “an open secret in the Linux community.”

It’s not enough to use a long-term support release. You must use the most up-to-date release to be as secure as possible. Unfortunately, almost no one does that. Nevertheless, as Google Linux kernel engineer Kees Cook explained, “So what is a vendor to do? The answer is simple, if painful: continuously update to the latest kernel release, either major or stable.” Why? As Kroah-Hartman explained, “Any bug has the potential of being a security issue at the kernel level….”
Although [CIQ’s] programmers examined RHEL 8.8 specifically, this is a general problem. They would have found the same results if they had examined SUSE, Ubuntu, or Debian Linux. Rolling-release Linux distros such as Arch, Gentoo, and OpenSUSE Tumbleweed constantly release the latest updates, but they’re rarely used in businesses.

Jeremy Allison’s post points out that “the Linux kernel used by Android devices is based on the upstream kernel and also has a stable internal kernel ABI, so this isn’t an insurmountable problem…”

Read more of this story at Slashdot.

Google Removes RISC-V Support From Android Common Kernel, Denies Abandoning Its Efforts

Mishaal Rahman reports via Android Authority: Earlier today, a Senior Staff Software Engineer at Google who, according to their LinkedIn, leads the Android Systems Team and works on Android’s Linux kernel fork, submitted a series of patches to AOSP that “remove ACK’s support for riscv64.” The description of these patches states that “support for riscv64 GKI kernels is discontinued.”

ACK stands for Android Common Kernel and refers to the downstream branches of the official kernel.org Linux kernels that Google maintains. The ACK is basically Linux plus some “patches of interest to the Android community that haven’t been merged into mainline or Long Term Supported (LTS) kernels.” There are multiple ACK branches, including android-mainline, which is the primary development branch that is forked into “GKI” kernel branches that correspond to a particular combination of supported Linux kernel and Android OS version. GKI stands for Generic Kernel Image and refers to a kernel that’s built from one of these branches. Every certified Android device ships with a kernel based on one of these GKI branches, as Google currently does not certify Android devices that ship with a mainline Linux kernel build.

Since these patches remove RISC-V kernel support, RISC-V kernel build support, and RISC-V emulator support, any companies looking to compile a RISC-V build of Android right now would need to create and maintain their own fork of Linux with the requisite ACK and RISC-V patches. Given that Google currently only certifies Android builds that ship with a GKI kernel built from an ACK branch, that means we likely won’t see certified builds of Android on RISC-V hardware anytime soon. Our initial interpretation of these patches was that Google was preparing to kill off RISC-V support in Android since that was the most obvious conclusion. However, a spokesperson for Google told us this: “Android will continue to support RISC-V. Due to the rapid rate of iteration, we are not ready to provide a single supported image for all vendors. This particular series of patches removes RISC-V support from the Android Generic Kernel Image (GKI).” Based on Google’s statement, Rahman suggests that “there’s still a ton of work that needs to be done before Android is ready for RISC-V.”

“Even once it’s ready, Google will need to redo the work to add RISC-V support in the kernel anyway. At the very least, Google’s decision likely means that we might need to wait even longer than expected to see commercial Android devices running on a RISC-V chip.”

Read more of this story at Slashdot.

Redis To Adopt ‘Source-Available Licensing’ Starting With Next Version

Longtime Slashdot reader jgulla shares an announcement from Redis: Beginning today, all future versions of Redis will be released with source-available licenses. Starting with Redis 7.4, Redis will be dual-licensed under the Redis Source Available License (RSALv2) and Server Side Public License (SSPLv1). Consequently, Redis will no longer be distributed under the three-clause Berkeley Software Distribution (BSD) license. The new source-available licenses allow us to sustainably provide permissive use of our source code.

We’re leading Redis into its next phase of development as a real-time data platform with a unified set of clients, tools, and core Redis product offerings. The Redis source code will continue to be freely available to developers, customers, and partners through Redis Community Edition. Future Redis source-available releases will unify core Redis with Redis Stack, including search, JSON, vector, probabilistic, and time-series data models in one free, easy-to-use package as downloadable software. This will allow anyone to easily use Redis across a variety of contexts, including as a high-performance key/value and document store, a powerful query engine, and a low-latency vector database powering generative AI applications. […]

Under the new license, cloud service providers hosting Redis offerings will no longer be permitted to use the source code of Redis free of charge. For example, cloud service providers will be able to deliver Redis 7.4 only after agreeing to licensing terms with Redis, the maintainers of the Redis code. These agreements will underpin support for existing integrated solutions and provide full access to forthcoming Redis innovations. In practice, nothing changes for the Redis developer community who will continue to enjoy permissive licensing under the dual license. At the same time, all the Redis client libraries under the responsibility of Redis will remain open source licensed. Redis will continue to support its vast partner ecosystem — including managed service providers and system integrators — with exclusive access to all future releases, updates, and features developed and delivered by Redis through its Partner Program. There is no change for existing Redis Enterprise customers.

Read more of this story at Slashdot.

OpenTTD (Unofficial Remake of ‘Transport Tycoon Deluxe’ Game) Turns 20

In 1995 Scottish video game designer Chris Sawyer created the business simulator game Transport Tycoon Deluxe — and within four years, Wikipedia notes, work had begun on an open-source remake that’s still being actively developed. “According to a study of the 61,154 open-source projects on SourceForge in the period between 1999 and 2005, OpenTTD ranked as the 8th most active open-source project to receive patches and contributions. In 2004, development moved to their own server.”

Long-time Slashdot reader orudge says he’s been involved for almost 25 years. “Exactly 21 years ago, I received an ICQ message (look it up, kids) out of the blue from a guy named Ludvig Strigeus (nicknamed Ludde).”

“Hello, you probably don’t know me, but I’ve been working on a project to clone Transport Tycoon Deluxe for a while,” he said, more or less… Ludde made more progress with the project [written in C] over the coming year, and it looks like we even attempted some multiplayer games (not too reliable, especially over my dial-up connection at the time). Eventually, when he was happy with what he had created, he agreed to allow me to release the game as open source. Coincidentally, this happened exactly a year after I’d first spoken to him, on the 6th March 2004…

Things really got going after this, and a community started to form with enthusiastic developers fixing bugs, adding in new features, and smoothing off the rough edges. Ludde was, I think, a bit taken aback by how popular it proved, and even rejoined the development effort for a while. A read through the old changelogs reveals just how many features were added over a very short period of time. Quick wins like higher vehicle limits came in very quickly, and support for TTDPatch’s NewGRF format started to be functional just four months later. Large maps, improved multiplayer, better pathfinders, improved TTDPatch compatibility, and of course, ports to a great many different operating systems, such as Mac OS X, BeOS, MorphOS and OS/2. It was a very exciting time to be a TTD fan!

Within six years, ambitious projects to create free replacements for the original TTD graphics, sounds and music sets were complete, and OpenTTD finally had its 1.0 release. And while we may not have the same frantic addition of new features we had in 2004, there have still been massive improvements to the code, with plenty of exciting new features added over the years and major releases every year since 2008. The move to GitHub in 2018 and the release of OpenTTD on Steam in 2021 have also re-energised development efforts, with thousands of people now enjoying playing the game regularly. And development shows no signs of slowing down, with the upcoming OpenTTD 14.0 release including over 40 new features!
“Personally, I would like to say thank you to everyone who has supported OpenTTD development over the past two decades…” they write, adding “Finally, of course, I’d like to thank you, the players! None of us would be here if people weren’t still playing the game.

“Seeing how the first twenty years have gone, I can’t wait to see what the next twenty years have in store. :)”

Read more of this story at Slashdot.