Vitalik Buterin Addresses Threats To Ethereum’s Decentralization In New Blog Post

In a new blog post, Ethereum co-founder Vitalik Buterin has shared his thoughts on three issues core to Ethereum’s decentralization: MEV, liquid staking, and the hardware requirements of nodes. The Block reports: In his post, published on May 17, Buterin first addresses the issue of MEV, or the financial gain that sophisticated node operators can capture by reordering the transactions within a block. Buterin characterizes the two approaches to MEV as “minimization” (reducing MEV through smart protocol design, such as CowSwap) and “quarantining” (accepting some MEV but containing it to a small part of the protocol through in-protocol techniques such as proposer-builder separation). While MEV quarantining seems like an alluring option, Buterin notes that the prospect comes with some centralization risks. “If builders have the power to exclude transactions from a block entirely, there are attacks that can quite easily arise,” Buterin noted. However, he championed work on MEV quarantining through concepts like transaction inclusion lists, which “take away the builder’s ability to push transactions out of the block entirely.” “I think ideas in this direction – really pushing the quarantine box to be as small as possible – are really interesting, and I’m in favor of going in that direction,” Buterin concluded.
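A toy sketch of the inclusion-list mechanism may help: the builder keeps full control over transaction ordering (the source of MEV) but cannot drop any transaction on the list. Everything here, including the transaction format and the `build_block` helper, is invented for illustration and is not Ethereum client code.

```python
def build_block(mempool, inclusion_list, capacity):
    """Builder orders transactions freely but must include every
    inclusion-list transaction that is present in the mempool."""
    # Mandated transactions first: the builder cannot censor these.
    block = [tx for tx in inclusion_list if tx in mempool]
    # Fill remaining slots in the builder's preferred (fee-maximizing)
    # order -- this is where MEV extraction would happen.
    rest = sorted((tx for tx in mempool if tx not in block),
                  key=lambda tx: tx["fee"], reverse=True)
    for tx in rest:
        if len(block) >= capacity:
            break
        block.append(tx)
    return block


mempool = [{"id": 1, "fee": 5}, {"id": 2, "fee": 1}, {"id": 3, "fee": 9}]
censored = [{"id": 2, "fee": 1}]  # low-fee tx the builder would rather drop
print(build_block(mempool, censored, capacity=2))
# → [{'id': 2, 'fee': 1}, {'id': 3, 'fee': 9}]
```

Even though the listed transaction pays the lowest fee, it is guaranteed a slot; the builder's discretion is confined to the remaining capacity, which is the sense in which the "quarantine box" shrinks.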

Buterin also addressed the relatively low number of solo Ethereum stakers, as most stakers choose to stake with a staking provider, either a centralized offering like Coinbase or a decentralized offering like Lido or RocketPool, given the complexity, hardware requirements, and 32 ETH minimum needed to operate an Ethereum node solo. While Buterin acknowledges the progress being made to reduce the cost and complexity of running a solo node, he also noted “once again there is more that we could do,” perhaps by reducing the time to withdraw staked ether or reducing the 32 ETH minimum requirement to become a solo staker. “Incorrect answers could lead Ethereum down a path of centralization and ‘re-creating the traditional financial system with extra steps’; correct answers could create a shining example of a successful ecosystem with a wide and diverse set of solo stakers and highly decentralized staking pools,” Buterin wrote. […]

Buterin finished his post by imploring the Ethereum ecosystem to tackle the hard questions rather than shy away from them. “…We should have deep respect for the properties that make Ethereum unique, and continue to work to maintain and improve on those properties as Ethereum scales,” Buterin wrote. Buterin added today, in a post on X, that he was pleased to see civil debate among community members. “I’m really proud that ethereum does not have any culture of trying to prevent people from speaking their minds, even when they have very negative feelings toward major things in the protocol or ecosystem. Some wave the ideal of ‘open discourse’ as a flag, some take it seriously,” Buterin wrote.

Read more of this story at Slashdot.

HP Resurrects ’90s OmniBook Branding, Kills Spectre and Dragonfly

HP announced today that it will resurrect the “Omni” branding it first coined for its business-oriented laptops introduced in 1993. The vintage branding will now be used for the company’s new consumer-facing laptops, with HP retiring the Spectre and Dragonfly brands in the process. Furthermore, computers under consumer PC series names like Pavilion will also no longer be released. “Instead, every consumer computer from HP will be called either an OmniBook for laptops, an OmniDesk for desktops, or an OmniStudio for AIOs,” reports Ars Technica. From the report: The computers will also have a modifier, ranging from 3 up to 5, 7, X, or Ultra to denote computers that are entry-level all the way up to advanced. For instance, an HP OmniBook Ultra would represent HP’s highest-grade consumer laptop. “For example, an HP OmniBook 3 will appeal to customers who prioritize entertainment and personal use, while the OmniBook X will be designed for those with higher creative and technical demands,” Stacy Wolff, SVP of design and sustainability at HP, said via a press announcement today. […] So far, HP has announced one new Omni computer, the OmniBook X. It has a 12-core Snapdragon X Elite X1E-78-100, 16GB or 32GB of LPDDR5x-8448 memory, up to 2TB of storage, and a 14-inch, 2240×1400 IPS display. HP is pointing to the Latin translation of omni, meaning “all” (or everything), as the rationale behind the naming update. The new name should give shoppers confidence that the computers will provide all the things that they need.

HP is also getting rid of some of its commercial series names, like Pro. From now on, new, lower-end commercial laptops will be ProBooks. There will also be ProDesk desktops and ProStudio AIOs. These computers will have either a 2 modifier for entry-level designs or a 4 modifier for ones with a little more power. For example, an HP ProDesk 2 is less powerful than an HP ProDesk 4. Anything more powerful will be considered either an EliteBook (laptops), EliteDesk (desktops), or EliteStudio (AIOs). For the Elite computers, the modifiers go from 6 to 8, X, and then Ultra. A Dragonfly laptop today would fall into the Ultra category. HP did less overhauling of its commercial lineup because it “recognized a need to preserve the brand equity and familiarity with our current sub-brands,” Wolff said, adding that HP “acknowledged the creation of additional product names like Dragonfly made those products stand out, rather than be seen as part of a holistic portfolio.” […]

As you might now expect of any tech rebranding, marketing push, or product release these days, HP is also announcing a new emblem that will appear on its computers, as well as other products or services, that substantially incorporate AI. The two laptops announced today carry the logo. According to Wolff, on computers, the logo means that the systems have an integrated NPU “at 40+ trillions of operations per second.” They also come with a chatbot based on ChatGPT 4, an HP spokesperson told Ars Technica.

Read more of this story at Slashdot.

Linux Foundation Announces Launch of ‘High Performance Software Foundation’

This week the nonprofit Linux Foundation announced the launch of the High Performance Software Foundation, which “aims to build, promote, and advance a portable core software stack for high performance computing” (or HPC) by “increasing adoption, lowering barriers to contribution, and supporting development efforts.”

It promises initiatives focused on “continuously built, turnkey software stacks,” as well as other initiatives including architecture support and performance regression testing. Its first open source technical projects are:

– Spack: the HPC package manager.
– Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.
– Viskores (formerly VTK-m): a toolkit of scientific visualization algorithms for accelerator architectures.
– HPCToolkit: performance measurement and analysis tools for computers ranging from desktop systems to GPU-accelerated supercomputers.
– Apptainer: Formerly known as Singularity, Apptainer is a Linux Foundation project providing a high performance, full featured HPC and computing optimized container subsystem.
– E4S: a curated, hardened distribution of scientific software packages.
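To make the “turnkey software stacks” idea concrete, a minimal Spack environment file can request several of these projects at once. This is an illustrative sketch, not from the announcement; `kokkos` and `hpctoolkit` are the package names Spack uses for those projects:

```yaml
# spack.yaml -- a minimal Spack environment (illustrative sketch)
spack:
  specs:
    - kokkos
    - hpctoolkit
  concretizer:
    unify: true   # resolve one consistent set of dependency versions
```

With the environment activated (`spack env activate .`), running `spack install` builds both packages and their dependencies.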

As use of HPC becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation will provide a neutral space for pivotal projects in the high performance computing ecosystem, enabling industry, academia, and government entities to collaborate on the scientific software.

The High Performance Software Foundation benefits from strong support across the HPC landscape, including Premier Members Amazon Web Services (AWS), Hewlett Packard Enterprise, Lawrence Livermore National Laboratory, and Sandia National Laboratories; General Members AMD, Argonne National Laboratory, Intel, Kitware, Los Alamos National Laboratory, NVIDIA, and Oak Ridge National Laboratory; and Associate Members University of Maryland, University of Oregon, and Centre for Development of Advanced Computing.
In a statement, an AMD vice president said that by joining “we are using our collective hardware and software expertise to help develop a portable, open-source software stack for high-performance computing across industry, academia, and government.” And an AWS executive said the high-performance computing community “has a long history of innovation being driven by open source projects. AWS is thrilled to join the High Performance Software Foundation to build on this work. In particular, AWS has been deeply involved in contributing upstream to Spack, and we’re looking forward to working with the HPSF to sustain and accelerate the growth of key HPC projects so everyone can benefit.”

The new foundation will “set up a technical advisory committee to manage working groups tackling a variety of HPC topics,” according to the announcement, following a governance model based on the Cloud Native Computing Foundation.

Read more of this story at Slashdot.

FORTRAN and COBOL Re-enter TIOBE’s Ranking of Programming Language Popularity

“The TIOBE Index sets out to reflect the relative popularity of computer languages,” writes i-Programmer, “so it comes as something of a surprise to see two languages dating from the 1950s in this month’s Top 20.

Having broken into the Top 20 in April 2021, Fortran has continued to rise and has now reached its highest-ever position at #10… The headline for this month’s report by Paul Jansen on the TIOBE index is:
Fortran in the top 10, what is going on?
Jansen’s explanation points to the fact that there are more than 1,000 hits on Amazon for “Fortran Programming,” while languages such as Kotlin and Rust barely hit 300 books for the same search query. He also explains that Fortran is still evolving, with the new ISO Fortran 2023 definition published less than half a year ago….

The other legacy language that is on the rise in the TIOBE index is COBOL. We noticed it re-enter the Top 20 in January 2024 and, having dropped out in the interim, it is there again this month.

More details from TechRepublic:

Along with Fortran holding on to its spot in the rankings, there were a few small changes in the top 10. Go gained 0.61 percentage points year over year, rising from tenth place in May 2023 to eighth this year. C++ rose slightly in popularity year over year, from fourth place to third, while Java (-3.53%) and Visual Basic (-1.8%) fell.

Here’s how TIOBE ranked the 10 most popular programming languages in May:

1. Python
2. C
3. C++
4. Java
5. C#
6. JavaScript
7. Visual Basic
8. Go
9. SQL
10. Fortran

On the rival PYPL ranking of programming language popularity, Fortran does not appear anywhere in the top 29.
A note on its page explains that “Worldwide, Python is the most popular language, Rust grew the most in the last 5 years (2.1%) and Java lost the most (-4.0%).” Here’s how it ranks the 10 most popular programming languages for May:

1. Python (28.98% share)
2. Java (15.97% share)
3. JavaScript (8.79% share)
4. C# (6.78% share)
5. R (4.76% share)
6. PHP (4.55% share)
7. TypeScript (3.03% share)
8. Swift (2.76% share)
9. Rust (2.6% share)

Read more of this story at Slashdot.

Why a ‘Frozen’ Distribution Linux Kernel Isn’t the Safest Choice for Security

Jeremy Allison — Sam (Slashdot reader #8,157) is a Distinguished Engineer at Rocky Linux creator CIQ. This week he published a blog post responding to promises of Linux distros “carefully selecting only the most polished and pristine open source patches from the raw upstream open source Linux kernel in order to create the secure distribution kernel you depend on in your business.”

But do carefully curated software patches (applied to a known “frozen” Linux kernel) really bring greater security? “After a lot of hard work and data analysis by my CIQ kernel engineering colleagues Ronnie Sahlberg and Jonathan Maple, we finally have an answer to this question. It’s no.”

The data shows that “frozen” vendor Linux kernels, created by branching off a release point and then using a team of engineers to select specific patches to back-port to that branch, are buggier than the upstream “stable” Linux kernel created by Greg Kroah-Hartman. How can this be? If you want the full details, the link to the white paper is here. But the results of the analysis couldn’t be clearer.
– A “frozen” vendor kernel is an insecure kernel. A vendor kernel released later in the release schedule is doubly so.
– The number of known bugs in a “frozen” vendor kernel grows over time. The growth in the number of bugs even accelerates over time.
– There are too many open bugs in these kernels for it to be feasible to analyze or even classify them….

[T]hinking that you’re making a more secure choice by using a “frozen” vendor kernel isn’t a luxury we can still afford to believe. As Greg Kroah-Hartman explicitly said in his talk “Demystifying the Linux Kernel Security Process”: “If you are not using the latest stable / longterm kernel, your system is insecure.”

CIQ describes its report as “a count of all the known bugs from an upstream kernel that were introduced, but never fixed in RHEL 8.”
For the most recent RHEL 8 kernels, at the time of writing, these counts are:

– RHEL 8.6: 5034
– RHEL 8.7: 4767
– RHEL 8.8: 4594

In RHEL 8.8 we have a total of 4594 known bugs with fixes that exist upstream, but for which known fixes have not been back-ported to RHEL 8.8. The situation is worse for RHEL 8.6 and RHEL 8.7 as they cut off back-porting earlier than RHEL 8.8 but of course that did not prevent new bugs from being discovered and fixed upstream….
This whitepaper is not meant as a criticism of the engineers working at any Linux vendors who are dedicated to producing high quality work in their products on behalf of their customers. This problem is extremely difficult to solve. We know this is an open secret amongst many in the industry and would like to put concrete numbers describing the problem to encourage discussion. Our hope is for Linux vendors and the community as a whole to rally behind the kernel.org stable kernels as the best long term supported solution. As engineers, we would prefer this to allow us to spend more time fixing customer specific bugs and submitting feature improvements upstream, rather than the endless grind of backporting upstream changes into vendor kernels, a practice which can introduce more bugs than it fixes.
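The counting methodology CIQ describes, known bugs whose fixes exist upstream but were never back-ported, amounts to a set difference over fix commits. A minimal sketch (the commit IDs are hypothetical; this is not CIQ's actual tooling):

```python
def unfixed_known_bugs(upstream_fixes, vendor_backports):
    """Bugs with a known upstream fix that the vendor kernel never
    back-ported: the set difference of the two commit-ID sets."""
    return upstream_fixes - vendor_backports


# Hypothetical commit IDs standing in for upstream bug-fix commits.
upstream = {"a1b2", "c3d4", "e5f6", "0789", "abcd"}
rhel_backports = {"a1b2", "e5f6"}  # fixes the vendor did pick up

missing = unfixed_known_bugs(upstream, rhel_backports)
print(len(missing))  # → 3
```

The real analysis has the added difficulty of mapping renamed and reworked patches between trees, which is part of why the whitepaper calls the back-porting grind so error-prone.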
ZDNet calls it “an open secret in the Linux community.”

It’s not enough to use a long-term support release. You must use the most up-to-date release to be as secure as possible. Unfortunately, almost no one does that. Nevertheless, as Google Linux kernel engineer Kees Cook explained, “So what is a vendor to do? The answer is simple, if painful: Continuously update to the latest kernel release, either major or stable.” Why? As Kroah-Hartman explained, “Any bug has the potential of being a security issue at the kernel level….”
Although [CIQ’s] programmers examined RHEL 8.8 specifically, this is a general problem. They would have found the same results if they had examined SUSE, Ubuntu, or Debian Linux. Rolling-release Linux distros such as Arch, Gentoo, and openSUSE Tumbleweed constantly release the latest updates, but they’re not used in businesses.

Jeremy Allison’s post points out that “the Linux kernel used by Android devices is based on the upstream kernel and also has a stable internal kernel ABI, so this isn’t an insurmountable problem…”

Read more of this story at Slashdot.

Are AI-Generated Search Results Still Protected by Section 230?

Starting this week millions will see AI-generated answers in Google’s search results by default. But the announcement Tuesday at Google’s annual developer conference suggests a future that’s “not without its risks, both to users and to Google itself,” argues the Washington Post:

For years, Google has been shielded from liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won’t apply when its AI answers search questions directly. “As we all know, generative AIs hallucinate,” said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. “So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information,” rather than just the distributor of it…

Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn’t extend Section 230 to cover AI tools. “As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors,” he predicted. “It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate.” But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be “a really good outcome.”

Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has “outlived its usefulness.”

The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would “decimate small tech” and “discourage free speech online.”
The digital law professor points out Google has traditionally escaped legal liability by attributing its answers to specific sources — but it’s not just Google that has to worry about the issue. The article notes that Microsoft’s Bing search engine also supplies AI-generated answers (from Microsoft’s Copilot). “And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot.”

The article also notes that several U.S. Congressional committees are considering “a bevy” of AI bills…

Read more of this story at Slashdot.

How an ‘Unprecedented’ Google Cloud Event Wiped Out a Major Customer’s Account

Ars Technica looks at what happened after Google’s answer to Amazon’s cloud service “accidentally deleted a giant customer account for no reason…”

“[A]ccording to UniSuper’s incident log, downtime started May 2, and a full restoration of services didn’t happen until May 15.”

UniSuper, an Australian pension fund that manages $135 billion worth of funds and has 647,000 members, had its entire account wiped out at Google Cloud, including all its backups that were stored on the service… UniSuper’s website is now full of must-read admin nightmare fuel about how this all happened. First is a wild page posted on May 8 titled “A joint statement from UniSuper CEO Peter Chun, and Google Cloud CEO, Thomas Kurian….” Google Cloud is supposed to have safeguards that don’t allow account deletion, but none of them worked apparently, and the only option was a restore from a separate cloud provider (shoutout to the hero at UniSuper who chose a multi-cloud solution)… The many stakeholders in the service meant service restoration wasn’t just about restoring backups but also processing all the requests and payments that still needed to happen during the two weeks of downtime.

The second must-read document in this whole saga is the outage update page, which contains 12 statements as the cloud devs worked through this catastrophe. The first update is May 2 with the ominous statement, “You may be aware of a service disruption affecting UniSuper’s systems….” Seven days after the outage, on May 9, we saw the first signs of life again for UniSuper. Logins started working for “online UniSuper accounts” (I think that only means the website), but the outage page noted that “account balances shown may not reflect transactions which have not yet been processed due to the outage….” May 13 is the first mention of the mobile app beginning to work again. This update noted that balances still weren’t up to date and that “We are processing transactions as quickly as we can.” The last update, on May 15, states, “UniSuper can confirm that all member-facing services have been fully restored, with our retirement calculators now available again.”

The joint statement and the outage updates are still not a technical post-mortem of what happened, and it’s unclear if we’ll get one. Google PR confirmed in multiple places it signed off on the statement, but a great breakdown from software developer Daniel Compton points out that the statement is not just vague, it’s also full of terminology that doesn’t align with Google Cloud products. The imprecise language makes it seem like the statement was written entirely by UniSuper.
Thanks to long-time Slashdot reader swm for sharing the news.

Read more of this story at Slashdot.

Utah Locals Are Getting Cheap 10 Gbps Fiber Thanks To Local Governments

Karl Bode writes via Techdirt: Tired of being underserved and overbilled by shitty regional broadband monopolies, back in 2002 a coalition of local Utah governments formed UTOPIA — (the Utah Telecommunication Open Infrastructure Agency). The inter-local agency collaborative venture then set about building an “open access” fiber network that allows any ISP to then come and compete on the shared network. Two decades later and the coalition just announced that 18 different ISPs now compete for Utah resident attention over a network that now covers 21 different Utah cities. In many instances, ISPs on the network are offering symmetrical (uncapped) gigabit fiber for as little as $45 a month (plus $30 network connection fee, so $75). Some ISPs are even offering symmetrical 10 Gbps fiber for around $150 a month: “Sumo Fiber, a veteran member of the UTOPIA Open Access Marketplace, is now offering 10 Gbps symmetrical for $119, plus a $30 UTOPIA Fiber infrastructure fee, bringing the total cost to $149 per month.”

It’s a collaborative hybrid that blurs the line between private companies and government, and it works. And the prices being offered here are significantly less than locals often pay in highly developed tech-centric urban hubs like New York, San Francisco, or Seattle. Yet giant local ISPs like Comcast and Qwest spent decades trying to either sue this network into oblivion, or using their proxy policy orgs (like the “Utah Taxpayer Association”) to falsely claim this effort would end in chaos and inevitable taxpayer tears. Yet miraculously UTOPIA is profitable, and for the last 15 years, every UTOPIA project has been paid for completely through subscriber revenues. […] For years, real world experience and several different studies and reports (including our Copia study on this concept) have made it clear that open access networks and policies result in faster, better, more affordable broadband access. UTOPIA is proving it at scale, but numerous other municipalities have been following suit with the help of COVID relief and infrastructure bill funding.

Read more of this story at Slashdot.