Apple Slashes iPhone Prices In China Amid Fierce Huawei Competition
Read more of this story at Slashdot.
Buterin also addressed the relatively low number of solo Ethereum stakers, as most stakers choose to stake with a staking provider, either a centralized offering like Coinbase or a decentralized offering like Lido or RocketPool, given the complexity, hardware requirements, and 32 ETH minimum needed to operate an Ethereum node solo. While Buterin acknowledges the progress being made to reduce the cost and complexity of running a solo node, he also noted “once again there is more that we could do,” perhaps through reducing the time to withdraw staked ether or reducing the 32 ETH minimum requirement to become a solo staker. “Incorrect answers could lead Ethereum down a path of centralization and ‘re-creating the traditional financial system with extra steps’; correct answers could create a shining example of a successful ecosystem with a wide and diverse set of solo stakers and highly decentralized staking pools,” Buterin wrote. […]
Buterin finished his post by imploring the Ethereum ecosystem to tackle the hard questions rather than shy away from them. “…We should have deep respect for the properties that make Ethereum unique, and continue to work to maintain and improve on those properties as Ethereum scales,” Buterin wrote. Buterin added today, in a post on X, that he was pleased to see civil debate among community members. “I’m really proud that ethereum does not have any culture of trying to prevent people from speaking their minds, even when they have very negative feelings toward major things in the protocol or ecosystem. Some wave the ideal of ‘open discourse’ as a flag, some take it seriously,” Buterin wrote.
Read more of this story at Slashdot.
Having broken into the Top 20 in April 2021, Fortran has continued to climb and has now reached its highest-ever position at #10… The headline for this month’s report by Paul Jansen on the TIOBE index is:
Fortran in the top 10, what is going on?
Jansen’s explanation points to the fact that there are more than 1,000 hits on Amazon for “Fortran Programming,” while languages such as Kotlin and Rust barely hit 300 books for the same search query. He also explains that Fortran is still evolving, with the new ISO Fortran 2023 definition published less than half a year ago….
The other legacy language that is on the rise in the TIOBE index is COBOL. We noticed it re-enter the Top 20 in January 2024 and, having dropped out in the interim, it is there again this month.
More details from TechRepublic:
Along with Fortran holding on to its spot in the rankings, there were a few small changes in the top 10. Go gained 0.61 percentage points year over year, rising from tenth place in May 2023 to eighth this year. C++ rose slightly in popularity year over year, from fourth place to third, while Java (-3.53%) and Visual Basic (-1.8%) fell.
Here’s how TIOBE ranked the 10 most popular programming languages in May:
Python
C
C++
Java
C#
JavaScript
Visual Basic
Go
SQL
Fortran
On the rival PYPL ranking of programming language popularity, Fortran does not appear anywhere in the top 29.
A note on its page explains that “Worldwide, Python is the most popular language, Rust grew the most in the last 5 years (2.1%) and Java lost the most (-4.0%).” Here’s how it ranks the 10 most popular programming languages for May:
Python (28.98% share)
Java (15.97% share)
JavaScript (8.79% share)
C# (6.78% share)
R (4.76% share)
PHP (4.55% share)
TypeScript (3.03% share)
Swift (2.76% share)
Rust (2.6% share)
Read more of this story at Slashdot.
It promises initiatives focused on “continuously built, turnkey software stacks,” as well as architecture support and performance regression testing. Its first open source technical projects are:
– Spack: the HPC package manager.
- Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way (a short illustrative sketch follows this list).
– Viskores (formerly VTK-m): a toolkit of scientific visualization algorithms for accelerator architectures.
– HPCToolkit: performance measurement and analysis tools for computers ranging from desktop systems to GPU-accelerated supercomputers.
- Apptainer: formerly known as Singularity, a Linux Foundation project providing a high-performance, full-featured container subsystem optimized for HPC and computing.
– E4S: a curated, hardened distribution of scientific software packages.
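To make the Kokkos item above concrete, here is a minimal, hypothetical C++ sketch (not taken from the announcement) of the single-source style Kokkos enables, where the same parallel_for and parallel_reduce calls target serial, OpenMP, CUDA, HIP, or SYCL execution depending on how Kokkos was built:

```cpp
#include <Kokkos_Core.hpp>
#include <iostream>

int main(int argc, char* argv[]) {
    Kokkos::initialize(argc, argv);
    {
        const int N = 1'000'000;

        // A View is an array living in the default memory space of whatever
        // backend Kokkos was built with (host memory or device memory).
        Kokkos::View<double*> x("x", N);

        // Fill the array in parallel; the same source compiles for any
        // enabled backend without hardware-specific code.
        Kokkos::parallel_for("fill", N, KOKKOS_LAMBDA(const int i) {
            x(i) = 2.0 * i;
        });

        // Reduce over the array in parallel.
        double sum = 0.0;
        Kokkos::parallel_reduce("sum", N, KOKKOS_LAMBDA(const int i, double& acc) {
            acc += x(i);
        }, sum);

        std::cout << "sum = " << sum << "\n";
    }
    Kokkos::finalize();
    return 0;
}
```

On a GPU build the View’s data lives in device memory and the lambdas run on the accelerator; on a CPU-only build the identical source runs on host threads, which is the hardware-agnostic portability the announcement describes.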
As use of HPC becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation will provide a neutral space for pivotal projects in the high performance computing ecosystem, enabling industry, academia, and government entities to collaborate on the scientific software.
The High Performance Software Foundation benefits from strong support across the HPC landscape, including Premier Members Amazon Web Services (AWS), Hewlett Packard Enterprise, Lawrence Livermore National Laboratory, and Sandia National Laboratories; General Members AMD, Argonne National Laboratory, Intel, Kitware, Los Alamos National Laboratory, NVIDIA, and Oak Ridge National Laboratory; and Associate Members University of Maryland, University of Oregon, and Centre for Development of Advanced Computing.
In a statement, an AMD vice president said that by joining “we are using our collective hardware and software expertise to help develop a portable, open-source software stack for high-performance computing across industry, academia, and government.” And an AWS executive said the high-performance computing community “has a long history of innovation being driven by open source projects. AWS is thrilled to join the High Performance Software Foundation to build on this work. In particular, AWS has been deeply involved in contributing upstream to Spack, and we’re looking forward to working with the HPSF to sustain and accelerate the growth of key HPC projects so everyone can benefit.”
The new foundation will “set up a technical advisory committee to manage working groups tackling a variety of HPC topics,” according to the announcement, following a governance model based on the Cloud Native Computing Foundation.
Read more of this story at Slashdot.
Ubuntu first switched to using Wayland as its default display server in 2017 before reverting the following year. It tried again in 2021 and has stuck with it since. But while Wayland is what most of us now log into after installing Ubuntu, anyone doing so on a PC or laptop with an NVIDIA graphics card present instead logs into an Xorg/X11 session.
This is because NVIDIA’s proprietary graphics drivers (which many, especially gamers, opt for to get the best performance, access to full hardware capabilities, etc.) have not supported Wayland as well as they could’ve. Past tense because, thankfully, things have changed in the past few years. NVIDIA’s warmed up to Wayland (partly because it has little choice, given that Wayland is now the standard rather than a ‘maybe one day’ solution, and partly because it wants to: opportunities/benefits/security).
With the NVIDIA + Wayland sitch’ now in a better state than before — but not perfect — Canonical’s engineers say they feel confident enough in the experience to make the Ubuntu Wayland session default for NVIDIA graphics card users in Ubuntu 24.10.
Read more of this story at Slashdot.
But do carefully curated software patches (applied to a known “frozen” Linux kernel) really bring greater security? “After a lot of hard work and data analysis by my CIQ kernel engineering colleagues Ronnie Sahlberg and Jonathan Maple, we finally have an answer to this question. It’s no.”
The data shows that “frozen” vendor Linux kernels, created by branching off a release point and then using a team of engineers to select specific patches to back-port to that branch, are buggier than the upstream “stable” Linux kernel created by Greg Kroah-Hartman. How can this be? If you want the full details, the link to the white paper is here. But the results of the analysis couldn’t be clearer.
– A “frozen” vendor kernel is an insecure kernel. A vendor kernel released later in the release schedule is doubly so.
– The number of known bugs in a “frozen” vendor kernel grows over time. The growth in the number of bugs even accelerates over time.
– There are too many open bugs in these kernels for it to be feasible to analyze or even classify them….
[T]hinking that you’re making a more secure choice by using a “frozen” vendor kernel isn’t a luxury we can still afford to believe. As Greg Kroah-Hartman explicitly said in his talk “Demystifying the Linux Kernel Security Process”: “If you are not using the latest stable / longterm kernel, your system is insecure.”
CIQ describes its report as “a count of all the known bugs from an upstream kernel that were introduced, but never fixed in RHEL 8.”
For the most recent RHEL 8 kernels, at the time of writing, these counts are:
RHEL 8.6: 5034
RHEL 8.7: 4767
RHEL 8.8: 4594
In RHEL 8.8 we have a total of 4594 known bugs with fixes that exist upstream, but for which known fixes have not been back-ported to RHEL 8.8. The situation is worse for RHEL 8.6 and RHEL 8.7 as they cut off back-porting earlier than RHEL 8.8 but of course that did not prevent new bugs from being discovered and fixed upstream….
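For context, the count described here is essentially a set difference: the fixes that landed in the upstream stable kernel after the vendor branch point, minus the ones the vendor back-ported. The sketch below illustrates that idea with made-up commit IDs; it is not CIQ's actual tooling or data.

```cpp
#include <iostream>
#include <set>
#include <string>

int main() {
    // Hypothetical commit IDs for illustration only. In practice these sets
    // would be built from upstream stable changelogs and the vendor kernel's
    // back-port history.
    std::set<std::string> upstream_fixes = {"a1b2c3", "d4e5f6", "778899", "aabbcc"};
    std::set<std::string> backported     = {"a1b2c3"};

    std::size_t missing = 0;
    for (const std::string& id : upstream_fixes)
        if (backported.count(id) == 0)
            ++missing;  // a known upstream fix this vendor kernel never received

    std::cout << "Known upstream fixes not back-ported: " << missing << "\n";
    return 0;
}
```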
This whitepaper is not meant as a criticism of the engineers working at any Linux vendors who are dedicated to producing high quality work in their products on behalf of their customers. This problem is extremely difficult to solve. We know this is an open secret amongst many in the industry and would like to put concrete numbers describing the problem to encourage discussion. Our hope is for Linux vendors and the community as a whole to rally behind the kernel.org stable kernels as the best long term supported solution. As engineers, we would prefer this to allow us to spend more time fixing customer specific bugs and submitting feature improvements upstream, rather than the endless grind of backporting upstream changes into vendor kernels, a practice which can introduce more bugs than it fixes.
ZDNet calls it “an open secret in the Linux community.”
It’s not enough to use a long-term support release. You must use the most up-to-date release to be as secure as possible. Unfortunately, almost no one does that. Nevertheless, as Google Linux kernel engineer Kees Cook explained, “So what is a vendor to do? The answer is simple, if painful: continuously update to the latest kernel release, either major or stable.” Why? As Kroah-Hartman explained, “Any bug has the potential of being a security issue at the kernel level….”
Although [CIQ’s] programmers examined RHEL 8.8 specifically, this is a general problem. They would have found the same results if they had examined SUSE, Ubuntu, or Debian Linux. Rolling-release Linux distros such as Arch, Gentoo, and OpenSUSE Tumbleweed constantly release the latest updates, but they’re not used in businesses.
Jeremy Allison’s post points out that “the Linux kernel used by Android devices is based on the upstream kernel and also has a stable internal kernel ABI, so this isn’t an insurmountable problem…”
Read more of this story at Slashdot.
“[A]ccording to UniSuper’s incident log, downtime started May 2, and a full restoration of services didn’t happen until May 15.”
UniSuper, an Australian pension fund that manages $135 billion worth of funds and has 647,000 members, had its entire account wiped out at Google Cloud, including all its backups that were stored on the service… UniSuper’s website is now full of must-read admin nightmare fuel about how this all happened. First is a wild page posted on May 8 titled “A joint statement from UniSuper CEO Peter Chun, and Google Cloud CEO, Thomas Kurian….” Google Cloud is supposed to have safeguards that don’t allow account deletion, but none of them worked apparently, and the only option was a restore from a separate cloud provider (shoutout to the hero at UniSuper who chose a multi-cloud solution)… The many stakeholders in the service meant service restoration wasn’t just about restoring backups but also processing all the requests and payments that still needed to happen during the two weeks of downtime.
The second must-read document in this whole saga is the outage update page, which contains 12 statements as the cloud devs worked through this catastrophe. The first update is May 2 with the ominous statement, “You may be aware of a service disruption affecting UniSuper’s systems….” Seven days after the outage, on May 9, we saw the first signs of life again for UniSuper. Logins started working for “online UniSuper accounts” (I think that only means the website), but the outage page noted that “account balances shown may not reflect transactions which have not yet been processed due to the outage….” May 13 is the first mention of the mobile app beginning to work again. This update noted that balances still weren’t up to date and that “We are processing transactions as quickly as we can.” The last update, on May 15, states, “UniSuper can confirm that all member-facing services have been fully restored, with our retirement calculators now available again.”
The joint statement and the outage updates are still not a technical post-mortem of what happened, and it’s unclear if we’ll get one. Google PR confirmed in multiple places it signed off on the statement, but a great breakdown from software developer Daniel Compton points out that the statement is not just vague, it’s also full of terminology that doesn’t align with Google Cloud products. The imprecise language makes it seem like the statement was written entirely by UniSuper.
Thanks to long-time Slashdot reader swm for sharing the news.
Read more of this story at Slashdot.
For years, Google has been shielded from liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won’t apply when its AI answers search questions directly. “As we all know, generative AIs hallucinate,” said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. “So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information,” rather than just the distributor of it…
Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn’t extend Section 230 to cover AI tools. “As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors,” he predicted. “It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate.” But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be “a really good outcome.”
Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has “outlived its usefulness.”
The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would “decimate small tech” and “discourage free speech online.”
The digital law professor points out Google has traditionally escaped legal liability by attributing its answers to specific sources — but it’s not just Google that has to worry about the issue. The article notes that Microsoft’s Bing search engine also supplies AI-generated answers (from Microsoft’s Copilot). “And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot.”
The article also notes that several U.S. Congressional committees are considering “a bevy” of AI bills…
Read more of this story at Slashdot.
While this is a legitimate way to expand the capacity of a hard drive, it is necessary to note that 5TB 2.5-inch HDDs already feature a 15-mm z-height, which is the highest standard z-height for 2.5-inch form-factor storage devices. As a result, these 6TB 2.5-inch drives are unlikely to fit into any desktop PC. When it comes to the specifications of the latest My Passport, Black P10, and G-Drive ArmorATD external HDDs, Western Digital only discloses that they offer up to 130 MB/s read speed (just like their predecessors), feature a USB 3.2 Gen 1 (up to 5 GT/s) interface using either a modern USB Type-C or Micro USB Type-B connector, and do not require an external power adapter.
Read more of this story at Slashdot.
It’s a collaborative hybrid that blurs the line between private companies and government, and it works. And the prices being offered here are significantly less than locals often pay in highly developed tech-centric urban hubs like New York, San Francisco, or Seattle. Yet giant local ISPs like Comcast and Qwest spent decades trying to either sue this network into oblivion or use their proxy policy orgs (like the “Utah Taxpayer Association”) to falsely claim this effort would end in chaos and inevitable taxpayer tears. Yet, miraculously, UTOPIA is profitable, and for the last 15 years every UTOPIA project has been paid for completely through subscriber revenues. […] For years, real-world experience and several different studies and reports (including our Copia study on this concept) have made it clear that open access networks and policies result in faster, better, more affordable broadband access. UTOPIA is proving it at scale, and numerous other municipalities have been following suit with the help of COVID relief and infrastructure bill funding.
Read more of this story at Slashdot.