FORTRAN and COBOL Re-enter TIOBE’s Ranking of Programming Language Popularity

“The TIOBE Index sets out to reflect the relative popularity of computer languages,” writes i-Programmer, “so it comes as something of a surprise to see two languages dating from the 1950s in this month’s Top 20.

Having broken into the Top 20 in April 2021, Fortran has continued to rise and has now risen to its highest ever position at #10… The headline for this month’s report by Paul Jansen on the TIOBE index is:
Fortran in the top 10, what is going on?
Jansen’s explanation points to the fact that there are more than 1,000 hits on Amazon for “Fortran Programming” while languages such as Kotlin and Rust barely hit 300 books for the same search query. He also explains that Fortran is still evolving, with the new ISO Fortran 2023 definition published less than half a year ago….

The other legacy language that is on the rise in the TIOBE index is COBOL. We noticed it re-enter the Top 20 in January 2024 and, having dropped out in the interim, it is there again this month.”

More details from TechRepublic:

Along with Fortran holding on to its spot in the rankings, there were a few small changes in the top 10. Go gained 0.61 percentage points year over year, rising from tenth place in May 2023 to eighth this year. C++ rose slightly in popularity year over year, from fourth place to third, while Java (-3.53%) and Visual Basic (-1.8%) fell.

Here’s how TIOBE ranked the 10 most popular programming languages in May:

Python
C
C++
Java
C#
JavaScript
Visual Basic
Go
SQL
Fortran

On the rival PYPL ranking of programming language popularity, Fortran does not appear anywhere in the top 29.
A note on its page explains that “Worldwide, Python is the most popular language, Rust grew the most in the last 5 years (2.1%) and Java lost the most (-4.0%).” Here’s how it ranks the 10 most popular programming languages for May:

Python (28.98% share)
Java (15.97% share)
JavaScript (8.79% share)
C# (6.78% share)
R (4.76% share)
PHP (4.55% share)
TypeScript (3.03% share)
Swift (2.76% share)
Rust (2.6% share)

Why a ‘Frozen’ Distribution Linux Kernel Isn’t the Safest Choice for Security

Jeremy Allison — Sam (Slashdot reader #8,157) is a Distinguished Engineer at Rocky Linux creator CIQ. This week he published a blog post responding to promises of Linux distros “carefully selecting only the most polished and pristine open source patches from the raw upstream open source Linux kernel in order to create the secure distribution kernel you depend on in your business.”

But do carefully curated software patches (applied to a known “frozen” Linux kernel) really bring greater security? “After a lot of hard work and data analysis by my CIQ kernel engineering colleagues Ronnie Sahlberg and Jonathan Maple, we finally have an answer to this question. It’s no.”

The data shows that “frozen” vendor Linux kernels, created by branching off a release point and then using a team of engineers to select specific patches to back-port to that branch, are buggier than the upstream “stable” Linux kernel created by Greg Kroah-Hartman. How can this be? If you want the full details, the link to the white paper is here. But the results of the analysis couldn’t be clearer.
– A “frozen” vendor kernel is an insecure kernel. A vendor kernel released later in the release schedule is doubly so.
– The number of known bugs in a “frozen” vendor kernel grows over time. The growth in the number of bugs even accelerates over time.
– There are too many open bugs in these kernels for it to be feasible to analyze or even classify them….

[T]hinking that you’re making a more secure choice by using a “frozen” vendor kernel isn’t a luxury we can still afford to believe. As Greg Kroah-Hartman explicitly said in his talk “Demystifying the Linux Kernel Security Process”: “If you are not using the latest stable / longterm kernel, your system is insecure.”

CIQ describes its report as “a count of all the known bugs from an upstream kernel that were introduced, but never fixed in RHEL 8.”
For the most recent RHEL 8 kernels, at the time of writing, these counts are:
RHEL 8.6: 5034
RHEL 8.7: 4767
RHEL 8.8: 4594

In RHEL 8.8 we have a total of 4594 known bugs with fixes that exist upstream, but for which known fixes have not been back-ported to RHEL 8.8. The situation is worse for RHEL 8.6 and RHEL 8.7 as they cut off back-porting earlier than RHEL 8.8 but of course that did not prevent new bugs from being discovered and fixed upstream….
This whitepaper is not meant as a criticism of the engineers working at any Linux vendors who are dedicated to producing high quality work in their products on behalf of their customers. This problem is extremely difficult to solve. We know this is an open secret amongst many in the industry and would like to put concrete numbers describing the problem to encourage discussion. Our hope is for Linux vendors and the community as a whole to rally behind the kernel.org stable kernels as the best long term supported solution. As engineers, we would prefer this to allow us to spend more time fixing customer specific bugs and submitting feature improvements upstream, rather than the endless grind of backporting upstream changes into vendor kernels, a practice which can introduce more bugs than it fixes.
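
To make the kind of counting CIQ describes more concrete, here is a rough Python sketch of how one might approximate it with plain git: collect the commits on an upstream stable branch that carry a “Fixes:” tag, then check which of them have no patch-equivalent commit on a frozen vendor branch. This is an illustration under stated assumptions, not CIQ’s actual methodology or tooling; the repository path and branch names below are placeholders.

    # Rough approximation, not CIQ's tooling: count upstream "Fixes:"-tagged commits
    # that have no patch-equivalent commit on a frozen vendor branch.
    import subprocess

    def git(repo, *args):
        """Run a git command in `repo` and return its stdout as text."""
        return subprocess.run(["git", "-C", repo, *args],
                              capture_output=True, text=True, check=True).stdout

    def fixes_commits(repo, branch):
        """SHAs on `branch` whose commit messages contain a 'Fixes:' tag."""
        return set(git(repo, "log", "--format=%H", "--grep=^Fixes:", branch).split())

    def not_in_vendor(repo, vendor_branch, stable_branch):
        """SHAs on the stable branch with no patch-id-equivalent commit on the vendor branch.

        `git cherry <upstream> <head>` prefixes commits of <head> that have no
        equivalent in <upstream> with '+'. Backports whose diffs were modified
        will not match by patch-id, so this over-counts; a real analysis needs
        fuzzier matching.
        """
        missing = set()
        for line in git(repo, "cherry", vendor_branch, stable_branch).splitlines():
            mark, _, sha = line.partition(" ")
            if mark == "+":
                missing.add(sha)
        return missing

    if __name__ == "__main__":
        repo = "linux"                    # clone with both branches fetched (placeholder path)
        stable = "stable/linux-4.18.y"    # placeholder name for the upstream stable branch
        vendor = "vendor/rhel-8.8"        # placeholder name for the frozen vendor branch
        absent = fixes_commits(repo, stable) & not_in_vendor(repo, vendor, stable)
        print(f"{len(absent)} upstream fixes appear to be missing from {vendor}")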
ZDNet calls it “an open secret in the Linux community.”

It’s not enough to use a long-term support release. You must use the most up-to-date release to be as secure as possible. Unfortunately, almost no one does that. Nevertheless, as Google Linux kernel engineer Kees Cook explained, “So what is a vendor to do? The answer is simple, if painful: Continuously update to the latest kernel release, either major or stable.” Why? As Kroah-Hartman explained, “Any bug has the potential of being a security issue at the kernel level….”
Although [CIQ’s] programmers examined RHEL 8.8 specifically, this is a general problem. They would have found the same results if they had examined SUSE, Ubuntu, or Debian Linux. Rolling-release Linux distros such as Arch, Gentoo, and OpenSUSE Tumbleweed constantly release the latest updates, but they’re not used in businesses.

Jeremy Allison’s post points out that “the Linux kernel used by Android devices is based on the upstream kernel and also has a stable internal kernel ABI, so this isn’t an insurmountable problem…”

Are AI-Generated Search Results Still Protected by Section 230?

Starting this week millions will see AI-generated answers in Google’s search results by default. But the announcement Tuesday at Google’s annual developer conference suggests a future that’s “not without its risks, both to users and to Google itself,” argues the Washington Post:

For years, Google has been shielded from liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won’t apply when its AI answers search questions directly. “As we all know, generative AIs hallucinate,” said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. “So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information,” rather than just the distributor of it…

Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn’t extend Section 230 to cover AI tools. “As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors,” he predicted. “It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate.” But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be “a really good outcome.”

Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has “outlived its usefulness.”

The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would “decimate small tech” and “discourage free speech online.”
The digital law professor points out Google has traditionally escaped legal liability by attributing its answers to specific sources — but it’s not just Google that has to worry about the issue. The article notes that Microsoft’s Bing search engine also supplies AI-generated answers (from Microsoft’s Copilot). “And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot.”

The article also notes that several U.S. Congressional committees are considering “a bevy” of AI bills…

How an ‘Unprecedented’ Google Cloud Event Wiped Out a Major Customer’s Account

Ars Technica looks at what happened after Google’s answer to Amazon’s cloud service “accidentally deleted a giant customer account for no reason…”

“[A]ccording to UniSuper’s incident log, downtime started May 2, and a full restoration of services didn’t happen until May 15.”

UniSuper, an Australian pension fund that manages $135 billion worth of funds and has 647,000 members, had its entire account wiped out at Google Cloud, including all its backups that were stored on the service… UniSuper’s website is now full of must-read admin nightmare fuel about how this all happened. First is a wild page posted on May 8 titled “A joint statement from UniSuper CEO Peter Chun, and Google Cloud CEO, Thomas Kurian….” Google Cloud is supposed to have safeguards that don’t allow account deletion, but none of them worked apparently, and the only option was a restore from a separate cloud provider (shoutout to the hero at UniSuper who chose a multi-cloud solution)… The many stakeholders in the service meant service restoration wasn’t just about restoring backups but also processing all the requests and payments that still needed to happen during the two weeks of downtime.

The second must-read document in this whole saga is the outage update page, which contains 12 statements as the cloud devs worked through this catastrophe. The first update is May 2 with the ominous statement, “You may be aware of a service disruption affecting UniSuper’s systems….” Seven days after the outage, on May 9, we saw the first signs of life again for UniSuper. Logins started working for “online UniSuper accounts” (I think that only means the website), but the outage page noted that “account balances shown may not reflect transactions which have not yet been processed due to the outage….” May 13 is the first mention of the mobile app beginning to work again. This update noted that balances still weren’t up to date and that “We are processing transactions as quickly as we can.” The last update, on May 15, states, “UniSuper can confirm that all member-facing services have been fully restored, with our retirement calculators now available again.”

The joint statement and the outage updates are still not a technical post-mortem of what happened, and it’s unclear if we’ll get one. Google PR confirmed in multiple places it signed off on the statement, but a great breakdown from software developer Daniel Compton points out that the statement is not just vague, it’s also full of terminology that doesn’t align with Google Cloud products. The imprecise language makes it seem like the statement was written entirely by UniSuper.
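
The multi-cloud detail is the one concrete design lesson in the saga: UniSuper could recover only because an independent copy of its backups lived with a different provider. Below is a minimal sketch of that idea, copying objects from a Google Cloud Storage bucket into an S3 bucket at a second provider; it is purely illustrative, not UniSuper’s setup, and the bucket names and function are hypothetical.

    # Illustrative only: keep an independent copy of backups with a second provider,
    # under credentials the primary account cannot touch. Not UniSuper's actual setup.
    import boto3                         # SDK for the secondary (non-Google) provider
    from google.cloud import storage     # Google Cloud Storage client

    def replicate_backups(gcs_bucket: str, s3_bucket: str, prefix: str = "backups/") -> int:
        """Copy every object under `prefix` from a GCS bucket into an S3 bucket.

        A production pipeline would add incremental sync, checksum verification,
        encryption, and object-lock/retention on the destination side.
        """
        gcs = storage.Client()
        s3 = boto3.client("s3")
        copied = 0
        for blob in gcs.list_blobs(gcs_bucket, prefix=prefix):
            s3.put_object(Bucket=s3_bucket, Key=blob.name, Body=blob.download_as_bytes())
            copied += 1
        return copied

    if __name__ == "__main__":
        n = replicate_backups("example-primary-backups", "example-offsite-backups")
        print(f"replicated {n} backup objects to the secondary provider")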
Thanks to long-time Slashdot reader swm for sharing the news.

Utah Locals Are Getting Cheap 10 Gbps Fiber Thanks To Local Governments

Karl Bode writes via Techdirt: Tired of being underserved and overbilled by shitty regional broadband monopolies, back in 2002 a coalition of local Utah governments formed UTOPIA — (the Utah Telecommunication Open Infrastructure Agency). The inter-local agency collaborative venture then set about building an “open access” fiber network that allows any ISP to then come and compete on the shared network. Two decades later and the coalition just announced that 18 different ISPs now compete for Utah resident attention over a network that now covers 21 different Utah cities. In many instances, ISPs on the network are offering symmetrical (uncapped) gigabit fiber for as little as $45 a month (plus $30 network connection fee, so $75). Some ISPs are even offering symmetrical 10 Gbps fiber for around $150 a month: “Sumo Fiber, a veteran member of the UTOPIA Open Access Marketplace, is now offering 10 Gbps symmetrical for $119, plus a $30 UTOPIA Fiber infrastructure fee, bringing the total cost to $149 per month.”

It’s a collaborative hybrid that blurs the line between private companies and government, and it works. And the prices being offered here are significantly less than locals often pay in highly developed tech-centric urban hubs like New York, San Francisco, or Seattle. Yet giant local ISPs like Comcast and Qwest spent decades trying to either sue this network into oblivion, or using their proxy policy orgs (like the “Utah Taxpayer Association”) to falsely claim this effort would end in chaos and inevitable taxpayer tears. Yet miraculously UTOPIA is profitable, and for the last 15 years, every UTOPIA project has been paid for completely through subscriber revenues. […] For years, real world experience and several different studies and reports (including our Copia study on this concept) have made it clear that open access networks and policies result in faster, better, more affordable broadband access. UTOPIA is proving it at scale, but numerous other municipalities have been following suit with the help of COVID relief and infrastructure bill funding.

WD Rolls Out New 2.5-Inch HDDs For the First Time In 7 Years

Western Digital has unveiled new 6TB external hard drives — “the first new capacity point for this hard drive form factor in about seven years,” reports Tom’s Hardware. “There is a catch, though: the HDD is slow and will unlikely fit into any mobile PCs, so it looks like it will exclusively serve portable and specialized storage products.” From the report: Western Digital’s 6TB 2.5-inch HDD is currently used for the latest versions of the company’s My Passport, Black P10, and G-Drive ArmorATD external storage devices and is not available separately. All of these drives (excluding the already very thick G-Drive ArmorATD) are thicker than their 5 TB predecessors, which may suggest that in a bid to increase the HDD’s capacity, the manufacturer simply installed another platter and made the whole drive thicker instead of developing new platters with a higher areal density.

While this is a legitimate way to expand the capacity of a hard drive, it is necessary to note that 5TB 2.5-inch HDDs already feature a 15-mm z-height, which is the highest standard z-height for 2.5-inch form-factor storage devices. As a result, these 6TB 2.5-inch drives are unlikely to fit into any mobile PC. When it comes to specifications of the latest My Passport, Black P10, and G-Drive ArmorATD external HDDs, Western Digital only discloses that they offer up to 130 MB/s read speed (just like their predecessors), feature a USB 3.2 Gen 1 (up to 5 GT/s) interface using either a modern USB Type-C or Micro USB Type-B connector and do not require an external power adapter.
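
A quick back-of-envelope check on those figures (my arithmetic, not numbers from the article or Western Digital): the drive itself, not the USB 3.2 Gen 1 link, is the bottleneck, and filling 6TB at 130 MB/s takes roughly 13 hours.

    # Back-of-envelope arithmetic, not figures from the article or Western Digital.
    capacity_bytes = 6e12                 # 6 TB
    drive_mb_s = 130                      # quoted sustained read speed, MB/s
    link_gbps = 5                         # USB 3.2 Gen 1 signaling rate
    link_mb_s = link_gbps * 1e9 * 8 / 10 / 8 / 1e6   # 8b/10b encoding, bits to bytes: ~500 MB/s

    hours_to_fill = capacity_bytes / (drive_mb_s * 1e6) / 3600

    print(f"USB 3.2 Gen 1 usable throughput: ~{link_mb_s:.0f} MB/s")              # ample headroom
    print(f"Time to fill 6 TB at {drive_mb_s} MB/s: ~{hours_to_fill:.1f} hours")  # ~12.8 hours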

Palantir’s First-Ever AI Warfare Conference

An anonymous reader quotes a report from The Guardian, written by Caroline Haskins: On May 7th and 8th in Washington, D.C., the city’s biggest convention hall welcomed America’s military-industrial complex, its top technology companies and its most outspoken justifiers of war crimes. Of course, that’s not how they would describe it. It was the inaugural “AI Expo for National Competitiveness,” hosted by the Special Competitive Studies Project — better known as the “techno-economic” thinktank created by the former Google CEO and current billionaire Eric Schmidt. The conference’s lead sponsor was Palantir, a software company co-founded by Peter Thiel that’s best known for inspiring 2019 protests against its work with Immigration and Customs Enforcement (Ice) at the height of Trump’s family separation policy. Currently, Palantir is supplying some of its AI products to the Israel Defense Forces.

The conference hall was also filled with booths representing the U.S. military and dozens of its contractors, ranging from Booz Allen Hamilton to a random company that was described to me as Uber for airplane software. At industry conferences like these, powerful people tend to be more unfiltered – they assume they’re in a safe space, among friends and peers. I was curious, what would they say about the AI-powered violence in Gaza, or what they think is the future of war?

Attendees were told the conference highlight would be a series of panels in a large room toward the back of the hall. In reality, that room hosted just one of note. Featuring Schmidt and the Palantir CEO, Alex Karp, the fire-breathing panel would set the tone for the rest of the conference. More specifically, it divided attendees into two groups: those who see war as a matter of money and strategy, and those who see it as a matter of death. The vast majority of people there fell into group one. I’ve written about relationships between tech companies and the military before, so I shouldn’t have been surprised by anything I saw or heard at this conference. But when it ended, and I departed DC for home, it felt like my life force had been completely sucked out of my body. Some of the noteworthy quotes from the panel and convention, as highlighted in Haskins’ reporting, include:

“It’s always great when the CIA helps you out,” Schmidt joked when CIA deputy director David Cohen lent him his microphone when his didn’t work.

The U.S. has to “scare our adversaries to death” in war, said Karp. On university graduates protesting Israel’s war in Gaza, Karp described their views as a “pagan religion infecting our universities” and “an infection inside of our society.”

“The peace activists are war activists,” Karp insisted. “We are the peace activists.”

A huge aspect of war in a democracy, Karp went on to argue, is leaders successfully selling that war domestically. “If we lose the intellectual debate, you will not be able to deploy any armies in the west ever,” Karp said.

A man in nuclear weapons research jokingly referred to himself as “the new Oppenheimer.”

In a Milestone, the US Exceeds 5 Million Solar Installations

According to the Solar Energy Industries Association (SEIA), the U.S. has officially surpassed 5 million solar installations. “The 5 million milestone comes just eight years after the U.S. achieved its first million in 2016 — a stark contrast to the four decades it took to reach that initial milestone since the first grid-connected solar project in 1973,” reports Electrek. From the report: Since the beginning of 2020, more than half of all U.S. solar installations have come online, and over 25% have been activated since the Inflation Reduction Act became law 20 months ago. Solar arrays have been installed on homes and businesses and as utility-scale solar farms. The U.S. solar market was valued at $51 billion in 2023. Even with changes in state policies, market trends indicate robust growth in solar installations across the U.S. According to SEIA forecasts, the number of solar installations is expected to double to 10 million by 2030 and triple to 15 million by 2034.

The residential sector represents 97% of all U.S. solar installations. This sector has consistently set new records for annual installations over the past several years, achieving new highs for five straight years and in 10 out of the last 12 years. The significant growth in residential solar can be attributed to its proven value as an investment for homeowners who wish to manage their energy costs more effectively. California is the frontrunner with 2 million solar installations, though recent state policies have significantly damaged its rooftop solar market. Meanwhile, other states are experiencing rapid growth. For example, Illinois, which had only 2,500 solar installations in 2017, now boasts over 87,000. Similarly, Florida has seen its solar installations surge from 22,000 in 2017 to 235,000 today. By 2030, 22 states or territories are anticipated to surpass 100,000 solar installations. The U.S. has enough solar installed to cover every residential rooftop in the Four Corners states of Colorado, Utah, Arizona, and New Mexico.
