‘Biggest Change Ever’ to Go Brings Generics, Native Fuzzing, and a Performance Boost

“Supporting generics has been Go’s most often requested feature, and we’re proud to deliver the generic support that the majority of users need today,” the Go blog announced this week. *

It’s part of what Go’s development team is calling the “biggest change ever to the language”.
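To give a flavor of the new syntax, here is a minimal sketch of generic Go code, assuming Go 1.18 or later (the Number constraint and Sum function below are illustrative examples, not code from the release):

    package main

    import "fmt"

    // Number is a union type constraint: any type whose underlying
    // type is int, int64, or float64 satisfies it.
    type Number interface {
        ~int | ~int64 | ~float64
    }

    // Sum adds up a slice of any element type satisfying Number.
    func Sum[T Number](values []T) T {
        var total T
        for _, v := range values {
            total += v
        }
        return total
    }

    func main() {
        fmt.Println(Sum([]int{1, 2, 3}))           // 6
        fmt.Println(Sum([]float64{1.5, 2.5, 3.0})) // 7
    }

Before 1.18, a function like Sum had to be duplicated for each numeric type or written against interface{} with runtime type assertions; type parameters move that check to compile time.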

SiliconANGLE writes that “Right out of the gate, Go 1.18 is getting a CPU speed performance boost of up to 20% for Apple M1, ARM64 and PowerPC64 chips. This is all from an expansion of Go 1.17’s calling conventions for the application binary interface on these processor architectures.”

And Go 1.18 also introduces native support for fuzz testing — making Go the first major programming language to do so, writes ZDNet:


As Google explains, fuzz testing or ‘fuzzing’ is a means of testing the vulnerability of a piece of software by throwing arbitrary or invalid data at it to expose bugs and unknown errors. This adds an additional layer of security to Go’s code that will keep it protected as its functionality evolves — crucial as attacks on software continue to escalate both in frequency and complexity. “At Google we are committed to securing the online infrastructure and applications the world depends upon,” said Eric Brewer, VP of infrastructure at Google….
While other languages support fuzzing, Go is the first major programming language to incorporate it into its core toolchain, meaning — unlike other languages — third-party support integrations aren’t required.
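Concretely, a Go 1.18 fuzz target is an ordinary test function whose name starts with Fuzz and which takes a *testing.F. The sketch below follows the pattern from Go's fuzzing tutorial; the Reverse function is a hypothetical example of code under test, and the two functions would normally live in reverse.go and reverse_test.go respectively:

    package reverse

    import (
        "testing"
        "unicode/utf8"
    )

    // Reverse returns s reversed rune by rune (the code under test).
    func Reverse(s string) string {
        runes := []rune(s)
        for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
            runes[i], runes[j] = runes[j], runes[i]
        }
        return string(runes)
    }

    // FuzzReverse checks properties that should hold for any input:
    // reversing twice round-trips, and valid UTF-8 stays valid.
    func FuzzReverse(f *testing.F) {
        // Seed corpus the fuzzer starts from and mutates.
        f.Add("hello")
        f.Add("")
        f.Fuzz(func(t *testing.T, s string) {
            rev := Reverse(s)
            if doubleRev := Reverse(rev); s != doubleRev {
                t.Errorf("before: %q, after double reverse: %q", s, doubleRev)
            }
            if utf8.ValidString(s) && !utf8.ValidString(rev) {
                t.Errorf("Reverse produced invalid UTF-8: %q", rev)
            }
        })
    }

Running go test executes only the seed inputs as regression tests; go test -fuzz=FuzzReverse turns on coverage-guided input generation, which keeps mutating inputs until it finds a failure or is stopped.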

Google is emphasizing Go’s security features — and its widespread adoption. ZDNet writes:

Google created Go in 2007; the language was designed specifically to help software engineers build secure, open-source enterprise applications for modern, multi-core computing systems. More than three-quarters of Cloud Native Computing Foundation projects, including Kubernetes and Istio, are written in Go, says Google. [Also Docker and etcd.] According to data from Stack Overflow, some 10% of developers are writing in Go worldwide, and there are signs that more recruiters are seeking out Go coders in their search for tech talent…. “Although we have a dedicated Go team at Google, we welcome a significant amount of contributions from our community. It’s a shared effort, and with their updates we’re helping our community achieve Go’s long-term vision.”
Or, as the Go blog says:

We want to thank every Go user who filed a bug, sent in a change, wrote a tutorial, or helped in any way to make Go 1.18 a reality. We couldn’t do it without you. Thank you.

Enjoy Go 1.18!

* Supporting generics “includes major — but fully backward-compatible — changes to the language,” the release notes explain, though they also add a few cautionary notes:

These new language changes required a large amount of new code that has not had significant testing in production settings. That will only happen as more people write and use generic code. We believe that this feature is well implemented and high quality. However, unlike most aspects of Go, we can’t back up that belief with real world experience. Therefore, while we encourage the use of generics where it makes sense, please use appropriate caution when deploying generic code in production.

While we believe that the new language features are well designed and clearly specified, it is possible that we have made mistakes…. It is possible that there will be code using generics that will work with the 1.18 release but break in later releases. We do not plan or expect to make any such change. However, breaking 1.18 programs in future releases may become necessary for reasons that we cannot today foresee. We will minimize any such breakage as much as possible, but we can’t guarantee that the breakage will be zero.

Read more of this story at Slashdot.

More Apple M1 Ultra Benchmarks Show It Doesn’t Beat the Best GPUs from Nvidia and AMD

Tom’s Guide tested a Mac Studio workstation equipped with an M1 Ultra with the Geekbench 5.4 CPU benchmarks “to get a sense of how effectively it handles single-core and multi-core workflows.”

“Since our M1 Ultra is the best you can buy (at a rough price of $6,199), it sports a 20-core CPU and a 64-core GPU, as well as 128GB of unified memory (RAM) and a 2TB SSD.”

Slashdot reader exomondo shares their results:
We ran the M1 Ultra through the Geekbench 5.4 CPU benchmarking test multiple times and, after averaging the results, we found that the M1 Ultra does indeed outperform top-of-the-line Windows gaming PCs when it comes to multi-core CPU performance. Specifically, the M1 Ultra outperformed a recent Alienware Aurora R13 desktop we tested (w/ Intel Core i7-12700KF, GeForce RTX 3080, 32GB RAM), an Origin Millennium (2022) we just reviewed (Core i9-12900K CPU, RTX 3080 Ti GPU, 32GB RAM), and an even more powerful, 3090-equipped HP Omen 45L we tested recently (Core i9-12900K, GeForce RTX 3090, 64GB RAM) in the Geekbench 5.4 multi-core CPU benchmark.

However, as Tom’s Guide’s chart of results shows, the M1 Ultra couldn’t match its Intel-powered competition in terms of CPU single-core performance. The Ultra-powered Studio also proved slower to transcode video than the aforementioned gaming PCs, taking nearly 4 minutes to transcode a 4K video down to 1080p using Handbrake. All of the gaming PCs I just mentioned completed the same task faster, over 30 seconds faster in the case of the Origin Millennium. Before we even get into the GPU performance tests, it’s clear that while the M1 Ultra excels at multi-core workflows, it doesn’t trounce the competition across the board. When we ran our Mac Studio review unit through the Geekbench 5.4 OpenCL test (which benchmarks GPU performance by simulating common tasks like image processing), the Ultra earned an average score of 83,868. That’s quite good, but again it fails to outperform Nvidia GPUs in similarly-priced systems.

They also share some results from the OpenCL Benchmarks browser, which publicly displays scores from different GPUs that users have uploaded:
Apple’s various M1 chips are on the list as well, and while the M1 Ultra leads that pack it’s still quite a ways down the list, with an average score of 83,940. Incidentally, that means it ranks below much older GPUs like Nvidia’s GeForce RTX 2070 (85,639) and AMD’s Radeon VII (86,509). So here again we see that while the Ultra is fast, it can’t match the graphical performance of GPUs that are 2-3 years old at this point — at least, not in these synthetic benchmarks. These tests don’t always accurately reflect real-world CPU and GPU performance, which can be dramatically influenced by what programs you’re running and how they’re optimized to make use of your PC’s components.
Their conclusion?
When it comes to tasks like photo editing or video and music production, the M1 Ultra w/ 128GB of RAM blazes through workloads, and it does so while remaining whisper-quiet. It also makes the Mac Studio a decent gaming machine, as I was able to play less demanding games like Crusader Kings III, Pathfinder: Wrath of the Righteous and Total War: Warhammer II at reasonable (30+ fps) framerates. But that’s just not on par with the performance we expect from high-end GPUs like the Nvidia GeForce RTX 3090….
Of course, if you don’t care about games and are in the market for a new Mac with more power than just about anything Apple’s ever made, you want the Studio with M1 Ultra.

Read more of this story at Slashdot.

The Free Software Foundation’s ‘LibrePlanet’ Conference Happens Online This Weekend

LibrePlanet, the annual conference hosted by the Free Software Foundation, will be happening online this weekend. The event “provides an opportunity for community activists, domain experts, and people seeking solutions for themselves to come together in order to discuss current issues in technology and ethics,” according to its web page. This year’s LibrePlanet theme is “Living Liberation”.

And while you’re listening to the presentations, you can apparently also interact with the rest of the community:

Each LibrePlanet room has its own IRC channel on the Libera.Chat network… Want to interact with other conference-goers in a virtual space? Join us on LibreAdventure, where you’ll be able to video chat with fellow free software users, journey to the stars, and walk around a replica of the FSF office!

Our Minetest server is back by popular demand, and is now running version 5.x of everyone’s favorite free software voxel sandbox game. You can install Minetest through your GNU/Linux distro’s package manager, and point your client to minetest.libreplanet.org with the default port 30000.

Sunday’s presentations include “Living in freedom with GNU Emacs” and “Hacking my brain: Free virtual reality implementations and their potential for therapeutic use.”

And Sunday will also include a talk from Seth Schoen, the first staff technologist at the Electronic Frontier Foundation (who helped develop the Let’s Encrypt certificate authority) titled “Reducing Internet address waste: The IPv4 unicast extensions project.”

View the complete schedule here.

Read more of this story at Slashdot.

GTA V is Back for a New Generation

Rockstar’s anarchic masterpiece has been freshened up for PlayStation 5 and Xbox Series X nine years after it was originally released. From a report: And so the boys are back in town. Michael, Trevor and Franklin, the sociopathic trio that lit up the gaming scene nine years ago, have been made over for the 2020s with this crisp new reworking of Grand Theft Auto V for PlayStation 5 and Xbox Series X. The game’s violent narrative of shifting loyalties and doomed machismo felt wild and edgy back in 2013, so how does it fare in the modern era? The good news is, the overhauled visuals definitely give the game new zest and freshness. You can play in either 4K at 30 frames-per-second or in a performance mode that lowers the resolution but bumps up the frame rate to 60, giving wonderful fluidity to car chases, swooping helicopter rides and mass shootouts. The DualSense controller features on PlayStation 5 are very good too: improved driving feedback via the analogue triggers makes the game’s cumbersome handling a little easier to, well, handle. It’s been quite a joy to rediscover this alternate-reality California; to see the sun drop behind the downtown skyscrapers, or to hit Senora as dawn splashes orange-yellow light across the burning desert.

What the vast upshift in resolution can’t hide is the fact that GTA V is a game originally designed for consoles that are now two generations out of date. The character models and facial details look positively archaic compared with, say, Horizon Forbidden West, and the building architecture too seems almost quaint in its stylised blockiness. Compare it with 2018’s Red Dead Redemption 2 and you can see just how far Rockstar has come in its building of intricate next-gen worlds. In many ways, however, the design of the world itself has not been bettered in the decade since it arrived. The size of San Andreas, the sheer variety of landscapes and the diversity of actions and activities is still incredible — Cyberpunk 2077 may look better, but it doesn’t let you play golf or tennis, or go on day trips on a bike, or set up incredibly complex car or helicopter stunts. Los Santos is a vast playground, a gangster Fortnite — a factor underlined by the massive community that still gathers in GTA Online (which is where we find this new version’s only totally new content — Hao’s Special Works, which lets you unlock faster cars and new tasks).

Read more of this story at Slashdot.

NYT Takes Down Third-Party Wordle Archive

The New York Times, which acquired Wordle in January, is putting an end to unofficial takes on the game. The latest casualty is Wordle Archive, a website that let users play through hundreds of previous daily five-letter Wordle puzzles. According to Ars Technica, the site “has been taken down at the request of Wordle owner The New York Times.” From the report: The archival site, which offered a backward-looking play feature that’s not available in the NYT’s official version of Wordle, had been up since early January. But it was taken down last week and replaced with a message saying, “Sadly, the New York Times has requested that the Wordle Archive be taken down.” A Twitter search shows dozens of daily Wordle Archive players who were willing to share their results on social media up through March 7. “The usage was unauthorized, and we were in touch with them,” a New York Times representative said in response to an Ars Technica comment request. “We don’t plan to comment beyond that.”

The Wordle Archive is still fully playable in its own archived form (as of March 5) at the Internet Archive, appropriately enough. Other sites that allow you to play archived Wordle puzzles are not hard to find, as are sites that let you play unlimited Wordle puzzles beyond the usual one-a-day limit. But some of those sites may be under threat, if the Times’ treatment of Wordle Archive is any indication.

Read more of this story at Slashdot.

The Original Winamp Skin Is Selling As An NFT

Winamp will sell a non-fungible token (NFT) linked to its media player’s original 1997 graphical skin, becoming the latest company to blend nostalgia and crypto. The Verge reports: Winamp will put the NFT up for auction through OpenSea between May 16th and May 22nd, followed by a separate sale of 1997 total NFTs based on 20 artworks derived from the original skin. The proceeds will go to the Winamp Foundation, which promises to donate them to charity projects, starting with the Belgian nonprofit Music Fund.

The NFT sale appears to be a combination of a publicity move and a fundraising effort. Winamp is sourcing the derivative art NFTs by asking artists to submit Winamp-based works between now and April 15th, then giving selected artists 20 percent of the proceeds from each sale of their image as an NFT. Nineteen of the pieces will sell in editions of 100 copies, and the remaining one will have 97; they’ll all sell for 0.08 Ethereum — around $210 at current exchange rates. The artists will get 10 percent of any royalties on later sales, where the seller will set their own price.

Winamp’s head of business development Thierry Ascarez tells The Verge that buyers will get a blockchain token linked to an image of either the original skin seen above or one of its derivatives, which is a common setup for NFTs. Buyers will have the right to “copy, reproduce, and display” the image, but they won’t own the copyright. Likewise, selected artists will agree to transfer all intellectual property for their work to Winamp, according to a page of terms and conditions (PDF).

Read more of this story at Slashdot.

Congressional Bills Would Ban Tech Mergers Over $5 Billion

Senator Elizabeth Warren and House Representative Mondaire Jones have introduced legislation in their respective congressional chambers that would effectively ban large technology mergers. Engadget reports: The Prohibiting Anticompetitive Mergers Act (PAMA) would make it illegal to pursue “prohibited mergers,” including those worth more than $5 billion or which provide market shares beyond 25 percent for employers and 33 percent for sellers. The bills would also give antitrust regulators more power to halt and review mergers. They would have authority to reject mergers outright, without requiring court orders. They would likewise bar mergers from companies with track records of antitrust violations or other instances of “corporate crime” in the past decade. Officials would have to gauge the impact of these acquisition on labor forces, and wouldn’t be allowed to negotiate with the companies to secure “remedies” for clearing mergers.

Crucially, PAMA would formalize procedures for reviewing past mergers and breaking up “harmful deals” that allegedly hurt competition. The Federal Trade Commission has signaled a willingness to split up tech giants like Meta despite approving mergers years earlier. PAMA might make it easier to unwind those acquisitions and force brands like Instagram and WhatsApp to operate as separate businesses.

Read more of this story at Slashdot.

Brain-Imaging Studies Hampered by Small Data Sets, Study Finds

For two decades, researchers have used brain-imaging technology to try to identify how the structure and function of a person’s brain connects to a range of mental-health ailments, from anxiety and depression to suicidal tendencies. But a new paper, published Wednesday in Nature, calls into question whether much of this research is actually yielding valid findings. The New York Times reports: Many such studies, the paper’s authors found, tend to include fewer than two dozen participants, far shy of the number needed to generate reliable results. “You need thousands of individuals,” said Scott Marek, a psychiatric researcher at the Washington University School of Medicine in St. Louis and an author of the paper. He described the finding as a “gut punch” for the typical studies that use imaging to try to better understand mental health.

Studies that use magnetic-resonance imaging technology commonly temper their conclusions with a cautionary statement noting the small sample size. But enlisting participants can be time-consuming and expensive, ranging from $600 to $2,000 an hour, said Dr. Nico Dosenbach, a neurologist at Washington University School of Medicine and another author on the paper. The median number of subjects in mental-health-related studies that use brain imaging is around 23, he added. But the Nature paper demonstrates that the data drawn from just two dozen subjects is generally insufficient to be reliable and can in fact yield ‘massively inflated’ findings, Dr. Dosenbach said. The findings from the Nature paper can “absolutely” be applied to other fields beyond mental health, said Marek. “My hunch is this is much more about population science than it is about any one of those fields,” he said.

Read more of this story at Slashdot.