Rite Aid Says Breach Exposes Sensitive Details of 2.2 Million Customers

Rite Aid, the third-largest U.S. drug store chain, reported a ransomware attack that compromised the personal data of 2.2 million customers. The exposed data includes names, addresses, dates of birth, and driver’s license numbers or other forms of government-issued ID from transactions between June 2017 and July 2018.

“On June 6, 2024, an unknown third party impersonated a company employee to compromise their business credentials and gain access to certain business systems,” the company said in a filing. “We detected the incident within 12 hours and immediately launched an internal investigation to terminate the unauthorized access, remediate affected systems and ascertain if any customer data was impacted.” Ars Technica’s Dan Goodin reports: RansomHub, the name of a relatively new ransomware group, has taken credit for the attack, which it said yielded more than 10GB of customer data. RansomHub emerged earlier this year as a rebranded version of a group known as Knight. According to security firm Check Point, RansomHub became the most prevalent ransomware group following an international operation by law enforcement in May that took down much of the infrastructure used by rival ransomware group Lockbit.

On its dark web site, RansomHub said it was in advanced stages of negotiation with Rite Aid officials when the company suddenly cut off communications. A Rite Aid official didn’t respond to questions sent by email. Rite Aid has also declined to say if the employee account compromised in the breach was protected by multifactor authentication.

Read more of this story at Slashdot.

Microsoft Unveils a Large Language Model That Excels At Encoding Spreadsheets

Microsoft has quietly announced the first details of its new “SpreadsheetLLM,” claiming it has the “potential to transform spreadsheet data management and analysis, paving the way for more intelligent and efficient user interactions.” You can read more details about the model in a pre-print paper available here. Jasper Hamill reports via The Stack: One of the problems with using LLMs in spreadsheets is that they get bogged down by too many tokens (basic units of information the model processes). To tackle this, Microsoft developed SheetCompressor, an “innovative encoding framework that compresses spreadsheets effectively for LLMs.” “It significantly improves performance in spreadsheet table detection tasks, outperforming the vanilla approach by 25.6% in GPT4’s in-context learning setting,” Microsoft added. The model is made of three modules: structural-anchor-based compression, inverse index translation, and data-format-aware aggregation.

The first of these modules involves placing “structural anchors” throughout the spreadsheet to help the LLM understand what’s going on better. It then removes “distant, homogeneous rows and columns” to produce a condensed “skeleton” version of the table. Index translation addresses the challenge caused by spreadsheets with numerous empty cells and repetitive values, which use up too many tokens. “To improve efficiency, we depart from traditional row-by-row and column-by-column serialization and employ a lossless inverted index translation in JSON format,” Microsoft wrote. “This method creates a dictionary that indexes non-empty cell texts and merges addresses with identical text, optimizing token usage while preserving data integrity.” […]
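The inverted-index step is easy to picture in code. The sketch below is a minimal Python illustration of the idea rather than Microsoft's actual implementation; the toy cell layout and the exact dictionary shape are assumptions. Instead of serializing every cell row by row, it maps each distinct text value to the list of addresses holding it, so repeated values share one entry and empty cells cost nothing:

```python
import json

# Toy spreadsheet: address -> cell text, with repeats and implicit empty cells.
cells = {
    "A1": "Region", "B1": "Status",
    "A2": "EMEA",   "B2": "OK",
    "A3": "EMEA",   "B3": "OK",
    "A4": "APAC",   "B4": "OK",
}

def invert(cells):
    """Build a lossless inverted index: text -> list of cell addresses.

    Identical values merge into a single dictionary entry, and empty
    cells are simply absent, so blank or repetitive regions consume
    almost no tokens when serialized for an LLM.
    """
    index = {}
    for addr, text in cells.items():
        index.setdefault(text, []).append(addr)
    return index

def restore(index):
    """Invert back to address -> text, showing the encoding is lossless."""
    return {addr: text for text, addrs in index.items() for addr in addrs}

index = invert(cells)
print(json.dumps(index))        # e.g. "OK" appears once, with three addresses
assert restore(index) == cells  # round-trips with no information loss
```

The token saving comes from the merge: a column of a thousand identical "OK" cells serializes as one key plus a list of addresses instead of a thousand repeated values.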

After conducting a “comprehensive evaluation of our method on a variety of LLMs” Microsoft found that SheetCompressor significantly reduces token usage for spreadsheet encoding by 96%. Moreover, SpreadsheetLLM shows “exceptional performance in spreadsheet table detection,” which is the “foundational task of spreadsheet understanding.” The new LLM builds on the Chain of Thought methodology to introduce a framework called “Chain of Spreadsheet” (CoS), which can “decompose” spreadsheet reasoning into a table detection-match-reasoning pipeline.

Read more of this story at Slashdot.

Italy Reconsiders Nuclear Energy 35 Years After Shutting Down Last Reactor

Italian Prime Minister Giorgia Meloni plans to revive Italy’s nuclear energy sector, focusing on small modular reactors to be operational within a decade. She said that nuclear energy could constitute at least 11% of the country’s electricity mix by 2050. Semafor reports: Italy’s energy minister told the Financial Times that the government would introduce legislation to support investment in small modular reactors, which could be operational within 10 years. […] In Italy, concerns about energy security since Russia’s invasion of Ukraine have pushed the government to reconsider nuclear power, Bloomberg wrote. Energy minister Pichetto Fratin told the Financial Times he was confident that Italians’ historic “aversion” could be overcome, as nuclear technology now has “different levels of safety and benefits families and businesses.” In Italy, safety is also top of mind: The Chernobyl tragedy of 1986 was the trigger for it to cease nuclear production in the first place, and the 2011 Fukushima disaster reignited those concerns. As of April, only 51% of Italians approved of nuclear power, according to polls shared by Il Sole 24 Ore.

The plan to introduce small modular reactors in Italy could add to the country’s history of failure in nuclear energy, a former Italian lawmaker and researcher argued in Italian outlet Il Fatto Quotidiano, writing that these reactors are expensive and produce too little energy to justify an investment in them. They could also become obsolete within the next decade, the timeline for the government to introduce them, Italian outlet Domani added, and be overtaken by nuclear fusion reactors, which are more efficient and have “virtually no environmental impact.” Italy’s main oil company, Eni, has signed a deal with MIT spinout Commonwealth Fusion Systems, with the goal of providing the first operational nuclear fusion plant by 2030.

Read more of this story at Slashdot.

OW2: ‘The European Union Must Keep Funding Free Software’

OW2, the non-profit international consortium dedicated to developing open-source middleware, published an open letter to the European Commission today. They’re urging the European Union to continue funding free software after noticing that the Next Generation Internet (NGI) programs were no longer mentioned in Cluster 4 of the 2025 Horizon Europe funding plans.

OW2 argues that discontinuing NGI funding would weaken Europe’s technological ecosystem, leaving many projects under-resourced and jeopardizing Europe’s position in the global digital landscape. The letter reads, in part: NGI programs have shown their strength and importance to support the European software infrastructure, as a generic funding instrument to fund digital commons and ensure their long-term sustainability. We find this transformation incomprehensible, moreover when NGI has proven efficient and economical to support free software as a whole, from the smallest to the most established initiatives. This ecosystem diversity backs the strength of European technological innovation, and maintaining the NGI initiative to provide structural support to software projects at the heart of worldwide innovation is key to enforce the sovereignty of a European infrastructure. Contrary to common perception, technical innovations often originate from European rather than North American programming communities, and are mostly initiated by small-scaled organizations.

The previous Cluster 4 allocated €27 million to:
– “Human centric Internet aligned with values and principles commonly shared in Europe”;
– “A flourishing internet, based on common building blocks created within NGI, that enables better control of our digital life”;
– “A structured eco-system of talented contributors driving the creation of new internet commons and the evolution of existing internet commons.”

In the name of these challenges, more than 500 projects received NGI funding in the first 5 years, backed by 18 organizations managing these European funding consortia.

Read more of this story at Slashdot.

How Will AI Transform the Future of Work?

An anonymous reader shared this report from the Guardian:

In March, after analysing 22,000 tasks in the UK economy, covering every type of job, a model created by the Institute for Public Policy Research predicted that 59% of tasks currently done by humans — particularly women and young people — could be affected by AI in the next three to five years. In the worst-case scenario, this would trigger a “jobs apocalypse” where eight million people lose their jobs in the UK alone…. Darrell West, author of The Future of Work: AI, Robots and Automation, says that just as policy innovations were needed in Thomas Paine’s time to help people transition from an agrarian to an industrial economy, they are needed today, as we transition to an AI economy. “There’s a risk that AI is going to take a lot of jobs,” he says. “A basic income could help navigate that situation.”

AI’s impact will be far-reaching, he predicts, affecting blue- and white-collar jobs. “It’s not just going to be entry-level people who are affected. And so we need to think about what this means for the economy, what it means for society as a whole. What are people going to do if robots and AI take a lot of the jobs?”

Nell Watson, a futurist who focuses on AI ethics, has a more pessimistic view. She believes we are witnessing the dawn of an age of “AI companies”: corporate environments where very few — if any — humans are employed at all. Instead, at these companies, lots of different AI sub-personalities will work independently on different tasks, occasionally hiring humans for “bits and pieces of work”. These AI companies have the potential to be “enormously more efficient than human businesses”, driving almost everyone else out of business, “apart from a small selection of traditional old businesses that somehow stick in there because their traditional methods are appreciated”… As a result, she thinks it could be AI companies, not governments, that end up paying people a basic income.

AI companies, meanwhile, will have no salaries to pay. “Because there are no human beings in the loop, the profits and dividends of this company could be given to the needy. This could be a way of generating support income in a way that doesn’t need the state welfare. It’s fully compatible with capitalism. It’s just that the AI is doing it.”

Read more of this story at Slashdot.

Linux Kernel 6.10 Released

“The latest version of the Linux kernel adds an array of improvements,” writes the blog OMG Ubuntu, “including a new memory sealing system call, a speed boost for AES-XTS encryption on Intel and AMD CPUs, and expanding Rust language support within the kernel to RISC-V.”
Plus, like in all kernel releases, there’s a glut of groundwork to offer “initial support” for upcoming CPUs, GPUs, NPUs, Wi-Fi, and other hardware (most of which we don’t use yet, but which needs Linux support in place before devices that do use it reach users)…

Linux 6.10 adds (after much gnashing) the mseal() system call to prevent changes being made to portions of the virtual address space. For now, this will mainly benefit Google Chrome, which plans to use it to harden its sandboxing. Work is underway by kernel contributors to allow other apps to benefit, though. A similarly initially-controversial change merged is a new memory-allocation profiling subsystem. This helps developers fine-tune memory usage and more readily identify memory leaks. An explainer from LWN summarizes it well.
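As a rough illustration of what mseal() offers, the sketch below invokes it directly through Python's ctypes. The x86_64 syscall number (462) and the zero flags argument are assumptions based on the 6.10 merge; on older kernels or other platforms the call simply fails with ENOSYS and the script reports that instead:

```python
import ctypes
import errno
import os

libc = ctypes.CDLL(None, use_errno=True)
SYS_mseal = 462  # assumed x86_64 syscall number from the Linux 6.10 merge

PROT_READ, PROT_WRITE = 0x1, 0x2
MAP_PRIVATE, MAP_ANONYMOUS = 0x02, 0x20  # Linux values (assumption)
PAGE = os.sysconf("SC_PAGE_SIZE")

# Map one anonymous read/write page to experiment on.
libc.mmap.restype = ctypes.c_void_p
addr = libc.mmap(None, ctypes.c_size_t(PAGE), PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, ctypes.c_long(0))

# Seal the mapping: no further mprotect/munmap/mremap is allowed on it.
ret = libc.syscall(SYS_mseal, ctypes.c_void_p(addr), ctypes.c_size_t(PAGE), 0)
if ret == 0:
    # Once sealed, attempts to change protections are rejected (EPERM),
    # which is exactly the property Chrome wants for its sandbox pages.
    blocked = libc.mprotect(ctypes.c_void_p(addr), ctypes.c_size_t(PAGE),
                            PROT_READ) == -1
    result = "sealed, mprotect blocked" if blocked else "sealed"
else:
    result = "mseal unavailable: " + errno.errorcode.get(ctypes.get_errno(), "?")
print(result)
```

The point of the second mprotect() call is the demonstration: after sealing, even the process itself can no longer loosen or remap that address range, closing off a class of exploit techniques that rely on flipping page permissions.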

Elsewhere, Linux 6.10 offers encrypted interactions with trusted platform modules (TPM) in order to “make the kernel’s use of the TPM reasonably robust in the face of external snooping and packet alteration attacks”. The documentation for this feature explains: “for every in-kernel operation we use null primary salted HMAC to protect the integrity [and] we use parameter encryption to protect key sealing and parameter decryption to protect key unsealing and random number generation.” Sticking with security, the Linux kernel’s Landlock security module can now apply policies to ioctl() calls (Input/Output Control), restricting potential misuse and improving overall system security.

On the networking side there’s significant performance improvements to zero-copy send operations using io_uring, and the newly-added ability to “bundle” multiple buffers for send and receive operations also offers an uptick in performance…

A couple of months ago Canonical announced Ubuntu support for the RISC-V Milk-V Mars single-board computer. Linux 6.10 mainlines support for the Milk-V Mars, which will make that effort a lot more viable (especially with the Ubuntu 24.10 kernel likely to be v6.10 or newer). Other RISC-V improvements abound in Linux 6.10, including support for the Rust language; boot image compression in BZ2, LZ4, LZMA, LZO, and Zstandard (instead of only Gzip); and support for newer AMD GPUs thanks to kernel-mode FPU support in RISC-V.

Phoronix has their own rundown of Linux 6.10, plus a list of some of the highlights, which includes:

The initial DRM Panic infrastructure
The new Panthor DRM driver for newer Arm Mali graphics
Better AMD ROCm/AMDKFD support for “small” Ryzen APUs and new additions for AMD Zen 5
AMD GPU display support on RISC-V hardware thanks to RISC-V kernel mode FPU
More Intel Xe2 graphics preparations
Better IO_uring zero-copy performance
Faster AES-XTS disk/file encryption with modern Intel and AMD CPUs
Continued online repair work for XFS
Steam Deck IMU support
TPM bus encryption and integrity protection

Read more of this story at Slashdot.