Cloudflare Reports Almost 7% of Internet Traffic Is Malicious

In its latest State of Application Security Report, Cloudflare says 6.8% of traffic on the internet is malicious, “up a percentage point from last year’s study,” writes ZDNet’s Steven Vaughan-Nichols. “Cloudflare, the content delivery network and security services company, thinks the rise is due to wars and elections. For example, many attacks against Western-interest websites are coming from pro-Russian hacktivist groups such as REvil, KillNet, and Anonymous Sudan.” From the report: […] Distributed Denial of Service (DDoS) attacks continue to be cybercriminals’ weapon of choice, making up over 37% of all mitigated traffic. The scale of these attacks is staggering. In the first quarter of 2024 alone, Cloudflare blocked 4.5 million unique DDoS attacks. That total is nearly a third of all the DDoS attacks they mitigated the previous year. But it’s not just about the sheer volume of DDoS attacks. The sophistication of these attacks is increasing, too. Last August, Cloudflare mitigated a massive HTTP/2 Rapid Reset DDoS attack that peaked at 201 million requests per second (RPS). That number is three times bigger than any previously observed attack.

The report also highlights the increased importance of application programming interface (API) security. With 60% of dynamic web traffic now API-related, these interfaces are a prime target for attackers. API traffic is growing twice as fast as traditional web traffic. What’s worrying is that many organizations appear to be unaware of a quarter of their API endpoints. Organizations that don’t have a tight grip on their internet services or website APIs can’t possibly protect themselves from attackers. Evidence suggests the average enterprise application now uses 47 third-party scripts and connects to nearly 50 third-party destinations. Do you know and trust these scripts and connections? You should: each script or connection is a potential security risk. For instance, the recent Polyfill.io JavaScript incident affected over 380,000 sites.

Finally, about 38% of all HTTP requests processed by Cloudflare are classified as automated bot traffic. Some bots are benign and perform a needed service, such as customer service chatbots and authorized search engine crawlers. However, as many as 93% of bots are potentially malicious.

Rite Aid Says Breach Exposes Sensitive Details of 2.2 Million Customers

Rite Aid, the third-largest U.S. drug store chain, reported it suffered a ransomware attack that compromised the personal data of 2.2 million customers. The data exposed includes names, addresses, dates of birth, and driver’s license numbers or other forms of government-issued ID from transactions between June 2017 and July 2018.

“On June 6, 2024, an unknown third party impersonated a company employee to compromise their business credentials and gain access to certain business systems,” the company said in a filing. “We detected the incident within 12 hours and immediately launched an internal investigation to terminate the unauthorized access, remediate affected systems and ascertain if any customer data was impacted.” Ars Technica’s Dan Goodin reports: RansomHub, the name of a relatively new ransomware group, has taken credit for the attack, which it said yielded more than 10GB of customer data. RansomHub emerged earlier this year as a rebranded version of a group known as Knight. According to security firm Check Point, RansomHub became the most prevalent ransomware group following an international operation by law enforcement in May that took down much of the infrastructure used by rival ransomware group Lockbit.

On its dark web site, RansomHub said it was in advanced stages of negotiation with Rite Aid officials when the company suddenly cut off communications. A Rite Aid official didn’t respond to questions sent by email. Rite Aid has also declined to say if the employee account compromised in the breach was protected by multifactor authentication.

Microsoft Unveils a Large Language Model That Excels At Encoding Spreadsheets

Microsoft has quietly announced the first details of its new “SpreadsheetLLM,” claiming it has the “potential to transform spreadsheet data management and analysis, paving the way for more intelligent and efficient user interactions.” More details about the model are available in a pre-print paper. Jasper Hamill reports via The Stack: One of the problems with using LLMs in spreadsheets is that they get bogged down by too many tokens (the basic units of information the model processes). To tackle this, Microsoft developed SheetCompressor, an “innovative encoding framework that compresses spreadsheets effectively for LLMs.” “It significantly improves performance in spreadsheet table detection tasks, outperforming the vanilla approach by 25.6% in GPT4’s in-context learning setting,” Microsoft added. SheetCompressor consists of three modules: structural-anchor-based compression, inverted-index translation, and data-format-aware aggregation.

The first of these modules involves placing “structural anchors” throughout the spreadsheet to give the LLM a clearer picture of the table’s layout. It then removes “distant, homogeneous rows and columns” to produce a condensed “skeleton” version of the table. Inverted-index translation addresses the challenge posed by spreadsheets with numerous empty cells and repetitive values, which use up too many tokens. “To improve efficiency, we depart from traditional row-by-row and column-by-column serialization and employ a lossless inverted index translation in JSON format,” Microsoft wrote. “This method creates a dictionary that indexes non-empty cell texts and merges addresses with identical text, optimizing token usage while preserving data integrity.” […]
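
To make the inverted-index idea concrete, here is a minimal Python sketch written against the paper's description as quoted above; the function name and the flat address lists are illustrative assumptions, and the real encoder also merges contiguous addresses into ranges and applies data-format-aware aggregation, which this sketch omits.

    from collections import defaultdict
    import json

    def inverted_index_translation(cells):
        # cells maps addresses like "A1" to their text content.
        # Each non-empty cell text becomes a dictionary key whose value
        # lists every address holding that text, so empty cells cost no
        # tokens and repeated values are stored only once.
        index = defaultdict(list)
        for address, text in cells.items():
            if text:
                index[text].append(address)
        return json.dumps(index)

    # A mostly empty sheet with one repeated value:
    sheet = {"A1": "Year", "B1": "Revenue", "A2": "2023", "A3": "2024",
             "B2": "1,000", "B3": "1,000", "C1": "", "C2": "", "C3": ""}
    print(inverted_index_translation(sheet))
    # {"Year": ["A1"], "Revenue": ["B1"], "2023": ["A2"],
    #  "2024": ["A3"], "1,000": ["B2", "B3"]}

The three empty C-column cells vanish from the encoding entirely, and the duplicated “1,000” is keyed only once; that is exactly the redundancy the paper says uses up too many tokens under row-by-row serialization.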

After conducting a “comprehensive evaluation of our method on a variety of LLMs,” Microsoft found that SheetCompressor reduces token usage for spreadsheet encoding by 96%. Moreover, SpreadsheetLLM shows “exceptional performance in spreadsheet table detection,” which is the “foundational task of spreadsheet understanding.” The new LLM builds on the Chain of Thought methodology to introduce a framework called “Chain of Spreadsheet” (CoS), which can “decompose” spreadsheet reasoning into a table detection-match-reasoning pipeline.
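
As a rough illustration of what such a pipeline could look like in code, here is a hypothetical Python sketch; the two-call structure, the prompt wording, and the llm callable (any function mapping a prompt string to a reply) are this sketch's assumptions, not details from the paper.

    def chain_of_spreadsheet(compressed_sheet, question, llm):
        # Stage 1: table detection and matching. Ask the model which
        # region of the compressed encoding is relevant to the question.
        region = llm(
            "Spreadsheet (compressed):\n" + compressed_sheet +
            "\nReturn only the cell range of the table relevant to: " + question)
        # Stage 2: reasoning. Answer using only the matched table rather
        # than feeding the entire sheet back to the model.
        return llm(
            "Table at " + region + " within:\n" + compressed_sheet +
            "\nAnswer: " + question)

    # Works with any prompt-to-text client; a trivial stand-in is shown here.
    answer = chain_of_spreadsheet('{"Revenue": ["B1"]}', "Where is revenue recorded?",
                                  lambda prompt: "B1")
    print(answer)

The point of the decomposition is that the final reasoning step sees only the detected table rather than the whole sheet, keeping the prompt small even for large workbooks.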

Italy Reconsiders Nuclear Energy 35 Years After Shutting Down Last Reactor

Italian Prime Minister Giorgia Meloni plans to revive Italy’s nuclear energy sector, focusing on small modular reactors to be operational within a decade. She said that nuclear energy could constitute at least 11% of the country’s electricity mix by 2050. Semafor reports: Italy’s energy minister told the Financial Times that the government would introduce legislation to support investment in small modular reactors, which could be operational within 10 years. […] In Italy, concerns about energy security since Russia’s invasion of Ukraine have pushed the government to reconsider nuclear power, Bloomberg wrote. Energy minister Pichetto Fratin told the Financial Times he was confident that Italians’ historic “aversion” could be overcome, as nuclear technology now has “different levels of safety and benefits families and businesses.” Safety is also top of mind: The Chernobyl tragedy of 1986 was the trigger for Italy to cease nuclear production in the first place, and the 2011 Fukushima disaster reignited those concerns. As of April, only 51% of Italians approved of nuclear power, according to polls shared by Il Sole 24 Ore.

The plan to introduce small modular reactors in Italy could add to the country’s history of failure in nuclear energy, a former Italian lawmaker and researcher argued in Italian outlet Il Fatto Quotidiano, writing that these reactors are expensive and produce too little energy to justify an investment in them. They could also become obsolete within the next decade, the timeline for the government to introduce them, Italian outlet Domani added, and be overtaken by nuclear fusion reactors, which are more efficient and have “virtually no environmental impact.” Italy’s main oil company, Eni, has signed a deal with MIT spinout Commonwealth Fusion Systems, with the goal of providing the first operational nuclear fusion plant by 2030.

OW2: ‘The European Union Must Keep Funding Free Software’

OW2, the non-profit international consortium dedicated to developing open-source middleware, published an open letter to the European Commission today. They’re urging the European Union to continue funding free software after noticing that the Next Generation Internet (NGI) programs were no longer mentioned in Cluster 4 of the 2025 Horizon Europe funding plans.

OW2 argues that discontinuing NGI funding would weaken Europe’s technological ecosystem, leaving many projects under-resourced and jeopardizing Europe’s position in the global digital landscape. The letter reads, in part: NGI programs have shown their strength and importance to support the European software infrastructure, as a generic funding instrument to fund digital commons and ensure their long-term sustainability. We find this transformation incomprehensible, moreover when NGI has proven efficient and economical to support free software as a whole, from the smallest to the most established initiatives. This ecosystem diversity backs the strength of European technological innovation, and maintaining the NGI initiative to provide structural support to software projects at the heart of worldwide innovation is key to enforce the sovereignty of a European infrastructure. Contrary to common perception, technical innovations often originate from European rather than North American programming communities, and are mostly initiated by small-scaled organizations.

The previous Cluster 4 allocated 27 million euros to:
– “Human centric Internet aligned with values and principles commonly shared in Europe”;
– “A flourishing internet, based on common building blocks created within NGI, that enables better control of our digital life”;
– “A structured eco-system of talented contributors driving the creation of new internet commons and the evolution of existing internet commons.”

To address these challenges, more than 500 projects received NGI funding in the first five years, backed by 18 organizations managing these European funding consortia.

How Will AI Transform the Future of Work?

An anonymous reader shared this report from the Guardian:

In March, after analysing 22,000 tasks in the UK economy, covering every type of job, a model created by the Institute for Public Policy Research predicted that 59% of tasks currently done by humans, particularly by women and young people, could be affected by AI in the next three to five years. In the worst-case scenario, this would trigger a “jobs apocalypse” where eight million people lose their jobs in the UK alone…. Darrell West, author of The Future of Work: AI, Robots and Automation, says that just as policy innovations were needed in Thomas Paine’s time to help people transition from an agrarian to an industrial economy, they are needed today, as we transition to an AI economy. “There’s a risk that AI is going to take a lot of jobs,” he says. “A basic income could help navigate that situation.”

AI’s impact will be far-reaching, he predicts, affecting blue- and white-collar jobs. “It’s not just going to be entry-level people who are affected. And so we need to think about what this means for the economy, what it means for society as a whole. What are people going to do if robots and AI take a lot of the jobs?”

Nell Watson, a futurist who focuses on AI ethics, has a more pessimistic view. She believes we are witnessing the dawn of an age of “AI companies”: corporate environments where very few — if any — humans are employed at all. Instead, at these companies, lots of different AI sub-personalities will work independently on different tasks, occasionally hiring humans for “bits and pieces of work”. These AI companies have the potential to be “enormously more efficient than human businesses”, driving almost everyone else out of business, “apart from a small selection of traditional old businesses that somehow stick in there because their traditional methods are appreciated”… As a result, she thinks it could be AI companies, not governments, that end up paying people a basic income.

AI companies, meanwhile, will have no salaries to pay. “Because there are no human beings in the loop, the profits and dividends of this company could be given to the needy. This could be a way of generating support income in a way that doesn’t need the state welfare. It’s fully compatible with capitalism. It’s just that the AI is doing it.”
