Russia Likely Launched Counter Space Weapon Into Low Earth Orbit Last Week, Pentagon Says

The United States has assessed that Russia launched what is likely a counter space weapon last week that’s now in the same orbit as a U.S. government satellite, Pentagon spokesman Maj. Gen. Pat Ryder confirmed Tuesday. From a report: “What I’m tracking here is on May 16, as you highlighted, Russia launched a satellite into low Earth orbit that we assess is likely a counter space weapon presumably capable of attacking other satellites in low Earth orbit,” Ryder said when questioned by ABC News about the information, which was made public earlier Tuesday by Robert Wood, deputy U.S. ambassador to the United Nations.

“Russia deployed this new counter space weapon into the same orbit as a U.S. government satellite,” Ryder continued. “And so assessments further indicate characteristics resembling previously deployed counter space payloads from 2019 and 2022.” Ryder added: “Obviously, that’s something that we’ll continue to monitor. Certainly, we would say that we have a responsibility to be ready to protect and defend the space domain and ensure continuous and uninterrupted support to the joint and combined force. And we’ll continue to balance the need to protect our interests in space with our desire to preserve a stable and sustainable space environment.” When asked if the Russian counter space weapon posed a threat to the U.S. satellite, Ryder responded: “Well, it’s a counter space weapon in the same orbit as a U.S. government satellite.”

Read more of this story at Slashdot.

‘Pay Researchers To Spot Errors in Published Papers’

Borrowing the idea of “bug bounties” from the technology industry could provide a systematic way to detect and correct the errors that litter the scientific literature. Malte Elson, writing at Nature: Just as many industries devote hefty funding to incentivizing people to find and report bugs and glitches, so the science community should reward the detection and correction of errors in the scientific literature. In science, too, the costs of undetected errors are staggering. That’s why I have joined with meta-scientist Ian Hussey at the University of Bern and psychologist Ruben Arslan at Leipzig University in Germany to pilot a bug-bounty programme for science, funded by the University of Bern. Our project, Estimating the Reliability and Robustness of Research (ERROR), pays specialists to check highly cited published papers, starting with the social and behavioural sciences (see go.nature.com/4bmlvkj). Our reviewers are paid a base rate of up to 1,000 Swiss francs (around US$1,100) for each paper they check, and a bonus for any errors they find. The bigger the error, the greater the reward — up to a maximum of 2,500 francs.

Authors who let us scrutinize their papers are compensated, too: 250 francs to cover the work needed to prepare files or answer reviewer queries, and a bonus of 250 francs if no errors (or only minor ones) are found in their work. ERROR launched in February and will run for at least four years. So far, we have sent out almost 60 invitations, and 13 sets of authors have agreed to have their papers assessed. One review has been completed, revealing minor errors. I hope that the project will demonstrate the value of systematic processes to detect errors in published research. I am convinced that such systems are needed, because current checks are insufficient. Unpaid peer reviewers are overburdened, and have little incentive to painstakingly examine survey responses, comb through lists of DNA sequences or cell lines, or go through computer code line by line. Mistakes frequently slip through. And researchers have little to gain personally from sifting through published papers looking for errors. There is no financial compensation for highlighting errors, and doing so can see people marked out as troublemakers.
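The payment scheme described above can be sketched as a small calculator. Only the base rate, the 2,500-franc bonus cap, and the author fees come from the article; the severity tiers and their weights are hypothetical illustrations, not ERROR’s published schedule.

```python
# Illustrative sketch of the ERROR project's payout scheme. The severity
# tiers and weights are assumptions for illustration; the fixed amounts
# are the figures quoted in the article.

BASE_REVIEWER_FEE = 1_000   # CHF, maximum base rate per paper checked
MAX_ERROR_BONUS = 2_500     # CHF, cap on the error-finding bonus
AUTHOR_PREP_FEE = 250       # CHF, for preparing files / answering queries
AUTHOR_CLEAN_BONUS = 250    # CHF, if no (or only minor) errors are found

# Hypothetical severity weights: "the bigger the error, the greater the reward."
SEVERITY_WEIGHTS = {"minor": 0.1, "moderate": 0.4, "major": 1.0}

def reviewer_payout(severities):
    """Base fee plus a severity-scaled bonus, capped at MAX_ERROR_BONUS."""
    bonus = sum(SEVERITY_WEIGHTS[s] * MAX_ERROR_BONUS for s in severities)
    return BASE_REVIEWER_FEE + min(bonus, MAX_ERROR_BONUS)

def author_payout(severities):
    """Prep fee, plus the bonus when nothing worse than 'minor' turns up."""
    clean = all(s == "minor" for s in severities)
    return AUTHOR_PREP_FEE + (AUTHOR_CLEAN_BONUS if clean else 0)

print(reviewer_payout([]))                  # base fee only, no errors found
print(reviewer_payout(["major", "minor"]))  # bonus hits the 2,500 CHF cap
print(author_payout(["minor"]))             # clean enough for the author bonus
```

Under this toy weighting, a reviewer who finds one major and one minor error earns the full capped bonus on top of the base fee.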

Read more of this story at Slashdot.

DOJ Makes Its First Known Arrest For AI-Generated CSAM

In what’s believed to be the first case of its kind, the U.S. Department of Justice arrested a Wisconsin man last week for generating and distributing AI-generated child sexual abuse material (CSAM). Even if no children were used to create the material, the DOJ “looks to establish a judicial precedent that exploitative materials are still illegal,” reports Engadget. From the report: The DOJ says 42-year-old software engineer Steven Anderegg of Holmen, WI, used a fork of the open-source AI image generator Stable Diffusion to make the images, which he then used to try to lure an underage boy into sexual situations. The latter will likely play a central role in the eventual trial for the four counts of “producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16.” The government says Anderegg’s images showed “nude or partially clothed minors lasciviously displaying or touching their genitals or engaging in sexual intercourse with men.” The DOJ claims he used specific prompts, including negative prompts (extra guidance for the AI model, telling it what not to produce) to spur the generator into making the CSAM.

Cloud-based image generators like Midjourney and DALL-E 3 have safeguards against this type of activity, but Ars Technica reports that Anderegg allegedly used Stable Diffusion 1.5, a variant with fewer boundaries. Stability AI told the publication that fork was produced by Runway ML. According to the DOJ, Anderegg communicated online with the 15-year-old boy, describing how he used the AI model to create the images. The agency says the accused sent the teen direct messages on Instagram, including several AI images of “minors lasciviously displaying their genitals.” To its credit, Instagram reported the images to the National Center for Missing and Exploited Children (NCMEC), which alerted law enforcement. Anderegg could face five to 70 years in prison if convicted on all four counts. He’s currently in federal custody before a hearing scheduled for May 22.

Read more of this story at Slashdot.

EU Sets Benchmark For Rest of the World With Landmark AI Laws

An anonymous reader quotes a report from Reuters: Europe’s landmark rules on artificial intelligence will enter into force next month after EU countries endorsed on Tuesday a political deal reached in December, setting a potential global benchmark for a technology used in business and everyday life. The European Union’s AI Act is more comprehensive than the United States’ light-touch voluntary compliance approach while China’s approach aims to maintain social stability and state control. The vote by EU countries came two months after EU lawmakers backed the AI legislation drafted by the European Commission in 2021 after making a number of key changes. […]

The AI Act imposes strict transparency obligations on high-risk AI systems, while such requirements for general-purpose AI models will be lighter. It restricts governments’ use of real-time biometric surveillance in public spaces to cases of certain crimes, prevention of terrorist attacks and searches for people suspected of the most serious crimes. The new legislation will have an impact beyond the 27-country bloc, said Patrick van Eecke at law firm Cooley. “The Act will have global reach. Companies outside the EU who use EU customer data in their AI platforms will need to comply. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR,” he said, referring to EU privacy rules.

While the new legislation will apply in 2026, bans on the use of artificial intelligence in social scoring, predictive policing and untargeted scraping of facial images from the internet or CCTV footage will kick in six months after the new regulation enters into force. Obligations for general purpose AI models will apply after 12 months and rules for AI systems embedded into regulated products in 36 months. Fines for violations range from 7.5 million euros ($8.2 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the type of violation.
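As a rough illustration of the fine arithmetic quoted above: under the AI Act the applicable cap is generally the higher of the fixed amount and the turnover percentage. This sketch makes that assumption explicit; it is an illustration only, not legal analysis.

```python
# Toy calculation of the AI Act fine caps quoted above, assuming the
# applicable cap is the *higher* of the fixed amount and the percentage
# of global annual turnover.

def max_fine(global_turnover_eur, fixed_cap_eur, turnover_pct):
    return max(fixed_cap_eur, global_turnover_eur * turnover_pct / 100)

# Least severe tier: 7.5 million euros or 1.5% of turnover.
print(max_fine(2_000_000_000, 7_500_000, 1.5))

# Most severe tier: 35 million euros or 7% of global turnover.
print(max_fine(2_000_000_000, 35_000_000, 7.0))
```

For a company with 2 billion euros in turnover, the percentage dominates in both tiers (30 million and 140 million euros respectively); for a small firm, the fixed amounts become the binding cap.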

Read more of this story at Slashdot.

Vitalik Buterin Addresses Threats To Ethereum’s Decentralization In New Blog Post

In a new blog post, Ethereum co-founder Vitalik Buterin has shared his thoughts on three issues core to Ethereum’s decentralization: MEV, liquid staking, and the hardware requirements of nodes. The Block reports: In his post, published on May 17, Buterin first addresses the issue of MEV, or the financial gain that sophisticated node operators can capture by reordering the transactions within a block. Buterin characterizes the two approaches to MEV as “minimization” (reducing the amount of MEV through smart protocol design, as exemplified by CowSwap) and “quarantining” (accepting MEV but containing its centralizing effects by separating it from validators through in-protocol techniques). While MEV quarantining seems like an alluring option, Buterin notes that the prospect comes with some centralization risks. “If builders have the power to exclude transactions from a block entirely, there are attacks that can quite easily arise,” Buterin noted. However, Buterin championed the builders working on MEV quarantining through concepts like transaction inclusion lists, which “take away the builder’s ability to push transactions out of the block entirely.” “I think ideas in this direction – really pushing the quarantine box to be as small as possible – are really interesting, and I’m in favor of going in that direction,” Buterin concluded.
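The inclusion-list idea Buterin favors can be sketched in miniature: the builder keeps full freedom to order (and profit from) transactions, but loses the ability to drop any transaction the proposer has listed. This is a toy model for intuition only, not the actual Ethereum protocol design.

```python
# Toy model of a transaction inclusion list: the builder may reorder
# transactions freely (capturing MEV), but a block that omits any
# transaction on the proposer's inclusion list is rejected.

def build_block(mempool, inclusion_list, builder_order):
    """Return the block, or raise if a listed transaction was censored."""
    block = [tx for tx in builder_order if tx in mempool]
    missing = [tx for tx in inclusion_list if tx not in block]
    if missing:
        raise ValueError(f"builder censored listed transactions: {missing}")
    return block

mempool = {"tx_a", "tx_b", "tx_c"}
inclusion_list = ["tx_b"]

# Builder reorders freely but keeps tx_b: the block is valid.
print(build_block(mempool, inclusion_list, ["tx_c", "tx_b"]))

# Builder tries to push tx_b out of the block entirely: rejected.
try:
    build_block(mempool, inclusion_list, ["tx_a", "tx_c"])
except ValueError as err:
    print(err)
```

The point of the quoted phrase — “take away the builder’s ability to push transactions out of the block entirely” — is exactly the second case: ordering power stays with the builder, censorship power does not.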

Buterin also addressed the relatively low number of solo Ethereum stakers, as most stakers choose to stake with a staking provider, either a centralized offering like Coinbase or a decentralized offering like Lido or RocketPool, given the complexity, hardware requirements, and 32 ETH minimum needed to operate an Ethereum node solo. While Buterin acknowledges the progress being made to reduce the cost and complexity of running a solo node, he also noted “once again there is more that we could do,” perhaps through reducing the time to withdraw staked ether or reducing the 32 ETH minimum requirement to become a solo staker. “Incorrect answers could lead Ethereum down a path of centralization and ‘re-creating the traditional financial system with extra steps’; correct answers could create a shining example of a successful ecosystem with a wide and diverse set of solo stakers and highly decentralized staking pools,” Buterin wrote. […]

Buterin finished his post by imploring the Ethereum ecosystem to tackle the hard questions rather than shy away from them. “…We should have deep respect for the properties that make Ethereum unique, and continue to work to maintain and improve on those properties as Ethereum scales,” Buterin wrote. Buterin added today, in a post on X, that he was pleased to see civil debate among community members. “I’m really proud that ethereum does not have any culture of trying to prevent people from speaking their minds, even when they have very negative feelings toward major things in the protocol or ecosystem. Some wave the ideal of ‘open discourse’ as a flag, some take it seriously,” Buterin wrote.

Read more of this story at Slashdot.

HP Resurrects ’90s OmniBook Branding, Kills Spectre and Dragonfly

HP announced today that it will resurrect the “Omni” branding it first coined for its business-oriented laptops introduced in 1993. The vintage branding will now be used for the company’s new consumer-facing laptops, with HP retiring the Spectre and Dragonfly brands in the process. Furthermore, computers under consumer PC series names like Pavilion will also no longer be released. “Instead, every consumer computer from HP will be called either an OmniBook for laptops, an OmniDesk for desktops, or an OmniStudio for AIOs,” reports Ars Technica. From the report: The computers will also have a modifier, ranging from 3 up to 5, 7, X, or Ultra to denote computers that are entry-level all the way up to advanced. For instance, an HP OmniBook Ultra would represent HP’s highest-grade consumer laptop. “For example, an HP OmniBook 3 will appeal to customers who prioritize entertainment and personal use, while the OmniBook X will be designed for those with higher creative and technical demands,” Stacy Wolff, SVP of design and sustainability at HP, said via a press announcement today. […] So far, HP has announced one new Omni computer, the OmniBook X. It has a 12-core Snapdragon X Elite X1E-78-100, 16GB or 32GB of LPDDR5x-8448 memory, up to 2TB of storage, and a 14-inch, 2240×1400 IPS display. HP is pointing to the Latin translation of omni, meaning “all” (or everything), as the rationale behind the naming update. The new name should give shoppers confidence that the computers will provide all the things that they need.

HP is also getting rid of some of its commercial series names, like Pro. From now on, new, lower-end commercial laptops will be ProBooks. There will also be ProDesktop desktops and ProStudio AIOs. These computers will have either a 2 modifier for entry-level designs or a 4 modifier for ones with a little more power. For example, an HP ProDesk 2 is less powerful than an HP ProDesk 4. Anything more powerful will be considered either an EliteBook (laptops), EliteDesk (desktops), or EliteStudio (AIOs). For the Elite computers, the modifiers go from 6 to 8, X, and then Ultra. A Dragonfly laptop today would fall into the Ultra category. HP did less overhauling of its commercial lineup because it “recognized a need to preserve the brand equity and familiarity with our current sub-brands,” Wolff said, adding that HP “acknowledged the creation of additional product names like Dragonfly made those products stand out, rather than be seen as part of a holistic portfolio.” […]

As you might now expect of any tech rebranding, marketing push, or product release these days, HP is also announcing a new emblem that will appear on its computers, as well as other products or services that substantially incorporate AI. The two laptops announced today carry the logo. According to Wolff, on computers, the logo means that the systems have an integrated NPU “at 40+ trillions of operations per second.” They also come with a chatbot based on GPT-4, an HP spokesperson told me.

Read more of this story at Slashdot.

Linux Foundation Announces Launch of ‘High Performance Software Foundation’

This week the nonprofit Linux Foundation announced the launch of the High Performance Software Foundation, which “aims to build, promote, and advance a portable core software stack for high performance computing” (or HPC) by “increasing adoption, lowering barriers to contribution, and supporting development efforts.”

It promises initiatives focused on “continuously built, turnkey software stacks,” as well as other initiatives including architecture support and performance regression testing. Its first open source technical projects are:

– Spack: the HPC package manager.
– Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.
– Viskores (formerly VTK-m): a toolkit of scientific visualization algorithms for accelerator architectures.
– HPCToolkit: performance measurement and analysis tools for computers ranging from desktop systems to GPU-accelerated supercomputers.
– Apptainer: Formerly known as Singularity, Apptainer is a Linux Foundation project providing a high performance, full featured HPC and computing optimized container subsystem.
– E4S: a curated, hardened distribution of scientific software packages.

As use of HPC becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation will provide a neutral space for pivotal projects in the high performance computing ecosystem, enabling industry, academia, and government entities to collaborate on the scientific software.

The High Performance Software Foundation benefits from strong support across the HPC landscape, including Premier Members Amazon Web Services (AWS), Hewlett Packard Enterprise, Lawrence Livermore National Laboratory, and Sandia National Laboratories; General Members AMD, Argonne National Laboratory, Intel, Kitware, Los Alamos National Laboratory, NVIDIA, and Oak Ridge National Laboratory; and Associate Members University of Maryland, University of Oregon, and Centre for Development of Advanced Computing.

In a statement, an AMD vice president said that by joining “we are using our collective hardware and software expertise to help develop a portable, open-source software stack for high-performance computing across industry, academia, and government.” And an AWS executive said the high-performance computing community “has a long history of innovation being driven by open source projects. AWS is thrilled to join the High Performance Software Foundation to build on this work. In particular, AWS has been deeply involved in contributing upstream to Spack, and we’re looking forward to working with the HPSF to sustain and accelerate the growth of key HPC projects so everyone can benefit.”

The new foundation will “set up a technical advisory committee to manage working groups tackling a variety of HPC topics,” according to the announcement, following a governance model based on the Cloud Native Computing Foundation.

Read more of this story at Slashdot.