$2.4 Million Texas Home Listing Boasts Built-In 5,786 sq ft Data Center

A Zillow listing for a $2.4 million house in a Dallas suburb is grabbing attention for its 5,786-square-foot data center with immersion cooling tanks, massive server racks, and two separate power grids. Tom’s Hardware reports: With a brick exterior, cute paving, and mini-McMansion arch stylings, the building certainly looks to be a residential home for the archetypal Texas family. Prospective home-buyers will thus be disappointed by the 0 bedroom, 1 bathroom setup, which feels like a warehouse office from the first step inside, where you are met with a glass-shielded reception desk in a white-brick corridor. The “Crypto Collective” branding betrays the former life of the unit, which served admirably as a crypto mining base.

The purchase of the “upgraded turnkey Tier 2 Data Center” will include all of its cooling and power infrastructure. Three Engineered Fluids “SLICTanks,” single-phase liquid immersion cooling tanks for use with dielectric coolant, will come with pumps and a 500kW dry cooler. The tanks currently hold at least 80 mining computers visible in the photos, though the SLICTanks can be configured to fit more machines. Also visible near the cooling array is a deep row of classic server racks and a staggering amount of networking gear.
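As a rough sanity check on the cooling capacity (a back-of-envelope estimate, not a figure from the listing): 500 kW of dry-cooler capacity spread across the 80 or so visible machines works out to about 6.25 kW of heat rejection per machine, comfortably above the few kilowatts a typical immersion-cooled ASIC miner draws, which leaves headroom for denser configurations.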

The listing advertises a host of potential uses for future customers, from “AI services, cloud hosting, traditional data center, servers or even Bitcoin Mining”. Also packed into the 5,786 square feet of real estate are two separate power grids, 5 HVAC units, four levels of warehouse-style storage aisles, a lounge/office space, and a fully-paved backyard. In other good news, its future corporate residents will not have an HOA to deal with, and will be only 20 minutes outside the heart of Dallas, sitting just out of earshot of two major highways.

Read more of this story at Slashdot.

UK Imposes Mysterious Ban On Quantum Computer Exports

Longtime Slashdot reader MattSparkes shares a report from New Scientist: Quantum computing experts are baffled by the UK government’s new export restrictions on the exotic devices (source paywalled), saying they make little sense. [The UK government has set limits on the capabilities of quantum computers that can be exported, starting at 34 qubits, with the threshold rising for machines with higher error rates, and has declined to explain these limits on the grounds of national security.] The legislation applies both to existing, small quantum computers that are of no practical use and to larger computers that don’t actually exist and so cannot be exported. Instead, there are fears the limits will restrict sales and add bureaucracy to a new and growing sector. For more context, here’s an excerpt from an article published by The Telegraph in March: The technology has been added to a list of “dual use” items that could have military uses, maintained by the Export Control Joint Unit, which scrutinizes sales of sensitive goods. A national quantum computer strategy published last year described the technology as being “critically important” for defense and national security and said the UK was in a “global race” to develop it. […] The changes have been introduced as part of a broader update to export rules agreed by Western allies including the US and major European countries. Several nations with particular expertise in quantum computer technologies have added specific curbs, including France, which introduced rules at the start of this month.
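To make the shape of the reported rule concrete, here is a minimal sketch of how such a qubit/error-rate threshold check might look. Only the 34-qubit starting point comes from the reporting above; the error-rate bands and the higher limit values in the sketch are purely illustrative stand-ins for the unpublished schedule.

    #include <iostream>

    // Hypothetical illustration: the export-controlled qubit count rises as the
    // machine's error rate rises. Only the 34-qubit floor is from the report;
    // the error-rate bands below are invented for this sketch.
    int exportQubitLimit(double twoQubitGateError) {
        if (twoQubitGateError < 0.01) return 34;  // low-error machines: strictest limit
        if (twoQubitGateError < 0.03) return 40;  // illustrative intermediate band
        return 48;                                // noisier machines: higher limit
    }

    bool needsExportLicence(int qubits, double twoQubitGateError) {
        return qubits > exportQubitLimit(twoQubitGateError);
    }

    int main() {
        std::cout << std::boolalpha
                  << needsExportLicence(36, 0.005) << "\n"   // true: low-error, above 34 qubits
                  << needsExportLicence(36, 0.050) << "\n";  // false: noisier machine, higher limit
    }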

Last year, industry body Quantum UK said British companies were concerned about the prospect of further export controls, and that they could even put off US companies seeking to relocate to the UK. Quantum computer exports only previously required licenses in specific cases, such as when they were likely to lead to military use. Oxford Instruments, which makes cooling systems for quantum computers, said last year that sales in China had been hit by increasing curbs. James Lindop of law firm Eversheds Sutherland said: “Semiconductor and quantum technologies — two areas in which the UK already holds a world-leading position — are increasingly perceived to be highly strategic and critical to UK national security. This will undoubtedly create an additional compliance burden for businesses active in the development and production of the targeted technologies.”

Read more of this story at Slashdot.

Linux Foundation Announces Launch of ‘High Performance Software Foundation’

This week the nonprofit Linux Foundation announced the launch of the High Performance Software Foundation, which “aims to build, promote, and advance a portable core software stack for high performance computing” (or HPC) by “increasing adoption, lowering barriers to contribution, and supporting development efforts.”

It promises initiatives focused on “continuously built, turnkey software stacks,” as well as other initiatives including architecture support and performance regression testing. Its first open source technical projects are:

– Spack: the HPC package manager.
– Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way (a brief sketch follows this list).
– Viskores (formerly VTK-m): a toolkit of scientific visualization algorithms for accelerator architectures.
– HPCToolkit: performance measurement and analysis tools for computers ranging from desktop systems to GPU-accelerated supercomputers.
– Apptainer: formerly known as Singularity, a Linux Foundation project providing a full-featured container system optimized for HPC.
– E4S: a curated, hardened distribution of scientific software packages.
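To give a flavor of the portability these projects target, here is a minimal Kokkos sketch: a fill and a parallel reduction written once against the Kokkos API, which the build can map to serial, OpenMP, CUDA, or HIP backends. This is a generic illustration, not code from the foundation’s announcement.

    #include <Kokkos_Core.hpp>
    #include <cstdio>

    // Minimal Kokkos example: fill a device-resident array, then sum it with a
    // parallel reduction. The same source compiles for CPU or GPU backends.
    int main(int argc, char* argv[]) {
        Kokkos::initialize(argc, argv);
        {
            const int n = 1000000;
            Kokkos::View<double*> x("x", n);

            Kokkos::parallel_for("fill", n, KOKKOS_LAMBDA(const int i) {
                x(i) = 0.5 * i;
            });

            double sum = 0.0;
            Kokkos::parallel_reduce("sum", n,
                KOKKOS_LAMBDA(const int i, double& local) { local += x(i); }, sum);

            std::printf("sum = %f\n", sum);
        }
        Kokkos::finalize();
        return 0;
    }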

As use of HPC becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation will provide a neutral space for pivotal projects in the high performance computing ecosystem, enabling industry, academia, and government entities to collaborate on the scientific software.

The High Performance Software Foundation benefits from strong support across the HPC landscape, including Premier Members Amazon Web Services (AWS), Hewlett Packard Enterprise, Lawrence Livermore National Laboratory, and Sandia National Laboratories; General Members AMD, Argonne National Laboratory, Intel, Kitware, Los Alamos National Laboratory, NVIDIA, and Oak Ridge National Laboratory; and Associate Members University of Maryland, University of Oregon, and Centre for Development of Advanced Computing.
In a statement, an AMD vice president said that by joining “we are using our collective hardware and software expertise to help develop a portable, open-source software stack for high-performance computing across industry, academia, and government.” And an AWS executive said the high-performance computing community “has a long history of innovation being driven by open source projects. AWS is thrilled to join the High Performance Software Foundation to build on this work. In particular, AWS has been deeply involved in contributing upstream to Spack, and we’re looking forward to working with the HPSF to sustain and accelerate the growth of key HPC projects so everyone can benefit.”

The new foundation will “set up a technical advisory committee to manage working groups tackling a variety of HPC topics,” according to the announcement, following a governance model based on the Cloud Native Computing Foundation.

Read more of this story at Slashdot.

Intel Aurora Supercomputer Breaks Exascale Barrier

Josh Norem reports via ExtremeTech: At the recent International Supercomputing Conference (ISC 2024), Intel’s Aurora supercomputer, installed at Argonne National Laboratory, raised a few eyebrows by finally surpassing the exascale barrier. Before this, only AMD’s Frontier system had been able to achieve this level of performance. Intel also achieved what it says is the world’s best performance for AI at 10.61 “AI exaflops.” Intel reported the news on its blog, stating Aurora was now officially the fastest supercomputer for AI in the world. It shares the distinction with Argonne National Laboratory and Hewlett Packard Enterprise (HPE), which together built and house the system in its current state, which Intel says was at 87% functionality for the recent tests. In the all-important Linpack (HPL) test, the Aurora computer hit 1.012 exaflops, meaning it has almost doubled the performance on tap since its initial “partial run” in late 2023, when it hit just 585.34 petaflops. The company said then that it expected Aurora to cross the exascale barrier eventually, and now it has.

Intel says that for the ISC 2024 tests, Aurora was operating with 9,234 nodes. The company notes it ranked second overall in Linpack, meaning it’s still unable to dethrone AMD’s Frontier system, which is also an HPE supercomputer. AMD’s Frontier was the first supercomputer to break the exascale barrier, in June 2022. Frontier sits at around 1.2 exaflops in Linpack, so Intel is knocking on its door but still has a way to go before it can topple it. However, Intel says Aurora came in first in the mixed-precision Linpack benchmark, reportedly highlighting its unparalleled AI performance. Intel’s Aurora supercomputer uses the company’s latest CPU and GPU hardware, with 21,248 Sapphire Rapids Xeon CPUs and 63,744 Ponte Vecchio GPUs. Intel believes the system will be capable of crossing the 2-exaflop barrier once it is fully operational later this year.
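Those numbers are internally consistent: the chip counts imply two Xeon CPUs and six Ponte Vecchio GPUs per node (21,248 / 2 = 10,624 nodes, and 63,744 / 6 = 10,624 nodes), and the 9,234 nodes used in the ISC run are 9,234 / 10,624, or roughly 87 percent of the full machine, matching the “87% functionality” figure cited above.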

Read more of this story at Slashdot.

Russia Cobbles Together Supercomputing Platform To Wean Off Foreign Suppliers

Russia is adapting to a world in which it no longer has access to many foreign technologies by developing a new supercomputer platform that can use foreign x86 processors such as Intel’s in combination with the country’s homegrown Elbrus processors. The Register reports: The new supercomputer reference system, dubbed “RSK Tornado,” was developed on behalf of the Russian government by HPC system integrator RSC Group, according to an English translation of a Russian-language press release published March 30. RSC said it created RSK Tornado as a “unified interoperable” platform to “accelerate the pace of import substitution” for HPC systems, data processing centers and data storage systems in Russia. In other words, the HPC system architecture is meant to help Russia quickly adjust to the fact that major chip companies such as Intel, AMD and TSMC — plus several other technology vendors, like Dell and Lenovo — have suspended product shipments to the country as a result of sanctions imposed by the US and other countries in reaction to Russia’s invasion of Ukraine.

RSK Tornado supports up to 104 servers in a rack, with the idea being to support foreign x86 processors (should they become available) as well as Russia’s Elbrus processors, which debuted in 2015. The hope appears to be that Russian developers will be able to port HPC, AI and big data applications from x86 architectures to the Elbrus architecture, which, in theory, will make it easier for Russia to rely on its own supply chain and better cope with continued sanctions from abroad. The RSK Tornado system software is RSC proprietary and is currently used to orchestrate supercomputer resources at the Interdepartmental Supercomputer Center of the Russian Academy of Sciences, St Petersburg Polytechnic University and the Joint Institute for Nuclear Research. RSC claims to have also developed its own liquid-cooling system for supercomputers and data storage systems, the latter of which can also use Elbrus CPUs.
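In practice, porting from x86 to Elbrus typically starts by isolating architecture-specific code paths behind compile-time checks. A minimal, hypothetical sketch of that pattern follows; it assumes the commonly reported predefined macros (__x86_64__ for x86-64 and __e2k__ for Elbrus under MCST’s compiler) and is not taken from RSC’s software.

    #include <cstdio>

    // Hypothetical portability shim: select a code path per target architecture.
    // __e2k__ is the macro commonly reported for Elbrus/MCST toolchains; treat it
    // as an assumption here rather than a documented guarantee.
    #if defined(__x86_64__)
    static const char* target_arch() { return "x86-64"; }
    #elif defined(__e2k__)
    static const char* target_arch() { return "Elbrus (e2k)"; }
    #else
    static const char* target_arch() { return "unknown architecture"; }
    #endif

    int main() {
        std::printf("built for: %s\n", target_arch());
        return 0;
    }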

Read more of this story at Slashdot.