Intel Aurora Supercomputer Breaks Exascale Barrier

Josh Norem reports via ExtremeTech: At the recent International Supercomputing Conference (ISC 2024), Intel’s newest Aurora supercomputer, installed at Argonne National Laboratory, raised a few eyebrows by finally surpassing the exascale barrier. Before this, only AMD’s Frontier system had achieved that level of performance. Intel also achieved what it says is the world’s best performance for AI, at 10.61 “AI exaflops.” Intel reported the news on its blog, stating Aurora is now officially the fastest supercomputer for AI in the world. It shares the distinction with Argonne National Laboratory, which houses the system, and Hewlett Packard Enterprise (HPE), which built it; Intel says the machine was running at 87% functionality for the recent tests. In the all-important Linpack (HPL) test, Aurora hit 1.012 exaflops, a roughly 73% jump over the 585.34 petaflops it managed in its initial “partial run” in late 2023. The company said at the time that it expected Aurora to cross the exascale barrier eventually, and now it has.

Intel says that for the ISC 2024 tests, Aurora was operating with 9,234 nodes. The company notes it ranked second overall in Linpack, meaning it’s still unable to dethrone AMD’s Frontier system, which is also an HPE supercomputer. Frontier was the first supercomputer to break the exascale barrier, in June 2022, and it sits at around 1.2 exaflops in Linpack, so Intel is knocking on its door but still has a way to go before it can topple it. However, Intel says Aurora came in first in the mixed-precision Linpack benchmark, which it cites as evidence of its unparalleled AI performance. Aurora uses Intel’s latest CPU and GPU hardware, with 21,248 Sapphire Rapids Xeon CPUs and 63,744 Ponte Vecchio GPUs. Intel believes the system will be capable of crossing the 2-exaflop barrier once it’s fully operational later this year.

Read more of this story at Slashdot.

Biden Admin Shells Out $120 Million To Return Chip Startup To US Ownership

Brandon Vigliarolo reports via The Register: Not everything in the semiconductor industry is about shearing off every last nanometer, which is why the Biden administration is splashing out CHIPS Act funding to those pursuing less cutting edge processor production. Case in point, today’s announcement that Bloomington, Minnesota-based Polar Semiconductor could be getting up to $120 million in CHIPS funds to double production capacity over the next two years, along with a possible buyout to return the business to U.S. hands.

Polar, which manufactures semiconductors used primarily for the energy industry and electric vehicles, will use the funds to double its production capacity of sensor and power chips and upgrade its manufacturing kit, as well as adding 160 jobs to boot. Along with expanding production, the U.S. Department of Commerce said the funding would trigger additional private capital investment to “transform Polar from a majority foreign-owned in-house manufacturer to a majority U.S.-owned commercial foundry, expanding opportunities for U.S. chip designers to innovate and produce technologies domestically.” In other words – sure it’ll expand the output, but the real win is another majority U.S.-owned foundry for the White House to tout.

According to its website, Polar is currently owned by Korean conglomerate SK Group and serves as the primary fab and engineering center for Japanese firm Sanken Electric. Not exactly companies in countries with poor U.S. relations – but overseas owners, nonetheless. “This proposed investment in Polar will crowd in private capital, which will help make Polar a U.S.-based, independent foundry,” said U.S. Commerce secretary Gina Raimondo. “They will be able to expand their customer base and create a stable domestic supply of critical chips, made in America’s heartland.”

Read more of this story at Slashdot.

Reddit Grows, Seeks More AI Deals, Plans ‘Award’ Shops, and Gets Sued

Reddit reported its first results since going public in late March. Yahoo Finance reports:

Daily active users increased 37% year over year to 82.7 million. Weekly active unique users rose 40% from the prior year. Total revenue improved 48% to $243 million, nearly doubling the growth rate from the prior quarter, due to strength in advertising. The company delivered adjusted operating profits of $10 million, versus a $50.2 million loss a year ago. [Reddit CEO Steve] Huffman declined to say when the company would be profitable on a net income basis, noting it’s a focus for the management team. Other areas of focus include rolling out a new user interface this year, introducing shopping capabilities, and searching for another artificial intelligence content licensing deal like the one with Google.

Bloomberg notes that Reddit already “has signed licensing agreements worth $203 million in total, with terms ranging from two to three years. The company generated about $20 million from AI content deals last quarter, and expects to bring in more than $60 million by the end of the year.”

And elsewhere Bloomberg writes that Reddit “plans to expand its revenue streams outside of advertising into what Huffman calls the ‘user economy’ — users making money from others on the platform… ”

In the coming months Reddit plans to launch new versions of awards, which are digital gifts users can give to each other, along with other products… Reddit also plans to continue striking data licensing deals with artificial intelligence companies, expanding into international markets and evaluating potential acquisition targets in areas such as search, he said.

Meanwhile, ZDNet notes that this week a Reddit announcement “introduced a new public content policy that lays out a framework for how partners and third parties can access user-posted content on its site.”

The post explains that more and more companies are using unsavory means to access user data in bulk, including Reddit posts. Once a company gets this data, there’s no limit to what it can do with it. Reddit will continue to block “bad actors” that use unauthorized methods to get data, the company says, but it’s taking additional steps to keep users safe from the site’s partners…. Reddit still supports using its data for research: It’s creating a new subreddit — r/reddit4researchers — to support these initiatives, and partnering with OpenMined to help improve research. Private data is, however, going to stay private.

If a company wants to use Reddit data for commercial purposes, including advertising or training AI, it will have to pay. Reddit made this clear by saying, “If you’re interested in using Reddit data to power, augment, or enhance your product or service for any commercial purposes, we require a contract.” To be clear, Reddit is still selling users’ data — it’s just making sure that unscrupulous actors have a tougher time accessing that data for free and researchers have an easier time finding what they need.

And finally, there’s some court action, according to the Register. Reddit “was sued by an unhappy advertiser who claims that internet giga-forum sold ads but provided no way to verify that real people were responsible for clicking on them.”

The complaint [PDF] was filed this week in a U.S. federal court in northern California on behalf of LevelFields, a Virginia-based investment research platform that relies on AI. It says the biz booked pay-per-click ads on the discussion site starting September 2022… That arrangement called for Reddit to use reasonable means to ensure that LevelFields’ ads were delivered to and clicked on by actual people rather than bots and the like. But according to the complaint, Reddit broke that contract…

LevelFields argues that Reddit is in a particularly good position to track click fraud because it’s serving ads on its own site, as opposed to third-party properties where it may have less visibility into network traffic… Nonetheless, LevelFields’ effort to obtain IP address data to verify the ads it was billed for went unfulfilled. The social media site “provided click logs without IP addresses,” the complaint says. “Reddit represented that it was not able to provide IP addresses.”

“The plaintiffs aspire to have their claim certified as a class action,” the article adds — along with an interesting statistic: “According to Juniper Research, 22 percent of ad spending last year was lost to click fraud, amounting to $84 billion.”

Read more of this story at Slashdot.

Linux Kernel 6.9 Officially Released

“6.9 is now out,” Linus Torvalds posted on the Linux kernel mailing list, “and last week has looked quite stable (and the whole release has felt pretty normal).”

Phoronix writes that Linux 6.9 “has a number of exciting features and improvements for those habitually updating to the newest version.” And Slashdot reader prisoninmate shared this report from 9to5Linux:

Highlights of Linux kernel 6.9 include Rust support on AArch64 (ARM64) architectures, support for the Intel FRED (Flexible Return and Event Delivery) mechanism for improved low-level event delivery, support for AMD SNP (Secure Nested Paging) guests, and a new dm-vdo (virtual data optimizer) target in device mapper for inline deduplication, compression, zero-block elimination, and thin provisioning.

Linux kernel 6.9 also supports the Named Address Spaces feature in GCC (GNU Compiler Collection) that allows the compiler to better optimize per-CPU data access, adds initial support for FUSE passthrough to allow the kernel to serve files from a user-space FUSE server directly, adds support for the Energy Model to be updated dynamically at run time, and introduces a new LPA2 mode for ARM 64-bit processors…

Linux kernel 6.9 will be a short-lived branch supported for only a couple of months. It will be succeeded by Linux kernel 6.10, whose merge window has now been officially opened by Linus Torvalds. Linux kernel 6.10 is expected to be released in mid or late July 2024.

“Rust language has been updated to version 1.76.0 in Linux 6.9,” according to the article. And Linus Torvalds shared one more detail on the Linux kernel mailing list.

“I now have a more powerful arm64 machine (thanks to Ampere), so the last week I’ve been doing almost as many arm64 builds as I have x86-64, and that should obviously continue during the upcoming merge window too.”
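One item from the 9to5Linux excerpt above, the use of GCC’s named address spaces for per-CPU data, is easier to picture with a short sketch. The fragment below is illustrative only and is not kernel code: it assumes an x86-64 build with a GCC that defines __SEG_GS, and it shows how a __seg_gs-qualified pointer lets the compiler emit a single %gs-relative load instead of adding the per-CPU base offset by hand.

    /* Minimal, hypothetical sketch of GCC's x86 named address spaces.
     * Compile on x86-64 with: gcc -O2 -c percpu_sketch.c
     * The __seg_gs qualifier tells GCC the access is relative to the %gs
     * segment base (where the kernel keeps its per-CPU area), so the
     * load below becomes one %gs-prefixed mov instruction. */
    #ifdef __SEG_GS
    static inline unsigned long read_percpu_word(unsigned long offset)
    {
            /* Compiles to roughly: movq %gs:(%rdi), %rax */
            return *(volatile unsigned long __seg_gs *)offset;
    }

    unsigned long read_counter_slot(void)
    {
            /* 0x40 is a made-up per-CPU offset used only for illustration;
             * the real kernel derives offsets from its per-CPU layout. */
            return read_percpu_word(0x40);
    }
    #endif

Roughly speaking, the kernel hides accesses like this behind its per-CPU accessor macros and keeps the older inline-assembly path as a fallback for compilers without the feature, which is why the article describes it as a compiler optimization rather than new functionality.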

Read more of this story at Slashdot.

Australia Criticized For Ramping Up Gas Extraction Through ‘2050 and Beyond’

Slashdot reader sonlas shared this report from the BBC:
Australia has announced it will ramp up its extraction and use of gas until “2050 and beyond”, despite global calls to phase out fossil fuels. Prime Minister Anthony Albanese’s government says the move is needed to shore up domestic energy supply while supporting a transition to net zero… Australia — one of the world’s largest exporters of liquefied natural gas — has also said the policy is based on “its commitment to being a reliable trading partner”. Released on Thursday, the strategy outlines the government’s plans to work with industry and state leaders to increase both the production and exploration of the fossil fuel. The government will also continue to support the expansion of the country’s existing gas projects, the largest of which are run by Chevron and Woodside Energy Group in Western Australia…

The policy has sparked fierce backlash from environmental groups and critics, who say it puts the interests of powerful fossil fuel companies before people. “Fossil gas is not a transition fuel. It’s one of the main contributors to global warming and has been the largest source of increases of CO2 [emissions] over the last decade,” Prof Bill Hare, chief executive of Climate Analytics and author of numerous UN climate change reports, told the BBC… Successive Australian governments have touted gas as a key “bridging fuel”, arguing that turning it off too soon could have “significant adverse impacts” on Australia’s economy and energy needs. But Prof Hare and other scientists have warned that building a net zero policy around gas will “contribute to locking in 2.7-3°C global warming, which will have catastrophic consequences”.

Read more of this story at Slashdot.

RHEL (and Rocky and Alma Linux) 9.4 Released – Plus AI Offerings

Red Hat Enterprise Linux 9.4 has been released. But also released is Rocky Linux 9.4, reports 9to5Linux:

Rocky Linux 9.4 also adds openSUSE’s KIWI next-generation appliance builder as a new image build workflow and process for building images that are feature complete with the old images… Under the hood, Rocky Linux 9.4 includes the same updated components from the upstream Red Hat Enterprise Linux 9.4.

This week also saw the release of AlmaLinux 9.4 stable (the “forever-free enterprise Linux distribution… binary compatible with RHEL”). The Register points out that while AlmaLinux still supports some aging hardware that the official RHEL 9.4 drops, “what’s new is largely the same in them both.”

And last week also saw the launch of the AlmaLinux High-Performance Computing and AI Special Interest Group (SIG). HPCWire reports:

“AlmaLinux’s status as a community-driven enterprise Linux holds incredible promise for the future of HPC and AI,” said Hayden Barnes, SIG leader and Senior Open Source Community Manager for AI Software at HPE. “Its transparency and stability empowers researchers, developers and organizations to collaborate, customize and optimize their computing environments, fostering a culture of innovation and accelerating breakthroughs in scientific research and cutting-edge AI/ML.”

And this week, InfoWorld reported:

Red Hat has launched Red Hat Enterprise Linux AI (RHEL AI), described as a foundation model platform that allows users to more seamlessly develop and deploy generative AI models. Announced May 7 and available now as a developer preview, RHEL AI includes the Granite family of open-source large language models (LLMs) from IBM, InstructLab model alignment tools based on the LAB (Large-Scale Alignment for Chatbots) methodology, and a community-driven approach to model development through the InstructLab project, Red Hat said.

Read more of this story at Slashdot.

Google Employees Question Execs Over ‘Decline in Morale’ After Blowout Earnings

“Google’s business is growing at its fastest rate in two years,” reports CNBC, “and a blowout earnings report in April sparked the biggest rally in Alphabet shares since 2015, pushing the company’s market cap past $2 trillion.”

“But at an all-hands meeting last week with CEO Sundar Pichai and CFO Ruth Porat, employees were more focused on why that performance isn’t translating into higher pay, and how long the company’s cost-cutting measures are going to be in place.”

“We’ve noticed a significant decline in morale, increased distrust and a disconnect between leadership and the workforce,” a comment posted on an internal forum ahead of the meeting read. “How does leadership plan to address these concerns and regain the trust, morale and cohesion that have been foundational to our company’s success?”

Google is using artificial intelligence to summarize employee comments and questions for the forum.
Alphabet’s top leadership has been on the defensive for the past few years, as vocal staffers have railed about post-pandemic return-to-office mandates, the company’s cloud contracts with the military, fewer perks and an extended stretch of layoffs — totaling more than 12,000 last year — along with other cost cuts that began when the economy turned in 2022. Employees have also complained about a lack of trust and demands that they work on tighter deadlines with fewer resources and diminished opportunities for internal advancement.

The internal strife continues despite Alphabet’s better-than-expected first-quarter earnings report, in which the company also announced its first dividend as well as a $70 billion buyback. “Despite the company’s stellar performance and record earnings, many Googlers have not received meaningful compensation increases,” a top-rated employee question read. “When will employee compensation fairly reflect the company’s success and is there a conscious decision to keep wages lower due to a cooling employment market?”

Read more of this story at Slashdot.

Apple Will Revamp Siri To Catch Up To Its Chatbot Competitors

An anonymous reader quotes a report from the New York Times: Apple’s top software executives decided early last year that Siri, the company’s virtual assistant, needed a brain transplant. The decision came after the executives Craig Federighi and John Giannandrea spent weeks testing OpenAI’s new chatbot, ChatGPT. The product’s use of generative artificial intelligence, which can write poetry, create computer code and answer complex questions, made Siri look antiquated, said two people familiar with the company’s work, who didn’t have permission to speak publicly. Introduced in 2011 as the original virtual assistant in every iPhone, Siri had been limited for years to individual requests and had never been able to follow a conversation. It often misunderstood questions. ChatGPT, on the other hand, knew that if someone asked for the weather in San Francisco and then said, “What about New York?” that user wanted another forecast.

The realization that new technology had leapfrogged Siri set in motion the tech giant’s most significant reorganization in more than a decade. Determined to catch up in the tech industry’s A.I. race, Apple has made generative A.I. a tent pole project — the company’s special, internal label that it uses to organize employees around once-in-a-decade initiatives. Apple is expected to show off its A.I. work at its annual developers conference on June 10 when it releases an improved Siri that is more conversational and versatile, according to three people familiar with the company’s work, who didn’t have permission to speak publicly. Siri’s underlying technology will include a new generative A.I. system that will allow it to chat rather than respond to questions one at a time. The update to Siri is at the forefront of a broader effort to embrace generative A.I. across Apple’s business. The company is also increasing the memory in this year’s iPhones to support its new Siri capabilities. And it has discussed licensing complementary A.I. models that power chatbots from several companies, including Google, Cohere and OpenAI. Further reading: Apple Might Bring AI Transcription To Voice Memos and Notes

Read more of this story at Slashdot.

G5 Severe Geomagnetic Storm Watch Issued For First Time Since 2003

Longtime Slashdot reader davidwr shares a report from the Space Weather Prediction Center (SWPC): On Thursday, May 9, 2024, the NOAA Space Weather Prediction Center issued a Severe (G4) Geomagnetic Storm Watch. At least five Earth-directed coronal mass ejections (CMEs) were observed and expected to arrive as early as midday Friday, May 10, 2024, and persist through Sunday, May 12, 2024. Several strong flares have been observed over the past few days and were associated with a large and magnetically complex sunspot cluster (NOAA region 3664), which is 16 times the diameter of Earth. [The agency notes this is the first time it’s issued a G4 watch since January 2005.] “Geomagnetic storms can impact infrastructure in near-Earth orbit and on Earth’s surface, potentially disrupting communications, the electric power grid, navigation, radio and satellite operations,” NOAA said. “[The Space Weather Prediction Center] has notified the operators of these systems so they can take protective action.” The agency said it will continue to monitor the ongoing storm and “provide additional warnings as necessary.”

A visual byproduct of the storm will be “spectacular displays of aurora,” also known as the Northern Lights, that could be seen for much of the northern half of the country “as far south as Alabama to northern California,” said the NOAA. “Northern Montana, Minnesota, Wisconsin and the majority of North Dakota appear to have the best chances to see it,” reports Axios, citing the SWPC’s aurora viewline. “Forecast models Friday showed the activity will likely be the strongest from Friday night to Saturday morning Eastern time.”

UPDATE 6:54 P.M. EDT: G5 conditions have been observed — the first time since 2003, says Broadcast Meteorologist James Spann.
This is a developing story. More information is available at spaceweather.gov, Google News, and the NOAA.
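For context on the G-scale referenced throughout this story: NOAA’s geomagnetic storm levels correspond one-to-one with the planetary Kp index, running from G1 at Kp 5 up to G5 at Kp 9. That mapping comes from the standard NOAA Space Weather Scale rather than from the excerpts above; a tiny illustrative sketch follows.

    /* Illustrative mapping from the planetary Kp index to NOAA's G-scale.
     * Assumes the standard NOAA Space Weather Scale for geomagnetic storms
     * (G1 = Kp 5 ... G5 = Kp 9); this is not an official SWPC tool. */
    #include <stdio.h>

    static const char *g_scale(int kp)
    {
            switch (kp) {
            case 5:  return "G1 (minor)";
            case 6:  return "G2 (moderate)";
            case 7:  return "G3 (strong)";
            case 8:  return "G4 (severe)";
            case 9:  return "G5 (extreme)";
            default: return kp < 5 ? "below storm level" : "out of range";
            }
    }

    int main(void)
    {
            printf("Kp 8 -> %s\n", g_scale(8));  /* the level of the May 9 watch */
            printf("Kp 9 -> %s\n", g_scale(9));  /* the level observed May 10 */
            return 0;
    }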

Read more of this story at Slashdot.