Google Targets Filmmakers With Veo, Its New Generative AI Video Model

At its I/O developer conference today, Google announced Veo, its latest generative AI video model, which “can generate ‘high-quality’ 1080p resolution videos over a minute in length in a wide variety of visual and cinematic styles,” reports The Verge. From the report: Veo has “an advanced understanding of natural language,” according to Google’s press release, enabling the model to understand cinematic terms like “timelapse” or “aerial shots of a landscape.” Users can direct their desired output using text, image, or video-based prompts, and Google says the resulting videos are “more consistent and coherent,” depicting more realistic movement for people, animals, and objects throughout shots. Google DeepMind CEO Demis Hassabis said in a press preview on Monday that video results can be refined with follow-up prompts and that Google is exploring additional features to enable Veo to produce storyboards and longer scenes.

As is the case with many of these AI model previews, most folks hoping to try Veo out themselves will likely have to wait a while. Google says it’s inviting select filmmakers and creators to experiment with the model to determine how it can best support creatives and will build on these collaborations to ensure “creators have a voice” in how Google’s AI technologies are developed. Some Veo features will also be made available to “select creators in the coming weeks” in a private preview inside VideoFX — you can sign up for the waitlist here for an early chance to try it out. Otherwise, Google is also planning to add some of its capabilities to YouTube Shorts “in the future.” Along with its new AI models and tools, Google said it’s expanding its AI content watermarking and detection technology. The company’s new upgraded SynthID watermark imprinting system “can now mark video that was digitally generated, as well as AI-generated text,” reports The Verge in a separate report.

Read more of this story at Slashdot.

Intel Aurora Supercomputer Breaks Exascale Barrier

Josh Norem reports via ExtremeTech: At ISC 2024, the recent international supercomputing conference, Intel’s newest Aurora supercomputer, installed at Argonne National Laboratory, raised a few eyebrows by finally surpassing the exascale barrier. Before this, only AMD’s Frontier system had been able to achieve this level of performance. Intel also achieved what it says is the world’s best performance for AI at 10.61 “AI exaflops.” Intel reported the news on its blog, stating Aurora was now officially the fastest supercomputer for AI in the world. It shares the distinction with Argonne National Laboratory, which houses the system, and Hewlett Packard Enterprise (HPE), which built it; Intel says the system was at 87% functionality for the recent tests. In the all-important Linpack (HPL) test, the Aurora computer hit 1.012 exaflops, meaning it has almost doubled the performance on tap since its initial “partial run” in late 2023, when it hit just 585.34 petaflops. The company then said it expected to cross the exascale barrier with Aurora eventually, and now it has.

Intel says for the ISC 2024 tests, Aurora was operating with 9,234 nodes. The company notes it ranked second overall in Linpack, meaning it’s still unable to dethrone AMD’s Frontier system, which is also an HPE supercomputer. AMD’s Frontier was the first supercomputer to break the exascale barrier in June 2022. Frontier sits at around 1.2 exaflops in Linpack, so Intel is knocking on its door but still has a way to go before it can topple it. However, Intel says Aurora came in first in the Linpack-mixed benchmark, reportedly highlighting its unparalleled AI performance. Intel’s Aurora supercomputer uses the company’s latest CPU and GPU hardware, with 21,248 Sapphire Rapids Xeon CPUs and 63,744 Ponte Vecchio GPUs. Once it’s fully operational later this year, Intel believes the system will be capable of crossing the 2-exaflop barrier.
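The quoted figures hang together on a quick back-of-envelope check. This sketch uses only numbers from the reports above, plus one assumption not stated in the article: that each Aurora node pairs two Xeon CPUs with six Ponte Vecchio GPUs, per Intel's published node design.

```python
# Back-of-envelope checks on the Aurora figures quoted above.
# Assumption (not from the article): 2 CPUs and 6 GPUs per node,
# per Intel's published Aurora node design.

hpl_now_pf = 1012.0    # 1.012 exaflops, expressed in petaflops
hpl_2023_pf = 585.34   # the late-2023 "partial run" result
speedup = hpl_now_pf / hpl_2023_pf
print(f"speedup since late 2023: {speedup:.2f}x")   # ~1.73x

total_nodes = 21248 // 2          # 21,248 CPUs / 2 per node = 10,624 nodes
fraction_used = 9234 / total_nodes
print(f"share of machine used: {fraction_used:.0%}")  # ~87%, matching Intel

frontier_ef = 1.2                 # Frontier's approximate Linpack score
print(f"Frontier's lead: {frontier_ef / 1.012:.2f}x")  # ~1.19x
```

The 9,234-node run working out to roughly 87% of the implied 10,624-node machine is consistent with Intel's "87% functionality" claim.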

Read more of this story at Slashdot.

Biden Admin Shells Out $120 Million To Return Chip Startup To US Ownership

Brandon Vigliarolo reports via The Register: Not everything in the semiconductor industry is about shearing off every last nanometer, which is why the Biden administration is splashing out CHIPS Act funding to those pursuing less cutting-edge processor production. Case in point, today’s announcement that Bloomington, Minnesota-based Polar Semiconductor could be getting up to $120 million in CHIPS funds to double production capacity over the next two years, along with a possible buyout to return the business to U.S. hands.

Polar, which manufactures semiconductors used primarily for the energy industry and electric vehicles, will use the funds to double its production capacity of sensor and power chips and upgrade its manufacturing kit, as well as adding 160 jobs to boot. Along with expanding production, the U.S. Department of Commerce said the funding would trigger additional private capital investment to “transform Polar from a majority foreign-owned in-house manufacturer to a majority U.S.-owned commercial foundry, expanding opportunities for U.S. chip designers to innovate and produce technologies domestically.” In other words – sure it’ll expand the output, but the real win is another majority U.S.-owned foundry for the White House to tout.

According to its website, Polar is currently owned by Korean conglomerate SK Group and serves as the primary fab and engineering center for Japanese firm Sanken Electric. Not exactly companies in countries with poor U.S. relations – but overseas owners, nonetheless. “This proposed investment in Polar will crowd in private capital, which will help make Polar a U.S.-based, independent foundry,” said U.S. Commerce secretary Gina Raimondo. “They will be able to expand their customer base and create a stable domestic supply of critical chips, made in America’s heartland.”

Read more of this story at Slashdot.

Reddit Grows, Seeks More AI Deals, Plans ‘Award’ Shops, and Gets Sued

Reddit reported its first results since going public in late March. Yahoo Finance reports:

Daily active users increased 37% year over year to 82.7 million. Weekly active unique users rose 40% from the prior year. Total revenue improved 48% to $243 million, nearly doubling the growth rate from the prior quarter, due to strength in advertising. The company delivered adjusted operating profits of $10 million, versus a $50.2 million loss a year ago. [Reddit CEO Steve] Huffman declined to say when the company would be profitable on a net income basis, noting it’s a focus for the management team. Other areas of focus include rolling out a new user interface this year, introducing shopping capabilities, and searching for another artificial intelligence content licensing deal like the one with Google.

Bloomberg notes that Reddit already “has signed licensing agreements worth $203 million in total, with terms ranging from two to three years. The company generated about $20 million from AI content deals last quarter, and expects to bring in more than $60 million by the end of the year.”
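Bloomberg's three figures are consistent with one another, which a quick back-of-envelope calculation (my arithmetic, not from the report) makes clear:

```python
# Back-of-envelope on Bloomberg's Reddit licensing figures.

total_deals_m = 203       # $203M in signed deals
# With terms of two to three years, the implied annual contract value is:
low, high = total_deals_m / 3, total_deals_m / 2
print(f"implied annual value: ${low:.0f}M-${high:.0f}M")  # ~$68M-$102M

last_quarter_m = 20       # $20M recognized last quarter
print(f"annualized run rate: ${last_quarter_m * 4}M")     # $80M
# Both ranges sit above the >$60M full-year guidance, which fits:
# the deals were signed partway through the year.
```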

And elsewhere Bloomberg writes that Reddit “plans to expand its revenue streams outside of advertising into what Huffman calls the ‘user economy’ — users making money from others on the platform… ”

In the coming months Reddit plans to launch new versions of awards, which are digital gifts users can give to each other, along with other products… Reddit also plans to continue striking data licensing deals with artificial intelligence companies, expanding into international markets and evaluating potential acquisition targets in areas such as search, he said.

Meanwhile, ZDNet notes that this week a Reddit announcement “introduced a new public content policy that lays out a framework for how partners and third parties can access user-posted content on its site.”

The post explains that more and more companies are using unsavory means to access user data in bulk, including Reddit posts. Once a company gets this data, there’s no limit to what it can do with it. Reddit will continue to block “bad actors” that use unauthorized methods to get data, the company says, but it’s taking additional steps to keep users safe from the site’s partners…. Reddit still supports using its data for research: It’s creating a new subreddit — r/reddit4researchers — to support these initiatives, and partnering with OpenMined to help improve research. Private data is, however, going to stay private.

If a company wants to use Reddit data for commercial purposes, including advertising or training AI, it will have to pay. Reddit made this clear by saying, “If you’re interested in using Reddit data to power, augment, or enhance your product or service for any commercial purposes, we require a contract.” To be clear, Reddit is still selling users’ data — it’s just making sure that unscrupulous actors have a tougher time accessing that data for free and researchers have an easier time finding what they need.

And finally, there’s some court action, according to the Register. Reddit “was sued by an unhappy advertiser who claims that internet giga-forum sold ads but provided no way to verify that real people were responsible for clicking on them.”

The complaint [PDF] was filed this week in a U.S. federal court in northern California on behalf of LevelFields, a Virginia-based investment research platform that relies on AI. It says the biz booked pay-per-click ads on the discussion site starting in September 2022… That arrangement called for Reddit to use reasonable means to ensure that LevelFields’ ads were delivered to and clicked on by actual people rather than bots and the like. But according to the complaint, Reddit broke that contract…

LevelFields argues that Reddit is in a particularly good position to track click fraud because it’s serving ads on its own site, as opposed to third-party properties where it may have less visibility into network traffic… Nonetheless, LevelFields’ effort to obtain IP address data to verify the ads it was billed for went unfulfilled. The social media site “provided click logs without IP addresses,” the complaint says. “Reddit represented that it was not able to provide IP addresses.”

“The plaintiffs aspire to have their claim certified as a class action,” the article adds — along with an interesting statistic.
“According to Juniper Research, 22 percent of ad spending last year was lost to click fraud, amounting to $84 billion.”
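That statistic also implies a striking total for global ad spend (my arithmetic, not from the article):

```python
# What the Juniper click-fraud statistic implies about total ad spend.
fraud_b = 84         # $84B reportedly lost to click fraud last year
fraud_share = 0.22   # said to be 22% of ad spending
total_spend_b = fraud_b / fraud_share
print(f"implied total ad spend: ${total_spend_b:.0f}B")  # ~$382B
```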

Read more of this story at Slashdot.

Linux Kernel 6.9 Officially Released

“6.9 is now out,” Linus Torvalds posted on the Linux kernel mailing list, “and last week has looked quite stable (and the whole release has felt pretty normal).”

Phoronix writes that Linux 6.9 “has a number of exciting features and improvements for those habitually updating to the newest version.” And Slashdot reader prisoninmate shared this report from 9to5Linux:

Highlights of Linux kernel 6.9 include Rust support on AArch64 (ARM64) architectures, support for the Intel FRED (Flexible Return and Event Delivery) mechanism for improved low-level event delivery, support for AMD SNP (Secure Nested Paging) guests, and a new dm-vdo (virtual data optimizer) target in device mapper for inline deduplication, compression, zero-block elimination, and thin provisioning.

Linux kernel 6.9 also supports the Named Address Spaces feature in GCC (GNU Compiler Collection) that allows the compiler to better optimize per-CPU data access, adds initial support for FUSE passthrough to allow the kernel to serve files from a user-space FUSE server directly, adds support for the Energy Model to be updated dynamically at run time, and introduces a new LPA2 mode for ARM 64-bit processors…
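For readers who want to check whether their own system ships these 6.9 features, a rough inspection sketch follows. CONFIG_DM_VDO and CONFIG_FUSE_PASSTHROUGH are the upstream kernel option names; the config file location varies by distribution (some expose /proc/config.gz instead), so treat the paths below as illustrative.

```shell
# Rough sketch: checking a running system for the Linux 6.9 features above.
uname -r                                # kernel version: want 6.9 or newer

cfg="/boot/config-$(uname -r)"          # path varies by distro
grep CONFIG_DM_VDO "$cfg"               # dm-vdo deduplication target built?
grep CONFIG_FUSE_PASSTHROUGH "$cfg"     # FUSE passthrough support built?

# If device-mapper tools are installed, list the registered targets:
sudo dmsetup targets | grep -i vdo
```

Because the output depends entirely on the local kernel build, this is an environment-specific check rather than something with a fixed expected result.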

Linux kernel 6.9 will be a short-lived branch supported for only a couple of months. It will be succeeded by Linux kernel 6.10, whose merge window has now been officially opened by Linus Torvalds. Linux kernel 6.10 is expected to be released in mid- or late July 2024.

“Rust language has been updated to version 1.76.0 in Linux 6.9,” according to the article. And Linus Torvalds shared one more detail on the Linux kernel mailing list.

“I now have a more powerful arm64 machine (thanks to Ampere), so the last week I’ve been doing almost as many arm64 builds as I have x86-64, and that should obviously continue during the upcoming merge window too.”

Read more of this story at Slashdot.

Australia Criticized For Ramping Up Gas Extraction Through ‘2050 and Beyond’

Slashdot reader sonlas shared this report from the BBC:
Australia has announced it will ramp up its extraction and use of gas until “2050 and beyond”, despite global calls to phase out fossil fuels. Prime Minister Anthony Albanese’s government says the move is needed to shore up domestic energy supply while supporting a transition to net zero… Australia — one of the world’s largest exporters of liquefied natural gas — has also said the policy is based on “its commitment to being a reliable trading partner”. Released on Thursday, the strategy outlines the government’s plans to work with industry and state leaders to increase both the production and exploration of the fossil fuel. The government will also continue to support the expansion of the country’s existing gas projects, the largest of which are run by Chevron and Woodside Energy Group in Western Australia…

The policy has sparked fierce backlash from environmental groups and critics — who say it puts the interest of powerful fossil fuel companies before people. “Fossil gas is not a transition fuel. It’s one of the main contributors to global warming and has been the largest source of increases of CO2 [emissions] over the last decade,” Prof Bill Hare, chief executive of Climate Analytics and author of numerous UN climate change reports, told the BBC… Successive Australian governments have touted gas as a key “bridging fuel”, arguing that turning it off too soon could have “significant adverse impacts” on Australia’s economy and energy needs. But Prof Hare and other scientists have warned that building a net zero policy around gas will “contribute to locking in 2.7-3C global warming, which will have catastrophic consequences”.

Read more of this story at Slashdot.

RHEL (and Rocky and Alma Linux) 9.4 Released – Plus AI Offerings

Red Hat Enterprise Linux 9.4 has been released. But also released is Rocky Linux 9.4, reports 9to5Linux:

Rocky Linux 9.4 also adds openSUSE’s KIWI next-generation appliance builder as a new image build workflow and process for building images that are feature-complete with the old images… Under the hood, Rocky Linux 9.4 includes the same updated components from the upstream Red Hat Enterprise Linux 9.4.

This week also saw the release of Alma Linux 9.4 stable (the “forever-free enterprise Linux distribution… binary compatible with RHEL.”) The Register points out that while Alma Linux is “still supporting some aging hardware that the official RHEL 9.4 drops, what’s new is largely the same in them both.”

And last week also saw the launch of the AlmaLinux High-Performance Computing and AI Special Interest Group (SIG). HPCWire reports:

“AlmaLinux’s status as a community-driven enterprise Linux holds incredible promise for the future of HPC and AI,” said Hayden Barnes, SIG leader and Senior Open Source Community Manager for AI Software at HPE. “Its transparency and stability empowers researchers, developers and organizations to collaborate, customize and optimize their computing environments, fostering a culture of innovation and accelerating breakthroughs in scientific research and cutting-edge AI/ML.”

And this week, InfoWorld reported:

Red Hat has launched Red Hat Enterprise Linux AI (RHEL AI), described as a foundation model platform that allows users to more seamlessly develop and deploy generative AI models. Announced May 7 and available now as a developer preview, RHEL AI includes the Granite family of open-source large language models (LLMs) from IBM, InstructLab model alignment tools based on the LAB (Large-Scale Alignment for Chatbots) methodology, and a community-driven approach to model development through the InstructLab project, Red Hat said.

Read more of this story at Slashdot.