Netflix To Take On Google and Amazon By Building Its Own Ad Server

Lauren Forristal writes via TechCrunch: Netflix announced during its Upfronts presentation on Wednesday that it’s launching its own advertising technology platform only a year and a half after entering the ads business. This move pits it against other industry heavyweights with ad servers, like Google, Amazon and Comcast. The announcement marks a significant shake-up in the streaming giant’s advertising approach. The company originally partnered with Microsoft to develop its ad tech, letting Netflix enter the ad space quickly and catch up with rivals like Hulu, which has had its own ad server for over a decade.

With the launch of its in-house ad tech, Netflix is poised to take full control of its advertising future. This strategic move will empower the company to create targeted and personalized ad experiences that resonate with its massive user base of 270 million subscribers. […] Netflix didn’t say exactly how its in-house solution will change the way ads are delivered, but it will likely move away from generic advertisements. According to the Financial Times, Netflix wants to experiment with “episodic” campaigns, which involve a series of ads that tell a story rather than delivering repetitive ads. During the presentation, Netflix also noted that it’ll expand its buying capabilities this summer to include The Trade Desk, Google’s Display & Video 360, and Magnite as partners. Notably, competitor Disney+ also has an advertising agreement with The Trade Desk. Netflix also touted the success of its ad-supported tier, reporting that 40 million global monthly active users opt for the plan. The ad tier had around 5 million users within six months of launching.

Read more of this story at Slashdot.

US Regulators Approve Rule That Could Speed Renewables

Longtime Slashdot reader necro81 writes: The U.S. Federal Energy Regulatory Commission (FERC), which controls interstate energy infrastructure, approved a rule Monday that should boost new transmission infrastructure and make it easier to connect renewable energy projects.

Some 11,000 projects totaling 2,600 GW of capacity are in planning, waiting to break ground, or waiting to connect to the grid. But they’re stymied by the need for costly upgrades, or simply waiting for review. The frustrations are many. Each proposed project undergoes a lengthy grid-impact study that assesses the cost of necessary upgrades. Each project is considered in isolation, regardless of whether similar projects nearby could share the upgrade costs or augur different improvements. The planning process tends to be reactive, examining only the applications at hand rather than considering trends over the coming years. And it’s a first-come, first-served queue: a project that’s ready to break ground must wait behind another that’s still securing funding or permits.

Two years in development, the dryly named Improvements to Generator Interconnection Procedures and Agreements rule directs utility operators to plan infrastructure improvements using a 20-year forecast of new energy sources and increased demand. Rather than examining each project in isolation, similar projects will be clustered and examined together. And instead of a first-come, first-served serial process, operators will prioritize first-ready projects, allowing shovel-ready ones to jump the queue. The expectation is that these new rules will speed up and streamline the process of developing and connecting new energy projects through more holistic planning, penalties for delays, sensible cost-sharing for upgrades, and justification for long-term investments.
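The queue-ordering change described above is simple enough to sketch. Below is a toy Python illustration (the project names, regions, and readiness flags are invented, and this is not FERC's actual methodology) contrasting the old serial queue with cluster-based, first-ready ordering:

```python
# Toy sketch: first-come, first-served vs. clustered, first-ready ordering.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    region: str   # stand-in for "similar projects nearby"
    ready: bool   # permits and funding secured

queue = [
    Project("Wind A", "midwest", ready=False),
    Project("Solar B", "midwest", ready=True),
    Project("Solar C", "southwest", ready=True),
]

# Old process: strictly serial, each project studied in isolation.
serial_order = [p.name for p in queue]

# New process: cluster projects by region so upgrade costs can be shared,
# then study shovel-ready projects first within each cluster.
clusters: dict[str, list[Project]] = {}
for p in queue:
    clusters.setdefault(p.region, []).append(p)

first_ready = [
    p.name
    for region in clusters
    for p in sorted(clusters[region], key=lambda p: not p.ready)
]

print(serial_order)  # ['Wind A', 'Solar B', 'Solar C']
print(first_ready)   # ['Solar B', 'Wind A', 'Solar C']
```

Under the old ordering, the shovel-ready Solar B waits behind Wind A; under the new ordering it moves to the front of its cluster.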

Read more of this story at Slashdot.

Project Astra Is Google’s ‘Multimodal’ Answer to the New ChatGPT

At Google I/O today, Google introduced a “next-generation AI assistant” called Project Astra that can “make sense of what your phone’s camera sees,” reports Wired. It follows yesterday’s launch of GPT-4o, a new AI model from OpenAI that can quickly respond to prompts via voice and talk about what it ‘sees’ through a smartphone camera or on a computer screen. It “also uses a more humanlike voice and emotionally expressive tone, simulating emotions like surprise and even flirtatiousness,” notes Wired. From the report: In response to spoken commands, Astra was able to make sense of objects and scenes as viewed through the devices’ cameras, and converse about them in natural language. It identified a computer speaker and answered questions about its components, recognized a London neighborhood from the view out of an office window, read and analyzed code from a computer screen, composed a limerick about some pencils, and recalled where a person had left a pair of glasses. […] Google says Project Astra will be made available through a new interface called Gemini Live later this year. [Demis Hassabis, the executive leading the company’s effort to reestablish leadership in AI] said that the company is still testing several prototype smart glasses and has yet to make a decision on whether to launch any of them.

Hassabis believes that imbuing AI models with a deeper understanding of the physical world will be key to further progress in AI, and to making systems like Project Astra more robust. Other frontiers of AI, including Google DeepMind’s work on game-playing AI programs, could help, he says. Hassabis and others hope such work could be revolutionary for robotics, an area that Google is also investing in.
“A multimodal universal agent assistant is on the sort of track to artificial general intelligence,” Hassabis said in reference to a hoped-for but largely undefined future point where machines can do anything and everything that a human mind can. “This is not AGI or anything, but it’s the beginning of something.”

Read more of this story at Slashdot.

Google Targets Filmmakers With Veo, Its New Generative AI Video Model

At its I/O developer conference today, Google announced Veo, its latest generative AI video model, which “can generate ‘high-quality’ 1080p resolution videos over a minute in length in a wide variety of visual and cinematic styles,” reports The Verge. From the report: Veo has “an advanced understanding of natural language,” according to Google’s press release, enabling the model to understand cinematic terms like “timelapse” or “aerial shots of a landscape.” Users can direct their desired output using text, image, or video-based prompts, and Google says the resulting videos are “more consistent and coherent,” depicting more realistic movement for people, animals, and objects throughout shots. Google DeepMind CEO Demis Hassabis said in a press preview on Monday that video results can be refined using additional prompts and that Google is exploring additional features to enable Veo to produce storyboards and longer scenes.

As is the case with many of these AI model previews, most folks hoping to try Veo out themselves will likely have to wait a while. Google says it’s inviting select filmmakers and creators to experiment with the model to determine how it can best support creatives and will build on these collaborations to ensure “creators have a voice” in how Google’s AI technologies are developed. Some Veo features will also be made available to “select creators in the coming weeks” in a private preview inside VideoFX — you can sign up for the waitlist here for an early chance to try it out. Otherwise, Google is also planning to add some of its capabilities to YouTube Shorts “in the future.” Along with its new AI models and tools, Google said it’s expanding its AI content watermarking and detection technology. The company’s new upgraded SynthID watermark imprinting system “can now mark video that was digitally generated, as well as AI-generated text,” reports The Verge in a separate report.

Read more of this story at Slashdot.

Intel Aurora Supercomputer Breaks Exascale Barrier

Josh Norem reports via ExtremeTech: At the recent International Supercomputing Conference, ISC 2024, Intel’s newest Aurora supercomputer, installed at Argonne National Laboratory, raised a few eyebrows by finally surpassing the exascale barrier. Before this, only AMD’s Frontier system had achieved this level of performance. Intel also achieved what it says is the world’s best performance for AI at 10.61 “AI exaflops.” Intel reported the news on its blog, stating Aurora was now officially the fastest supercomputer for AI in the world. It shares the distinction with Argonne National Laboratory, which houses the system, and Hewlett Packard Enterprise (HPE), which built it; Intel says the system was at 87% functionality for the recent tests. In the all-important Linpack (HPL) test, Aurora hit 1.012 exaflops, nearly doubling the performance on tap since its initial “partial run” in late 2023, when it hit just 585.34 petaflops. The company said then that it expected Aurora to eventually cross the exascale barrier, and now it has.

Intel says for the ISC 2024 tests, Aurora was operating with 9,234 nodes. The company notes it ranked second overall in Linpack, meaning it’s still unable to dethrone AMD’s Frontier system, which is also an HPE supercomputer. AMD’s Frontier was the first supercomputer to break the exascale barrier in June 2022. Frontier sits at around 1.2 exaflops in Linpack, so Intel is knocking on its door but still has a way to go before it can topple it. However, Intel says Aurora came in first in the mixed-precision Linpack (HPL-MxP) benchmark, reportedly highlighting its unparalleled AI performance. Intel’s Aurora supercomputer uses the company’s latest CPU and GPU hardware, with 21,248 Sapphire Rapids Xeon CPUs and 63,744 Ponte Vecchio GPUs. When it’s fully operational later this year, Intel believes the system will be capable of crossing the 2-exaflop barrier.
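The quoted figures hang together arithmetically. A quick back-of-the-envelope check (assuming the commonly cited Aurora node configuration of 2 Xeon CPUs and 6 Ponte Vecchio GPUs per node, which is not stated in the article):

```python
# Sanity-check the Aurora numbers quoted above. The per-node CPU/GPU
# counts are an assumption; the totals come from the article.
total_cpus, total_gpus = 21_248, 63_744
cpus_per_node, gpus_per_node = 2, 6  # assumed node configuration

full_nodes = total_cpus // cpus_per_node
assert full_nodes == total_gpus // gpus_per_node  # both imply 10,624 nodes

active_nodes = 9_234  # nodes used in the ISC 2024 runs
print(f"{active_nodes / full_nodes:.0%}")  # 87%, matching Intel's stated figure
```

Both hardware totals imply the same full machine size, and running 9,234 of those nodes lines up with the "87% functionality" Intel reported.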

Read more of this story at Slashdot.

Biden Admin Shells Out $120 Million To Return Chip Startup To US Ownership

Brandon Vigliarolo reports via The Register: Not everything in the semiconductor industry is about shearing off every last nanometer, which is why the Biden administration is splashing out CHIPS Act funding to those pursuing less cutting-edge processor production. Case in point: today’s announcement that Bloomington, Minnesota-based Polar Semiconductor could get up to $120 million in CHIPS funds to double production capacity over the next two years, along with a possible buyout to return the business to U.S. hands.

Polar, which manufactures semiconductors used primarily for the energy industry and electric vehicles, will use the funds to double its production capacity for sensor and power chips and upgrade its manufacturing kit, as well as add 160 jobs to boot. Along with expanding production, the U.S. Department of Commerce said the funding would trigger additional private capital investment to “transform Polar from a majority foreign-owned in-house manufacturer to a majority U.S.-owned commercial foundry, expanding opportunities for U.S. chip designers to innovate and produce technologies domestically.” In other words – sure it’ll expand the output, but the real win is another majority U.S.-owned foundry for the White House to tout.

According to its website, Polar is currently owned by Korean conglomerate SK Group and serves as the primary fab and engineering center for Japanese firm Sanken Electric. Not exactly companies in countries with poor U.S. relations – but overseas owners, nonetheless. “This proposed investment in Polar will crowd in private capital, which will help make Polar a U.S.-based, independent foundry,” said U.S. Commerce Secretary Gina Raimondo. “They will be able to expand their customer base and create a stable domestic supply of critical chips, made in America’s heartland.”

Read more of this story at Slashdot.

Australia Criticized For Ramping Up Gas Extraction Through ‘2050 and Beyond’

Slashdot reader sonlas shared this report from the BBC:
Australia has announced it will ramp up its extraction and use of gas until “2050 and beyond”, despite global calls to phase out fossil fuels. Prime Minister Anthony Albanese’s government says the move is needed to shore up domestic energy supply while supporting a transition to net zero… Australia — one of the world’s largest exporters of liquefied natural gas — has also said the policy is based on “its commitment to being a reliable trading partner”. Released on Thursday, the strategy outlines the government’s plans to work with industry and state leaders to increase both the production and exploration of the fossil fuel. The government will also continue to support the expansion of the country’s existing gas projects, the largest of which are run by Chevron and Woodside Energy Group in Western Australia…

The policy has sparked fierce backlash from environmental groups and critics — who say it puts the interest of powerful fossil fuel companies before people. “Fossil gas is not a transition fuel. It’s one of the main contributors to global warming and has been the largest source of increases of CO2 [emissions] over the last decade,” Prof Bill Hare, chief executive of Climate Analytics and author of numerous UN climate change reports, told the BBC… Successive Australian governments have touted gas as a key “bridging fuel”, arguing that turning it off too soon could have “significant adverse impacts” on Australia’s economy and energy needs. But Prof Hare and other scientists have warned that building a net zero policy around gas will “contribute to locking in 2.7-3C global warming, which will have catastrophic consequences”.

Read more of this story at Slashdot.

Linux Kernel 6.9 Officially Released

“6.9 is now out,” Linus Torvalds posted on the Linux kernel mailing list, “and last week has looked quite stable (and the whole release has felt pretty normal).”

Phoronix writes that Linux 6.9 “has a number of exciting features and improvements for those habitually updating to the newest version.” And Slashdot reader prisoninmate shared this report from 9to5Linux:

Highlights of Linux kernel 6.9 include Rust support on AArch64 (ARM64) architectures, support for the Intel FRED (Flexible Return and Event Delivery) mechanism for improved low-level event delivery, support for AMD SNP (Secure Nested Paging) guests, and a new dm-vdo (virtual data optimizer) target in device mapper for inline deduplication, compression, zero-block elimination, and thin provisioning.

Linux kernel 6.9 also supports the Named Address Spaces feature in GCC (GNU Compiler Collection) that allows the compiler to better optimize per-CPU data access, adds initial support for FUSE passthrough to allow the kernel to serve files from a user-space FUSE server directly, adds support for the Energy Model to be updated dynamically at run time, and introduces a new LPA2 mode for ARM 64-bit processors…

Linux kernel 6.9 will be a short-lived branch supported for only a couple of months. It will be succeeded by Linux kernel 6.10, whose merge window has now been officially opened by Linus Torvalds. Linux kernel 6.10 is expected to be released in mid- or late July 2024.

“Rust language has been updated to version 1.76.0 in Linux 6.9,” according to the article. And Linus Torvalds shared one more detail on the Linux kernel mailing list.

“I now have a more powerful arm64 machine (thanks to Ampere), so the last week I’ve been doing almost as many arm64 builds as I have x86-64, and that should obviously continue during the upcoming merge window too.”

Read more of this story at Slashdot.