In a Milestone, the US Exceeds 5 Million Solar Installations

According to the Solar Energy Industries Association (SEIA), the U.S. has officially surpassed 5 million solar installations. “The 5 million milestone comes just eight years after the U.S. achieved its first million in 2016 — a stark contrast to the four decades it took to reach that initial milestone since the first grid-connected solar project in 1973,” reports Electrek. From the report: Since the beginning of 2020, more than half of all U.S. solar installations have come online, and over 25% have been activated since the Inflation Reduction Act became law 20 months ago. Solar arrays have been installed on homes and businesses and as utility-scale solar farms. The U.S. solar market was valued at $51 billion in 2023. Even with changes in state policies, market trends indicate robust growth in solar installations across the U.S. According to SEIA forecasts, the number of solar installations is expected to double to 10 million by 2030 and triple to 15 million by 2034.

The residential sector represents 97% of all U.S. solar installations and has set annual installation records in each of the last five years and in 10 of the last 12. The growth in residential solar reflects its proven value as an investment for homeowners looking to manage their energy costs. California is the frontrunner with 2 million solar installations, though recent state policies have significantly damaged its rooftop solar market. Meanwhile, other states are experiencing rapid growth: Illinois, which had only 2,500 solar installations in 2017, now boasts over 87,000, and Florida has seen its installations surge from 22,000 in 2017 to 235,000 today. By 2030, 22 states or territories are anticipated to surpass 100,000 solar installations. The U.S. has enough solar installed to cover every residential rooftop in the Four Corners states of Colorado, Utah, Arizona, and New Mexico.
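
A quick sanity check on SEIA's forecast: growing from 5 million cumulative installations in 2024 to 10 million by 2030 and 15 million by 2034 implies roughly 12% annual growth in the installed base. A minimal Python sketch of that arithmetic (the targets come from the article; the compound-growth framing is just an illustration):

```python
# Back-of-the-envelope check: what compound annual growth rate (CAGR)
# in cumulative installations does each SEIA target imply?

def implied_cagr(start: float, end: float, years: int) -> float:
    """Annual growth rate that turns `start` into `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

MILLION = 1_000_000
base = 5 * MILLION  # cumulative U.S. installations in 2024 (from the article)

for target, year in [(10 * MILLION, 2030), (15 * MILLION, 2034)]:
    rate = implied_cagr(base, target, year - 2024)
    print(f"{target / MILLION:.0f}M by {year}: ~{rate:.1%}/yr")
# 10M by 2030: ~12.2%/yr
# 15M by 2034: ~11.6%/yr
```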

Read more of this story at Slashdot.

Netflix To Take On Google and Amazon By Building Its Own Ad Server

Lauren Forristal writes via TechCrunch: Netflix announced during its Upfronts presentation on Wednesday that it’s launching its own advertising technology platform only a year and a half after entering the ads business. This move pits it against industry heavyweights with their own ad servers, like Google, Amazon and Comcast, and marks a significant shake-up in the streaming giant’s advertising approach. The company originally partnered with Microsoft to develop its ad tech, letting Netflix enter the ad space quickly and catch up with rivals like Hulu, which has had its own ad server for over a decade.

With the launch of its in-house ad tech, Netflix is poised to take full control of its advertising future. The move gives the company the ability to build targeted, personalized ad experiences for its massive base of 270 million subscribers. […] Netflix didn’t say exactly how its in-house solution will change the way ads are delivered, but it will likely move away from generic advertisements. According to the Financial Times, Netflix wants to experiment with “episodic” campaigns, which involve a series of ads that tell a story rather than delivering repetitive ads. During the presentation, Netflix also noted that it’ll expand its buying capabilities this summer to include The Trade Desk, Google’s Display & Video 360 and Magnite as partners. Notably, competitor Disney+ also has an advertising agreement with The Trade Desk. Netflix also touted the success of its ad-supported tier, reporting that 40 million global monthly active users opt for the plan. The ad tier had around 5 million users within six months of launching.
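
Netflix hasn’t said how its ad server will implement the “episodic” campaigns described in the FT report, but the core idea, serving each viewer the next installment of a story rather than repeating one creative, reduces to simple per-viewer sequencing. A hypothetical sketch; every name and structure here is invented, not Netflix’s design:

```python
# Hypothetical per-viewer sequencing for an "episodic" ad campaign:
# serve the next installment of the story instead of repeating the
# same creative. Invented for illustration -- Netflix has not
# published its ad server's design.
from collections import defaultdict

CAMPAIGN = ["ep1_meet_hero.mp4", "ep2_conflict.mp4", "ep3_resolution.mp4"]

# viewer_id -> index of the next episode to show
progress: dict[str, int] = defaultdict(int)

def next_ad(viewer_id: str) -> str:
    """Return the viewer's next unseen episode; repeat the finale after that."""
    idx = progress[viewer_id]
    progress[viewer_id] = idx + 1
    return CAMPAIGN[min(idx, len(CAMPAIGN) - 1)]

print(next_ad("viewer-42"))  # ep1_meet_hero.mp4
print(next_ad("viewer-42"))  # ep2_conflict.mp4
```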

Read more of this story at Slashdot.

Bay Area City Orders Scientists To Stop Controversial Cloud Brightening Experiment

Last month, researchers from the University of Washington began an experiment on a decommissioned naval ship in Alameda to test whether spraying saltwater into the air could brighten clouds and cool the planet. The project was forced to stop this month after the city got word of what was going on. SFGATE reports: According to a city press release, scientists were ordered to halt the experiment because it violated Alameda’s lease with the USS Hornet, the aircraft carrier from which researchers were spraying saltwater into the air using “a machine resembling a snowmaker.” The news was first reported by the Alameda Post. “City staff are working with a team of biological and hazardous materials consultants to independently evaluate the health and environmental safety of this particular experiment,” the press release states. Specifically, chemicals present in the experiment’s aerosol spray are being evaluated to determine whether they pose any threats to humans, animals or the environment. So far, there isn’t any evidence that they do, the city stated.

The prospect of a city-conducted review was not unexpected, the University of Washington said in a statement shared with SFGATE. “In fact, the CAARE (Coastal Aerosol Research and Engagement) facility is designed to help regulators, community members and others engage with the research closely, and we consider the current interactions with the city to be an integral part of that process,” the statement reads. “We are happy to support their review and it has been a highly constructive process so far.” The marine cloud brightening (MCB) technique involves spraying fine particles of sea salt into the atmosphere from ships or specialized machines. These sea salt particles are chosen because they are a natural source of cloud-forming aerosols and can increase the number of cloud droplets, making the clouds more reflective. The particles sprayed are extremely small, about 1/1000th the width of a human hair, ensuring they remain suspended in the air and interact with cloud droplets effectively.

By reflecting more sunlight, these brightened clouds can reduce the amount of solar energy reaching the Earth’s surface, leading to localized cooling. If implemented on a large scale, this cooling effect could potentially offset some of the warming caused by greenhouse gases.
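
The mechanism being tested is the Twomey effect: for a fixed amount of cloud water, more cloud-forming particles mean more, smaller droplets and a more reflective cloud. A minimal sketch using the standard textbook susceptibility approximation dA/dlnN ≈ A(1-A)/3; the albedo and droplet-number values are illustrative assumptions, not figures from the Alameda experiment:

```python
# Twomey effect, roughly: at fixed liquid water content, boosting the
# cloud droplet number N brightens the cloud. Uses the standard
# susceptibility approximation dA/dlnN ~= A(1 - A)/3; all numbers are
# illustrative, not from the Alameda experiment.
import math

def brightened_albedo(albedo: float, n_ratio: float) -> float:
    """Cloud albedo after multiplying droplet number N by n_ratio."""
    return albedo + albedo * (1 - albedo) / 3 * math.log(n_ratio)

A0 = 0.5  # typical marine stratocumulus albedo (assumed)
for boost in (1.5, 2.0, 3.0):
    print(f"N x{boost:.1f}: albedo {A0:.2f} -> {brightened_albedo(A0, boost):.3f}")
# Doubling N raises the albedo by about 0.06, i.e. the cloud reflects
# roughly 6% more of the incoming sunlight.
```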

You can learn more about the experiment here.

Read more of this story at Slashdot.

US Regulators Approve Rule That Could Speed Renewables

Longtime Slashdot reader necro81 writes: The U.S. Federal Energy Regulatory Commission (FERC), which regulates interstate energy infrastructure, approved a rule Monday that should boost new transmission infrastructure and make it easier to connect renewable energy projects. (More coverage here, here, and here.)

Some 11,000 projects totaling 2,600 GW of capacity are in planning, waiting to break ground, or waiting to connect to the grid. But they’re stymied by the need for costly upgrades, or simply waiting for review. The frustrations are many. Each proposed project undergoes a lengthy grid-impact study that assesses the cost of necessary upgrades. Each project is considered in isolation, regardless of whether similar projects nearby could share the upgrade costs or augur different improvements. The planning process tends to be reactive, examining only the applications currently in the queue, rather than considering trends over the coming years. And it’s a first-come, first-served queue: a project that’s ready to break ground must wait behind another that’s still securing funding or permitting.

Two years in development, the dryly named Improvements to Generator Interconnection Procedures and Agreements directs utility operators to plan infrastructure improvements against a 20-year forecast of new energy sources and increased demand. Rather than examining each project in isolation, similar projects will be clustered and studied together. And instead of a first-come, first-served serial process, operators will take a first-ready approach, allowing shovel-ready projects to jump the queue. The expectation is that these new rules will speed up and streamline the process of developing and connecting new energy projects through more holistic planning, penalties for delays, sensible cost-sharing for upgrades, and justification for long-term investments.
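
To make the reform concrete, here is a minimal sketch of the two queue ideas the rule combines: studying electrically similar projects as one cluster so upgrade costs can be shared, and letting shovel-ready projects move ahead of stalled ones. The fields and readiness test are invented for illustration; FERC’s actual study process is far more involved:

```python
# Sketch of "first-ready, first-served" plus cluster studies, the two
# queue reforms described above. Fields and readiness criteria are
# invented for illustration.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    region: str   # stand-in for "electrically nearby" (assumed)
    ready: bool   # site control, funding, and permits in hand

queue = [
    Project("Wind A", "west", ready=False),
    Project("Solar B", "west", ready=True),
    Project("Storage C", "east", ready=True),
]

# First-ready: shovel-ready projects jump ahead of stalled ones.
queue.sort(key=lambda p: not p.ready)

# Cluster study: group nearby projects so one grid-impact study covers
# all of them and upgrade costs can be allocated across the cluster.
clusters: dict[str, list[Project]] = defaultdict(list)
for p in queue:
    clusters[p.region].append(p)

for region, projects in clusters.items():
    print(region, "->", [p.name for p in projects])
# west -> ['Solar B', 'Wind A']   (ready project studied first)
# east -> ['Storage C']
```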

Read more of this story at Slashdot.

Project Astra Is Google’s ‘Multimodal’ Answer to the New ChatGPT

At Google I/O today, Google introduced a “next-generation AI assistant” called Project Astra that can “make sense of what your phone’s camera sees,” reports Wired. It follows yesterday’s launch of GPT-4o, a new AI model from OpenAI that can quickly respond to prompts via voice and talk about what it ‘sees’ through a smartphone camera or on a computer screen. It “also uses a more humanlike voice and emotionally expressive tone, simulating emotions like surprise and even flirtatiousness,” notes Wired. From the report: In response to spoken commands, Astra was able to make sense of objects and scenes as viewed through the devices’ cameras, and converse about them in natural language. It identified a computer speaker and answered questions about its components, recognized a London neighborhood from the view out of an office window, read and analyzed code from a computer screen, composed a limerick about some pencils, and recalled where a person had left a pair of glasses. […] Google says Project Astra will be made available through a new interface called Gemini Live later this year. [Demis Hassabis, the executive leading the company’s effort to reestablish leadership in AI] said that the company is still testing several prototype smart glasses and has yet to make a decision on whether to launch any of them.

Hassabis believes that imbuing AI models with a deeper understanding of the physical world will be key to further progress in AI, and to making systems like Project Astra more robust. Other frontiers of AI, including Google DeepMind’s work on game-playing AI programs, could help, he says. Hassabis and others hope such work could be revolutionary for robotics, an area that Google is also investing in.

“A multimodal universal agent assistant is on the sort of track to artificial general intelligence,” Hassabis said in reference to a hoped-for but largely undefined future point where machines can do anything and everything that a human mind can. “This is not AGI or anything, but it’s the beginning of something.”
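
Project Astra has no public API, but Google’s publicly available google-generativeai Python client gives a feel for the mixed image-and-text prompting the demo showed. A minimal sketch, not Astra itself; the model name, file name, and key handling are assumptions:

```python
# A taste of "multimodal" prompting with Google's public Gemini client
# (pip install google-generativeai pillow). This is NOT Project Astra;
# model id, file name, and API key are placeholder assumptions.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model id

# Mixed image + text prompt, analogous to Astra answering questions
# about what the phone camera sees.
frame = Image.open("desk_photo.jpg")  # hypothetical camera frame
response = model.generate_content(
    [frame, "What objects are on this desk, and where are my glasses?"]
)
print(response.text)
```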

Read more of this story at Slashdot.

Google Targets Filmmakers With Veo, Its New Generative AI Video Model

At its I/O developer conference today, Google announced Veo, its latest generative AI video model, which “can generate ‘high-quality’ 1080p resolution videos over a minute in length in a wide variety of visual and cinematic styles,” reports The Verge. From the report: Veo has “an advanced understanding of natural language,” according to Google’s press release, enabling the model to understand cinematic terms like “timelapse” or “aerial shots of a landscape.” Users can direct their desired output using text, image, or video-based prompts, and Google says the resulting videos are “more consistent and coherent,” depicting more realistic movement for people, animals, and objects throughout shots. Google DeepMind CEO Demis Hassabis said in a press preview on Monday that video results can be refined using additional prompts and that Google is exploring additional features to enable Veo to produce storyboards and longer scenes.

As is the case with many of these AI model previews, most folks hoping to try Veo out themselves will likely have to wait a while. Google says it’s inviting select filmmakers and creators to experiment with the model to determine how it can best support creatives and will build on these collaborations to ensure “creators have a voice” in how Google’s AI technologies are developed. Some Veo features will also be made available to “select creators in the coming weeks” in a private preview inside VideoFX — you can sign up for the waitlist here for an early chance to try it out. Otherwise, Google is also planning to add some of its capabilities to YouTube Shorts “in the future.” Along with its new AI models and tools, Google said it’s expanding its AI content watermarking and detection technology. The company’s new upgraded SynthID watermark imprinting system “can now mark video that was digitally generated, as well as AI-generated text,” reports The Verge in a separate report.

Read more of this story at Slashdot.