MTV News Website Goes Dark, Archives Pulled Offline

MTVNews.com has been shut down, with more than two decades’ worth of content no longer available. “Content on its sister site, CMT.com, seems to have met a similar fate,” adds Variety. From the report: In 2023, MTV News was shuttered amid the financial woes of parent company Paramount Global. As of Monday, visitors trying to access MTV News articles on mtvnews.com or mtv.com/news were redirected to the main MTV website.

The now-unavailable content includes decades of music journalism comprising thousands of articles and interviews with countless major artists, dating back to the site’s launch in 1996. Perhaps the most significant loss is MTV News’ vast hip-hop-related archives, particularly its weekly “Mixtape Monday” column, which ran for nearly a decade in the 2000s and 2010s and featured interviews, reviews and more with many artists, producers and others early in their careers. “So, mtvnews.com no longer exists. Eight years of my life are gone without a trace,” Patrick Hosken, former music editor for MTV News, wrote on X. “All because it didn’t fit some executives’ bottom lines. Infuriating is too small a word.”

“sickening (derogatory) to see the entire @mtvnews archive wiped from the internet,” Crystal Bell, culture editor at Mashable and one-time entertainment director of MTV News, posted on X. “decades of music history gone… including some very early k-pop stories.”

“This is disgraceful. They’ve completely wiped the MTV News archive,” longtime Rolling Stone senior writer Brian Hiatt commented. “Decades of pop culture history research material gone, and why?”

The report notes that some MTV News articles may be available via internet archiving services like the Wayback Machine. However, many older articles do not appear to have been archived and remain unavailable.

Researchers Upend AI Status Quo By Eliminating Matrix Multiplication In LLMs

Researchers from UC Santa Cruz, UC Davis, LuxiTech, and Soochow University have developed a new method to run AI language models more efficiently by eliminating matrix multiplication, potentially reducing the environmental impact and operational costs of AI systems. Ars Technica’s Benj Edwards reports: Matrix multiplication (often abbreviated to “MatMul”) is at the center of most neural network computational tasks today, and GPUs are particularly good at executing the math quickly because they can perform large numbers of multiplication operations in parallel. […] In the new paper, titled “Scalable MatMul-free Language Modeling,” the researchers describe creating a custom 2.7 billion parameter model without using MatMul that features similar performance to conventional large language models (LLMs). They also demonstrate running a 1.3 billion parameter model at 23.8 tokens per second on a GPU that was accelerated by a custom-programmed FPGA chip that uses about 13 watts of power (not counting the GPU’s power draw). The implication is that a more efficient FPGA “paves the way for the development of more efficient and hardware-friendly architectures,” they write.
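
To make the core idea concrete, here is a minimal, hypothetical sketch (not the authors’ code) of why ternary weights remove the need for multiplication: if every weight is -1, 0, or +1, the usual dense product y = W @ x reduces to adding or subtracting selected inputs, which accumulator-only hardware such as an FPGA can handle without multiplier units. The simple thresholding scheme and function names below are illustrative assumptions, not the paper’s exact BitLinear-style quantization or token-mixing architecture.

    # Sketch: a "MatMul-free" dense layer using ternary weights {-1, 0, +1}.
    # Illustrative only; the paper's actual quantization and layers differ.
    import numpy as np

    def ternary_quantize(w: np.ndarray, threshold: float = 0.05) -> np.ndarray:
        """Map real-valued weights to {-1, 0, +1} (a stand-in for the paper's scheme)."""
        q = np.zeros_like(w, dtype=np.int8)
        q[w > threshold] = 1
        q[w < -threshold] = -1
        return q

    def matmul_free_linear(w_ternary: np.ndarray, x: np.ndarray) -> np.ndarray:
        """Compute y = W @ x using only additions and subtractions."""
        y = np.zeros(w_ternary.shape[0], dtype=x.dtype)
        for i in range(w_ternary.shape[0]):
            row = w_ternary[i]
            y[i] = x[row == 1].sum() - x[row == -1].sum()  # add or subtract inputs, no multiplies
        return y

    # The ternary path matches an ordinary matmul with the same quantized weights.
    rng = np.random.default_rng(0)
    W = ternary_quantize(rng.normal(size=(4, 8)))
    x = rng.normal(size=8).astype(np.float32)
    assert np.allclose(matmul_free_linear(W, x), W.astype(np.float32) @ x, atol=1e-5)

On real hardware the win comes from replacing multiply-accumulate units with plain accumulators; the loop above is only meant to show that no multiplication is mathematically required.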

The paper doesn’t provide power estimates for conventional LLMs, but this post from UC Santa Cruz estimates about 700 watts for a conventional model. However, in our experience, you can run a 2.7B parameter version of Llama 2 competently on a home PC with an RTX 3060 (that uses about 200 watts peak) powered by a 500-watt power supply. So, if you could theoretically completely run an LLM in only 13 watts on an FPGA (without a GPU), that would be a 38-fold decrease in power usage. The technique has not yet been peer-reviewed, but the researchers — Rui-Jie Zhu, Yu Zhang, Ethan Sifferman, Tyler Sheaves, Yiqiao Wang, Dustin Richmond, Peng Zhou, and Jason Eshraghian — claim that their work challenges the prevailing paradigm that matrix multiplication operations are indispensable for building high-performing language models. They argue that their approach could make large language models more accessible, efficient, and sustainable, particularly for deployment on resource-constrained hardware like smartphones. […]
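
For readers checking the arithmetic above, a quick back-of-the-envelope comparison (assuming the 500-watt power supply stands in for the whole-system budget, and taking the 13-watt FPGA figure at face value, excluding the GPU it was paired with in the demo):

    # Rough power-ratio check for the figures quoted above (assumptions noted in the text).
    psu_watts = 500           # home PC power supply cited above
    fpga_watts = 13           # reported draw of the custom FPGA setup
    conventional_watts = 700  # UC Santa Cruz's estimate for a conventional model
    print(psu_watts / fpga_watts)           # ~38.5 -> the "38-fold" figure
    print(conventional_watts / fpga_watts)  # ~53.8 -> roughly 54x against the 700 W estimate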

The researchers say that scaling laws observed in their experiments suggest that the MatMul-free LM may also outperform traditional LLMs at very large scales. They project that their approach could theoretically intersect with and surpass the performance of standard LLMs at scales around 10^23 FLOPs, which is roughly equivalent to the training compute required for models like Meta’s Llama-3 8B or Llama-2 70B. However, the authors note that their work has limitations. The MatMul-free LM has not been tested on extremely large-scale models (e.g., 100 billion-plus parameters) due to computational constraints. They call for institutions with larger resources to invest in scaling up and further developing this lightweight approach to language modeling.
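
As a rough sanity check on that crossover point (my own estimate, not a calculation from the paper), the common approximation of training compute as 6 x parameters x tokens puts the models named above within an order of magnitude of 10^23 FLOPs:

    # Back-of-the-envelope training-compute estimates using the common 6*N*D rule.
    # Token counts are widely reported figures for the named models; the rule itself is an approximation.
    def train_flops(params: float, tokens: float) -> float:
        return 6 * params * tokens

    print(f"Llama-2 70B ~ {train_flops(70e9, 2e12):.1e} FLOPs")  # ~8.4e+23 (about 2T tokens)
    print(f"Llama-3 8B  ~ {train_flops(8e9, 15e12):.1e} FLOPs")  # ~7.2e+23 (about 15T tokens)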

Meta Is Tagging Real Photos As ‘Made With AI,’ Say Photographers

Since May, Meta has been labeling photos created with AI tools on its social networks to help users better identify the content they’re consuming. However, as TechCrunch’s Ivan Mehta reports, this approach has faced criticism because many photos that were not created with AI tools have been incorrectly labeled, prompting Meta to reevaluate its labeling strategy so it better reflects the actual use of AI in images. From the report: There are plenty of examples of Meta automatically attaching the label to photos that were not created through AI, such as a photo of the Kolkata Knight Riders winning the Indian Premier League cricket tournament. Notably, the label is only visible on the mobile apps and not on the web. Plenty of other photographers have raised concerns over their images having been wrongly tagged with the “Made with AI” label. Their point is that simply editing a photo with a tool should not subject it to the label.

Former White House photographer Pete Souza said in an Instagram post that one of his photos was tagged with the new label. Souza told TechCrunch in an email that Adobe changed how its cropping tool works, and that you now have to “flatten the image” before saving it as a JPEG. He suspects that this step triggered Meta’s algorithm to attach the label. “What’s annoying is that the post forced me to include the ‘Made with AI’ even though I unchecked it,” Souza told TechCrunch.
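
Meta has not published exactly what triggers the label, but reports like Souza’s point at provenance metadata that editing tools embed on export. As a purely speculative illustration, the snippet below scans a JPEG’s raw bytes for the kind of IPTC/C2PA markers such a system might key off; the field names come from those public vocabularies, not from any documented Meta behavior, and the file name is hypothetical.

    # Speculative sketch: look for AI-provenance markers in an exported image's metadata.
    # The markers below come from the IPTC DigitalSourceType vocabulary and C2PA manifests;
    # whether Meta actually keys off them is an assumption, not documented behavior.
    AI_HINTS = [
        b"compositeWithTrainedAlgorithmicMedia",
        b"trainedAlgorithmicMedia",
        b"c2pa",
    ]

    def find_ai_metadata_hints(path: str) -> list[str]:
        """Return any AI-related metadata markers found in the raw file bytes."""
        data = open(path, "rb").read()
        return [hint.decode() for hint in AI_HINTS if hint in data]

    print(find_ai_metadata_hints("exported_photo.jpg"))  # hypothetical file name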

Meta would not answer TechCrunch’s questions on the record about Souza’s experience or about other photographers who said their posts were incorrectly tagged. However, after the story was published, Meta said it is evaluating its approach so that its labels reflect the amount of AI used in an image. “Our intent has always been to help people know when they see content that has been made with AI. We are taking into account recent feedback and continue to evaluate our approach so that our labels reflect the amount of AI used in an image,” a Meta spokesperson told TechCrunch. “For now, Meta provides no separate labels to indicate if a photographer used a tool to clean up their photo, or used AI to create it,” notes TechCrunch. “For users, it might be hard to understand how much AI was involved in a photo.”

“Meta’s label specifies that ‘Generative AI may have been used to create or edit content in this post’ — but only if you tap on the label. Despite this approach, there are plenty of photos on Meta’s platforms that are clearly AI-generated, and Meta’s algorithm hasn’t labeled them.”
