Scientists Turn To AI To Make Beer Taste Even Better

Researchers say they have used AI to make brews even better. From a report: Prof Kevin Verstrepen, of KU Leuven university, who led the research, said AI could help tease apart the complex relationships involved in human aroma perception. “Beer — like most food products — contains hundreds of different aroma molecules that get picked up by our tongue and nose, and our brain then integrates these into one picture. However, the compounds interact with each other, so how we perceive one depends also on the concentrations of the others,” he said.

Writing in the journal Nature Communications, Verstrepen and his colleagues report how they analysed the chemical makeup of 250 commercial Belgian beers of 22 different styles including lagers, fruit beers, blonds, West Flanders ales, and non-alcoholic beers. Among the properties studied were alcohol content, pH, sugar concentration, and the presence and concentration of more than 200 different compounds involved in flavour — such as esters that are produced by yeasts and terpenoids from hops, both of which are involved in creating fruity notes.

A tasting panel of 16 participants sampled and scored each of the 250 beers for 50 different attributes, such as hop flavours, sweetness, and acidity — a process that took three years. The researchers also collected 180,000 reviews of different beers from the online consumer review platform RateBeer. While overall appreciation of the brews was biased by features such as price, and so diverged from the tasting panel’s ratings, the ratings and comments relating to specific features — such as bitterness, sweetness, alcohol and malt aroma — correlated well with those from the panel.
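The study’s core move is machine learning over that chemistry: models trained on compound concentrations to predict how a beer will be scored. A minimal sketch of that setup follows (not the authors’ code; the file name, column names, and model choice are all assumptions):

```python
# Illustrative sketch, not the study's code: predict one panel-scored
# flavor attribute from a beer's chemical profile. Assumes a CSV with
# one row per beer (hypothetical file and column names).
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("beers.csv")           # ~250 beers, one row each
X = df.drop(columns=["panel_score"])    # compound concentrations, pH, ABV...
y = df["panel_score"]                   # e.g. panel rating for "fruitiness"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("held-out R^2:", r2_score(y_test, model.predict(X_test)))

# model.feature_importances_ then hints at which compounds drive the
# perceived flavor -- the interactions the study set out to untangle.
```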

Read more of this story at Slashdot.

World Poker Tour Bets on AI Dubbing of Tournaments for Latin America

Georg Szalai reports via the Hollywood Reporter: The World Poker Tour (WPT) is betting on AI-powered dubbing tools under a partnership with London-based AI dubbing company Papercup that will replace WPT’s traditional localization methods in Latin America. Papercup will work with the World Poker Tour to translate 184 of the franchise’s 44-minute-long episodes into Brazilian Portuguese, the companies said.

“This will amount to nearly 140 hours of content and enable viewers across South America to access WPT’s latest shows and tournaments in their native language quicker than ever before,” they explained. “Forced to deal with lead times of up to six months, the company experienced ongoing challenges with timely content delivery and adaptation.” The Papercup deal will cut those lead times in half, the partners said. “Now the premier poker content produced by WPT will be able to reach international fans watching on OTT platforms, as well as its own FAST channel, faster than ever before,” they touted. Financial terms weren’t disclosed.

Papercup uses a combination of machine-learning tools and expert human translators to “deliver maximal linguistic and tonal accuracy.” Its AI voices are built using data from real voice actors to ensure they “have all the warmth and expressivity of human speech,” it says. “The quality of Papercup dubbing has been second to none. A big part of that is down to their AI voices and expert translators who go through every sentence to make sure the moment is truly captured in the new AI dubs,” said Marc Dion, director of distribution & ad sales at WPT. “The major streaming platforms have very stringent criteria when it comes to dubbed content and if it’s going to connect with our shared viewers.”

Read more of this story at Slashdot.

GitHub Introduces AI-Powered Tool That Suggests Ways It Can Auto-Fix Your Code

“It’s a bad day for bugs,” joked TechCrunch on Wednesday. “Earlier today, Sentry announced its AI Autofix feature for debugging production code…”

And then the same day, BleepingComputer reported that GitHub “introduced a new AI-powered feature capable of speeding up vulnerability fixes while coding.”

This feature is in public beta and automatically enabled on all private repositories for GitHub Advanced Security customers. Known as Code Scanning Autofix and powered by GitHub Copilot and CodeQL, it helps deal with over 90% of alert types in JavaScript, TypeScript, Java, and Python… Once toggled on, it provides potential fixes that GitHub claims will likely address more than two-thirds of the vulnerabilities found while coding, with little or no editing.

“When a vulnerability is discovered in a supported language, fix suggestions will include a natural language explanation of the suggested fix, together with a preview of the code suggestion that the developer can accept, edit, or dismiss,” GitHub’s Pierre Tempel and Eric Tooley said…
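For a sense of what such a suggestion looks like in practice, here is a hypothetical Python example, invented for illustration rather than taken from GitHub’s announcement: a classic SQL injection pattern that code scanning flags, along with the parameterized-query rewrite a fix suggestion would typically propose.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Flagged pattern: user input interpolated straight into SQL,
    # the kind of injection risk a CodeQL alert points at:
    #   conn.execute(f"SELECT * FROM users WHERE name = '{username}'")

    # Suggested fix: a parameterized query, which the driver escapes.
    cursor = conn.execute("SELECT * FROM users WHERE name = ?", (username,))
    return cursor.fetchone()
```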
Last month, the company also enabled push protection by default for all public repositories to stop the accidental exposure of secrets like access tokens and API keys when pushing new code. This was a significant issue in 2023, as GitHub users accidentally exposed 12.8 million authentication and other sensitive secrets across more than 3 million public repositories throughout the year.

GitHub will continue adding support for more languages, with C# and Go coming next, according to their announcement.

“Our vision for application security is an environment where found means fixed.”

Read more of this story at Slashdot.

Nvidia’s Jensen Huang Says AGI Is 5 Years Away

Haje Jan Kamps writes via TechCrunch: Artificial General Intelligence (AGI) — often referred to as “strong AI,” “full AI,” “human-level AI” or “general intelligent action” — represents a significant future leap in the field of artificial intelligence. Unlike narrow AI, which is tailored for specific tasks (such as detecting product flaws, summarizing the news, or building you a website), AGI will be able to perform a broad spectrum of cognitive tasks at or above human levels. Addressing the press this week at Nvidia’s annual GTC developer conference, CEO Jensen Huang appeared to be getting really bored of discussing the subject — not least because he finds himself misquoted a lot, he says. The frequency of the question makes sense: The concept raises existential questions about humanity’s role in and control of a future where machines can outthink, outlearn and outperform humans in virtually every domain. The core of this concern lies in the unpredictability of AGI’s decision-making processes and objectives, which might not align with human values or priorities (a concept explored in depth in science fiction since at least the 1940s). There’s concern that once AGI reaches a certain level of autonomy and capability, it might become impossible to contain or control, leading to scenarios where its actions cannot be predicted or reversed.

When the sensationalist press asks for a timeframe, it is often baiting AI professionals into putting a timeline on the end of humanity — or at least the current status quo. Needless to say, AI CEOs aren’t always eager to tackle the subject. Predicting when we will see a passable AGI depends on how you define AGI, Huang argues, and he draws a couple of parallels: Even with the complications of time zones, you know when the new year happens and 2025 rolls around. If you’re driving to the San Jose Convention Center (where this year’s GTC conference is being held), you generally know you’ve arrived when you can see the enormous GTC banners. The crucial point is that we can agree on how to measure whether you’ve arrived, temporally or geospatially, at where you were hoping to go. “If we specified AGI to be something very specific, a set of tests where a software program can do very well — or maybe 8% better than most people — I believe we will get there within 5 years,” Huang explains. He suggests that the tests could be a legal bar exam, logic tests, economic tests or perhaps the ability to pass a pre-med exam. Unless the questioner can be very specific about what AGI means in the context of the question, he’s not willing to make a prediction. Fair enough.

Read more of this story at Slashdot.

Nvidia Reveals Blackwell B200 GPU, the ‘World’s Most Powerful Chip’ For AI

Sean Hollister reports via The Verge: Nvidia’s must-have H100 AI chip made it a multitrillion-dollar company, one that may be worth more than Alphabet and Amazon, and competitors have been fighting to catch up. But perhaps Nvidia is about to extend its lead — with the new Blackwell B200 GPU and GB200 “superchip.” Nvidia says the new B200 GPU offers up to 20 petaflops of FP4 horsepower from its 208 billion transistors and that a GB200 that combines two of those GPUs with a single Grace CPU can offer 30 times the performance for LLM inference workloads while also potentially being substantially more efficient. It “reduces cost and energy consumption by up to 25x” over an H100, says Nvidia.

Training a 1.8 trillion parameter model would have previously taken 8,000 Hopper GPUs and 15 megawatts of power, Nvidia claims. Today, Nvidia’s CEO says 2,000 Blackwell GPUs can do it while consuming just four megawatts. On a GPT-3 LLM benchmark with 175 billion parameters, Nvidia says the GB200 has a somewhat more modest seven times the performance of an H100, and Nvidia says it offers 4x the training speed. Nvidia told journalists one of the key improvements is a second-gen transformer engine that doubles the compute, bandwidth, and model size by using four bits for each neuron instead of eight (thus, the 20 petaflops of FP4 I mentioned earlier). A second key difference only comes when you link up huge numbers of these GPUs: a next-gen NVLink switch that lets 576 GPUs talk to each other, with 1.8 terabytes per second of bidirectional bandwidth. That required Nvidia to build an entire new network switch chip, one with 50 billion transistors and some of its own onboard compute: 3.6 teraflops of FP8, says Nvidia. Further reading: Nvidia in Talks To Acquire AI Infrastructure Platform Run:ai
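The arithmetic behind “four bits instead of eight” is straightforward: halving the bits per parameter lets the same memory and bandwidth carry twice as many parameters, at the cost of coarser rounding. The sketch below makes that trade concrete with a generic symmetric integer quantizer (an illustration only, not Nvidia’s actual FP4 floating-point format):

```python
# Why 4-bit weights double capacity versus 8-bit: the same memory and
# bandwidth hold twice as many parameters. Generic symmetric integer
# quantizer for illustration -- NOT Nvidia's FP4 floating-point format.
import numpy as np

def quantize(w: np.ndarray, bits: int):
    levels = 2 ** (bits - 1) - 1          # signed grid, e.g. +/-7 for 4-bit
    scale = np.abs(w).max() / levels
    q = np.clip(np.round(w / scale), -levels, levels).astype(np.int8)
    return q, scale

w = np.random.randn(1024).astype(np.float32)
q4, s4 = quantize(w, 4)                   # two weights per byte when packed
q8, s8 = quantize(w, 8)                   # one weight per byte
print("mean abs error, 4-bit:", np.abs(w - q4 * s4).mean())
print("mean abs error, 8-bit:", np.abs(w - q8 * s8).mean())
```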

Read more of this story at Slashdot.

Cognition Emerges From Stealth To Launch AI Software Engineer ‘Devin’

Longtime Slashdot reader ahbond shares a report from VentureBeat: Today, Cognition, a recently formed AI startup backed by Peter Thiel’s Founders Fund and tech industry leaders including former Twitter executive Elad Gil and Doordash co-founder Tony Xu, announced a fully autonomous AI software engineer called “Devin.” While there are multiple coding assistants out there, including the famous Github Copilot, Devin is said to stand out from the crowd with its ability to handle entire development projects end-to-end, from writing the code and fixing its bugs to final execution. It is the first offering of its kind, and the startup has demonstrated that Devin is even capable of handling projects on Upwork. […]

In a blog post today on Cognition’s website, Scott Wu, the founder and CEO of Cognition and an award-winning sports coder, explained Devin can access common developer tools, including its own shell, code editor and browser, within a sandboxed compute environment to plan and execute complex engineering tasks requiring thousands of decisions. The human user simply types a natural language prompt into Devin’s chatbot-style interface, and the AI software engineer takes it from there, developing a detailed, step-by-step plan to tackle the problem. It then begins the project using its developer tools, just as a human would, writing its own code, fixing issues, testing and reporting on its progress in real time, allowing the user to keep an eye on everything as it works. […]
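Cognition has not published Devin’s internals, but the plan-act-observe loop described above is a common agent architecture. Below is a generic, heavily simplified sketch: `llm` is a stub for a language-model call, and a single shell tool stands in for Devin’s richer tool set.

```python
# Generic plan-act-observe agent loop (illustrative only; Cognition
# has not published Devin's internals). `llm` is a stub for a language
# model call; `run_shell` executes in what should be a sandbox.
import subprocess

def llm(prompt: str) -> str:
    """Stand-in for a call to a language-model API."""
    raise NotImplementedError("wire up a model of your choice here")

def run_shell(command: str) -> str:
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=120)
    return result.stdout + result.stderr

def agent(task: str, max_steps: int = 50) -> str:
    history = [f"Task: {task}",
               llm(f"Write a step-by-step plan for: {task}")]
    for _ in range(max_steps):
        action = llm("History so far:\n" + "\n".join(history) +
                     "\nReply with the next shell command, or FINISH.")
        if action.strip() == "FINISH":
            break
        history.append(f"$ {action}\n{run_shell(action)}")  # act, observe
    return llm("Summarize the outcome:\n" + "\n".join(history))
```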

According to demos shared by Wu, Devin is capable of handling a range of tasks in its current form. These range from common engineering projects, like deploying and improving apps and websites end-to-end or finding and fixing bugs in codebases, to more complex undertakings like setting up fine-tuning for a large language model from a link to a research repository on GitHub, or learning how to use unfamiliar technologies. In one case, it learned from a blog post how to run the code to produce images with concealed messages. In another, it handled an Upwork project to run a computer vision model by writing and debugging the code for it. In the SWE-bench test, which challenges AI assistants with GitHub issues from real-world open-source projects, the AI software engineer was able to correctly resolve 13.86% of the cases end-to-end — without any assistance from humans. In comparison, Claude 2 could resolve just 4.80%, while SWE-Llama-13b and GPT-4 could handle 3.97% and 1.74% of the issues, respectively — and all of those models required assistance, in the form of being told which file had to be fixed. Currently, Devin is available only to a select few customers. Bloomberg journalist Ashlee Vance wrote a piece about his experience using it here.

“The Doom of Man is at hand,” captions Slashdot reader ahbond. “It will start with the low-hanging Jira tickets, and in a year or two it will be able to handle 99% of them. In the short term, software engineers may become like bot farmers, herding 10-1000 bots writing code, etc. Welcome to the future.”

Read more of this story at Slashdot.

Qualcomm Launches First True ‘App Store’ For AI With 75 Free Models

Wayne Williams reports via TechRadar: Qualcomm has unveiled its AI Hub, an all-inclusive library of pre-optimized AI models ready for use on devices running on Snapdragon and Qualcomm platforms. These models support a wide range of applications including natural language processing, computer vision, and anomaly detection, and are designed to deliver high performance with minimal power consumption, a critical factor for mobile and edge devices. The AI Hub library currently includes more than 75 popular AI and generative AI models including Whisper, ControlNet, Stable Diffusion, and Baichuan 7B. All models are bundled in various runtimes and are optimized to leverage the Qualcomm AI Engine’s hardware acceleration across all cores (NPU, CPU, and GPU). According to Qualcomm, they’ll deliver four times faster inference.

The AI Hub also handles model translation from the source framework to popular runtimes automatically. It works directly with the Qualcomm AI Engine Direct SDK and applies hardware-aware optimizations. Developers can search for models based on their needs, download them, and integrate them into their applications, saving time and resources. The AI Hub also provides tools and resources for developers to customize these models, and they can fine-tune them using the Qualcomm Neural Processing SDK and the AI Model Efficiency Toolkit, both available on the platform.
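To ground what “bundled in various runtimes” means for an integrating developer, here is a generic inference snippet (an illustration, not Qualcomm’s documentation; the model file and input shape are assumed). A hub model exported to ONNX would load and run roughly like this, with the execution provider choosing the hardware backend:

```python
# Generic ONNX Runtime inference (an illustration, not Qualcomm docs).
# A hub-downloaded model exported to ONNX runs roughly like this; the
# execution provider selects which hardware backend accelerates it.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",                         # hypothetical downloaded artifact
    providers=["CPUExecutionProvider"],   # a vendor EP would target the NPU
)
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed shape
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```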

Read more of this story at Slashdot.

The Intercept, Raw Story, and AlterNet Sue OpenAI and Microsoft

The Intercept, Raw Story, and AlterNet have filed separate lawsuits against OpenAI and Microsoft, alleging copyright infringement and the removal of copyright information while training AI models. The Verge reports: The publications said ChatGPT “at least some of the time” reproduces “verbatim or nearly verbatim copyright-protected works of journalism without providing author, title, copyright or terms of use information contained in those works.” According to the plaintiffs, if ChatGPT trained on material that included copyright information, the chatbot “would have learned to communicate that information when providing responses.”

Raw Story and AlterNet’s lawsuit goes further (PDF), saying OpenAI and Microsoft “had reason to know that ChatGPT would be less popular and generate less revenue if users believed that ChatGPT responses violated third-party copyrights.” Both Microsoft and OpenAI offer legal cover to paying customers in case they are sued for copyright violations arising from their use of Copilot or ChatGPT Enterprise. The lawsuits say that OpenAI and Microsoft are aware of potential copyright infringement. As evidence, the publications point to how OpenAI offers an opt-out system so website owners can block content from its web crawlers. The New York Times also filed a lawsuit in December against OpenAI, claiming ChatGPT faithfully reproduces journalistic work. OpenAI claims the publication exploited a bug in the chatbot to regurgitate its articles.

Read more of this story at Slashdot.

Google Admits Gemini Is ‘Missing the Mark’ With Image Generation of Historical People

Google’s Gemini AI chatbot is under fire for generating historically inaccurate images, particularly when depicting people from different eras and nationalities. Google acknowledges the issue and is actively working to refine Gemini’s accuracy, emphasizing that while diversity in image generation is valued, adjustments are necessary to meet historical accuracy standards. 9to5Google reports: The Twitter/X post in particular that brought this issue to light showed prompts to Gemini asking for the AI to generate images of Australian, American, British, and German women. All four prompts resulted in images of women with darker skin tones, which, as Google’s Jack Krawczyk pointed out, is not incorrect, but may not be what is expected.

But a bigger issue that was noticed in the wake of that post was that Gemini also struggles to accurately depict human beings in a historical context, with those being depicted often having darker skin tones or being of particular nationalities that are not historically accurate. Google, in a statement posted to Twitter/X, admits that Gemini AI image generation is “missing the mark” on historical depictions and that the company is working to improve it. Google also says that the diversity represented in images generated by Gemini is “generally a good thing,” but it’s clear some fine-tuning needs to happen. Further reading: Why Google’s new AI Gemini accused of refusing to acknowledge the existence of white people (The Daily Dot)

Read more of this story at Slashdot.