Adobe Scolded For Selling ‘Ansel Adams-Style’ Images Generated By AI

The Ansel Adams estate said it was “officially on our last nerve” after Adobe was caught selling AI-generated images imitating the late photographer’s work. The Verge reports: While Adobe permits AI-generated images to be hosted and sold on its stock image platform, users are required to hold the appropriate rights or ownership over the content they upload. Adobe Stock’s Contributor Terms specifically prohibit content “created using prompts containing other artist names, or created using prompts otherwise intended to copy another artist.” Adobe responded to the callout, saying it had removed the offending content and had privately messaged the Adams estate, asking it to get in touch directly in the future. The Adams estate, however, said it had contacted Adobe directly multiple times since August 2023.

“Assuming you want to be taken seriously re: your purported commitment to ethical, responsible AI, while demonstrating respect for the creative community, we invite you to become proactive about complaints like ours, & to stop putting the onus on individual artists/artists’ estates to continuously police our IP on your platform, on your terms,” said the Adams estate on Threads. “It’s past time to stop wasting resources that don’t belong to you.”

Adobe Stock Vice President Matthew Smith previously told The Verge that the company generally moderates all “crowdsourced” Adobe Stock assets before they are made available to customers, employing a “variety” of methods that include “an experienced team of moderators who review submissions.” As of January 2024, Smith said the strongest action the company can take to enforce its platform rules is to block Adobe Stock users who violate them. Bassil Elkadi, Adobe’s Director of Communications and Public Relations, told The Verge that Adobe is “actively in touch with Ansel Adams on this matter,” and that “appropriate steps were taken given the user violated Stock terms.” The Adams estate has since thanked Adobe for removing the images, and said that it expects “it will stick this time.” “We don’t have a problem with anyone taking inspiration from Ansel’s photography,” said the Adams estate. “But we strenuously object to the unauthorized use of his name to sell products of any kind, including digital products, and this includes AI-generated output — regardless of whether his name has been used on the input side, or whether a given model has been trained on his work.”

Read more of this story at Slashdot.

AI Researchers Analyze Similarities of Scarlett Johansson’s Voice to OpenAI’s ‘Sky’

AI models can evaluate how similar voices are to each other. So NPR asked forensic voice experts at Arizona State University to compare the voice and speech patterns of OpenAI’s “Sky” to Scarlett Johansson’s…

The researchers measured Sky, based on audio from demos OpenAI delivered last week, against the voices of around 600 professional actresses. They found that Johansson’s voice is more similar to Sky than the voices of 98% of the other actresses.

Yet she wasn’t always the top hit in the multiple AI models that scanned the Sky voice. The researchers found that Sky was also reminiscent of other Hollywood stars, including Anne Hathaway and Keri Russell. The analysis of Sky often rated Hathaway and Russell as being even more similar to the AI than Johansson.

The lab study shows that the voices of Sky and Johansson have undeniable commonalities — something many listeners believed, and which can now be supported by statistical evidence, according to Arizona State University computer scientist Visar Berisha, who led the voice analysis in the school’s College of Health Solutions and the College of Engineering. “Our analysis shows that the two voices are similar but likely not identical,” Berisha said…
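NPR’s description maps onto a standard speaker-verification pipeline: embed each voice as a fixed-length vector, score similarity between vectors, then rank. The ASU team hasn’t published its models or data, so the sketch below is a rough stand-in that uses the open-source Resemblyzer encoder; the audio file paths and clip collection are placeholders, not the study’s materials.

```python
# A rough stand-in for the percentile comparison described above.
# Resemblyzer's d-vector encoder substitutes for ASU's unpublished
# models; all audio files are hypothetical placeholders.
import glob
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

def embed(path: str) -> np.ndarray:
    # One fixed-length speaker embedding per audio file.
    return encoder.embed_utterance(preprocess_wav(path))

def cos(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two speaker embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sky = embed("sky_demo.wav")
johansson = embed("johansson_sample.wav")
references = [embed(p) for p in glob.glob("actress_clips/*.wav")]  # ~600 in the study

target = cos(sky, johansson)
scores = [cos(sky, r) for r in references]
# Percentile rank: the share of reference voices that resemble Sky
# less than Johansson does (the ASU analysis reported ~98%).
pct = 100 * sum(s < target for s in scores) / len(scores)
print(f"Johansson is more similar to Sky than {pct:.0f}% of reference voices")
```

The same ranking logic also explains the Hathaway and Russell results: percentile rank only says how a voice compares against the reference pool, not that it is the single closest match.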

OpenAI maintains that Sky was not created with Johansson in mind, saying it was never meant to mimic the famous actress. “It’s not her voice. It’s not supposed to be. I’m sorry for the confusion. Clearly you think it is,” OpenAI CEO Sam Altman said at a conference this week. He said whether one voice is really similar to another will always be the subject of debate.

Read more of this story at Slashdot.

Elon Musk Says AI Could Eliminate Our Need to Work at Jobs

In the future, “Probably none of us will have a job,” Elon Musk said Thursday, speaking remotely to the VivaTech 2024 conference in Paris. Instead, jobs will be optional — something we’d do like a hobby — “But otherwise, AI and the robots will provide any goods and services that you want.”
CNN reports that Musk added this would require “universal high income” — and “There would be no shortage of goods or services.”
In a job-free future, though, Musk questioned whether people would feel emotionally fulfilled. “The question will really be one of meaning — if the computer and robots can do everything better than you, does your life have meaning?” he said. “I do think there’s perhaps still a role for humans in this — in that we may give AI meaning.”

CNN accompanied their article with this counterargument:
In January, researchers at MIT’s Computer Science and Artificial Intelligence Lab found workplaces are adopting AI much more slowly than some had expected and feared. The report also said the majority of jobs previously identified as vulnerable to AI were not economically beneficial for employers to automate at that time. Experts also largely believe that many jobs requiring high emotional intelligence and human interaction, such as mental health professionals, creatives and teachers, will not be replaced.

CNN notes that Musk “also used his stage time to urge parents to limit the amount of social media that children can see because ‘they’re being programmed by a dopamine-maximizing AI’.”

Read more of this story at Slashdot.

FTC Chair: AI Models Could Violate Antitrust Laws

An anonymous reader quotes a report from The Hill: Federal Trade Commission (FTC) Chair Lina Khan said Wednesday that companies that train their artificial intelligence (AI) models on data from news websites, artists’ creations or people’s personal information could be in violation of antitrust laws. At The Wall Street Journal’s “Future of Everything Festival,” Khan said the FTC is examining ways in which major companies’ data scraping could hinder competition or potentially violate people’s privacy rights. “The FTC Act prohibits unfair methods of competition and unfair or deceptive acts or practices,” Khan said at the event. “So, you can imagine, if somebody’s content or information is being scraped that they have produced, and then is being used in ways to compete with them and to dislodge them from the market and divert businesses, in some cases, that could be an unfair method of competition.”

Khan said concern also lies in companies using people’s data without their knowledge or consent, which can also raise legal concerns. “We’ve also seen a lot of concern about deception, about unfairness, if firms are making one set of representations when you’re signing up to use them, but then are secretly or quietly using the data you’re feeding them — be it your personal data, be it, if you’re a business, your proprietary data, your competitively significant data — if they’re then using that to feed their models, to compete with you, to abuse your privacy, that can also raise legal concerns,” she said.

Khan also recognized people’s concerns about companies retroactively changing their terms of service to let them use customers’ content, including personal photos or family videos, to feed into their AI models. “I think that’s where people feel a sense of violation, that that’s not really what they signed up for and oftentimes, they feel that they don’t have recourse,” Khan said. “Some of these services are essential for navigating day to day life,” she continued, “and so, if the choice — ‘choice’ — you’re being presented with is: sign off on not just being endlessly surveilled, but all of that data being fed into these models, or forego using these services entirely, I think that’s a really tough spot to put people in.” Khan said she thinks many government agencies have an important role to play as AI continues to develop, saying, “I think in Washington, there’s increasingly a recognition that we can’t, as a government, just be totally hands off and stand out of the way.” You can watch the interview with Khan here.

Read more of this story at Slashdot.

DOJ Makes Its First Known Arrest For AI-Generated CSAM

In what’s believed to be the first case of its kind, the U.S. Department of Justice arrested a Wisconsin man last week for generating and distributing AI-generated child sexual abuse material (CSAM). Even if no children were used to create the material, the DOJ “looks to establish a judicial precedent that exploitative materials are still illegal,” reports Engadget. From the report: The DOJ says 42-year-old software engineer Steven Anderegg of Holmen, WI, used a fork of the open-source AI image generator Stable Diffusion to make the images, which he then used to try to lure an underage boy into sexual situations. The latter will likely play a central role in the eventual trial for the four counts of “producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16.” The government says Anderegg’s images showed “nude or partially clothed minors lasciviously displaying or touching their genitals or engaging in sexual intercourse with men.” The DOJ claims he used specific prompts, including negative prompts (extra guidance for the AI model, telling it what not to produce) to spur the generator into making the CSAM.
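For readers unfamiliar with the term: a negative prompt is a second text input that steers a diffusion model away from whatever it describes, rather than toward it. A benign sketch of that mechanism using Hugging Face’s diffusers library follows; the model ID matches the Runway-released Stable Diffusion 1.5 mentioned below, and the prompts are purely illustrative.

```python
# Benign illustration of positive vs. negative prompting in a
# diffusion pipeline; nothing here is specific to the case above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a watercolor landscape of rolling hills at dawn",
    negative_prompt="blurry, oversaturated, low quality",  # what NOT to produce
).images[0]
image.save("landscape.png")
```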

Cloud-based image generators like Midjourney and DALL-E 3 have safeguards against this type of activity, but Ars Technica reports that Anderegg allegedly used Stable Diffusion 1.5, a variant with fewer boundaries. Stability AI told the publication that the fork was produced by Runway ML. According to the DOJ, Anderegg communicated online with the 15-year-old boy, describing how he used the AI model to create the images. The agency says the accused sent the teen direct messages on Instagram, including several AI images of “minors lasciviously displaying their genitals.” To its credit, Instagram reported the images to the National Center for Missing and Exploited Children (NCMEC), which alerted law enforcement. Anderegg could face five to 70 years in prison if convicted on all four counts. He’s currently in federal custody before a hearing scheduled for May 22.

Read more of this story at Slashdot.

Project Astra Is Google’s ‘Multimodal’ Answer to the New ChatGPT

At Google I/O today, Google introduced a “next-generation AI assistant” called Project Astra that can “make sense of what your phone’s camera sees,” reports Wired. It follows yesterday’s launch of GPT-4o, a new AI model from OpenAI that can quickly respond to prompts via voice and talk about what it ‘sees’ through a smartphone camera or on a computer screen. It “also uses a more humanlike voice and emotionally expressive tone, simulating emotions like surprise and even flirtatiousness,” notes Wired. From the report: In response to spoken commands, Astra was able to make sense of objects and scenes as viewed through the devices’ cameras, and converse about them in natural language. It identified a computer speaker and answered questions about its components, recognized a London neighborhood from the view out of an office window, read and analyzed code from a computer screen, composed a limerick about some pencils, and recalled where a person had left a pair of glasses. […] Google says Project Astra will be made available through a new interface called Gemini Live later this year. [Demis Hassabis, the executive leading the company’s effort to reestablish leadership in AI] said that the company is still testing several prototype smart glasses and has yet to make a decision on whether to launch any of them.
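Project Astra itself isn’t publicly available, but the basic multimodal pattern it demonstrates, a single request that mixes an image with a natural-language question, is already exposed in Google’s public Gemini API. A minimal sketch, offered as an analogy rather than Astra’s actual interface; the API key, image file, and question are placeholders.

```python
# Minimal multimodal request via Google's public Gemini API; this
# approximates Astra-style camera Q&A, but it is not Astra itself.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")            # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")

frame = Image.open("camera_frame.jpg")             # e.g., a phone-camera still
response = model.generate_content(
    [frame, "What object is in this photo, and what is it used for?"]
)
print(response.text)
```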

Hassabis believes that imbuing AI models with a deeper understanding of the physical world will be key to further progress in AI, and to making systems like Project Astra more robust. Other frontiers of AI, including Google DeepMind’s work on game-playing AI programs could help, he says. Hassabis and others hope such work could be revolutionary for robotics, an area that Google is also investing in.
“A multimodal universal agent assistant is on the sort of track to artificial general intelligence,” Hassabis said in reference to a hoped-for but largely undefined future point where machines can do anything and everything that a human mind can. “This is not AGI or anything, but it’s the beginning of something.”

Read more of this story at Slashdot.

Apple Will Revamp Siri To Catch Up To Its Chatbot Competitors

An anonymous reader quotes a report from the New York Times: Apple’s top software executives decided early last year that Siri, the company’s virtual assistant, needed a brain transplant. The decision came after the executives Craig Federighi and John Giannandrea spent weeks testing OpenAI’s new chatbot, ChatGPT. The product’s use of generative artificial intelligence, which can write poetry, create computer code and answer complex questions, made Siri look antiquated, said two people familiar with the company’s work, who didn’t have permission to speak publicly. Introduced in 2011 as the original virtual assistant in every iPhone, Siri had been limited for years to individual requests and had never been able to follow a conversation. It often misunderstood questions. ChatGPT, on the other hand, knew that if someone asked for the weather in San Francisco and then said, “What about New York?” that user wanted another forecast.
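The follow-up behavior the Times describes falls out of how chat models are invoked: the client resends the whole conversation on every turn, so an ambiguous question like “What about New York?” gets interpreted against the earlier exchange. A minimal sketch using OpenAI’s Python client; the model name is illustrative, and no live weather tool is attached here, since the point is only the context carry-over.

```python
# Multi-turn context: the second request includes the full history,
# which is what lets the model resolve "What about New York?"
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "user", "content": "What's the weather like in San Francisco?"}]
first = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# Ambiguous on its own; unambiguous given the turns above.
history.append({"role": "user", "content": "What about New York?"})
second = client.chat.completions.create(model="gpt-4o", messages=history)
print(second.choices[0].message.content)
```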

The realization that new technology had leapfrogged Siri set in motion the tech giant’s most significant reorganization in more than a decade. Determined to catch up in the tech industry’s A.I. race, Apple has made generative A.I. a tent pole project — the company’s special, internal label that it uses to organize employees around once-in-a-decade initiatives. Apple is expected to show off its A.I. work at its annual developers conference on June 10 when it releases an improved Siri that is more conversational and versatile, according to three people familiar with the company’s work, who didn’t have permission to speak publicly. Siri’s underlying technology will include a new generative A.I. system that will allow it to chat rather than respond to questions one at a time. The update to Siri is at the forefront of a broader effort to embrace generative A.I. across Apple’s business. The company is also increasing the memory in this year’s iPhones to support its new Siri capabilities. And it has discussed licensing complementary A.I. models that power chatbots from several companies, including Google, Cohere and OpenAI. Further reading: Apple Might Bring AI Transcription To Voice Memos and Notes

Read more of this story at Slashdot.

OpenAI Exec Says Today’s ChatGPT Will Be ‘Laughably Bad’ In 12 Months

At the 27th annual Milken Institute Global Conference on Monday, OpenAI COO Brad Lightcap said today’s ChatGPT chatbot “will be laughably bad” compared to what it’ll be capable of a year from now. “We think we’re going to move toward a world where they’re much more capable,” he added. Business Insider reports: Lightcap says large language models, which people use to help do their jobs and meet their personal goals, will soon be able to take on “more complex work.” He adds that AI will have more of a “system relationship” with users, meaning the technology will serve as a “great teammate” that can assist users on “any given problem.” “That’s going to be a different way of using software,” the OpenAI exec said on the panel regarding AI’s foreseeable capabilities.

In light of his predictions, Lightcap acknowledges that it can be tough for people to “really understand” and “internalize” what a world with robot assistants would look like. But in the next decade, the COO believes talking to an AI like you would with a friend, teammate, or project collaborator will be the new norm. “I think that’s a profound shift that we haven’t quite grasped,” he said, referring to his 10-year forecast. “We’re just scratching the surface on the full kind of set of capabilities that these systems have,” he said at the Milken Institute conference. “That’s going to surprise us.” You can watch/listen to the talk here.

Read more of this story at Slashdot.

AI-Powered ‘HorseGPT’ Fails to Predict This Year’s Kentucky Derby Winner

In 2016, an online “swarm intelligence” platform generated a correct prediction for the Kentucky Derby — naming all four top finishers, in order. (But the next year their predictions weren’t even close, with TechRepublic suggesting 2016’s race had an unusual cluster of just a few top racehorses.)

So this year Decrypt.co tried crafting their own system “that can be called up when the next Kentucky Derby draws near.”
There are a variety of ways to enlist artificial intelligence in horse racing. You could process reams of data based on your own methodology, trust a third-party pre-trained model, or even build a bespoke solution from the ground up. We decided to build a GPT we named HorseGPT to crunch the numbers and make the picks for us…

We carefully curated prompts to instill HorseGPT with expertise in data science specific to horse racing: how weather affects times, the role of jockeys and riding styles, the importance of post positions, and so on. We then fed it a mix of research papers and blogs covering the theoretical aspects of wagering, and layered on practical knowledge: how to read racing forms, what the statistics mean, which factors are most predictive, expert betting strategies, and more. Finally, we gave HorseGPT a wealth of historical Kentucky Derby data, arming it with the raw information needed to put its freshly imparted skills to use.
We unleashed HorseGPT on official racing forms for this year’s Derby. We asked HorseGPT to carefully analyze each race’s form, identify the top contenders, and recommend wager types and strategies based on deep background knowledge derived from race statistics.
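Decrypt built HorseGPT as a custom GPT inside ChatGPT and didn’t publish its exact configuration, but the workflow it describes, domain instructions plus curated reference material supplied as context, translates directly to an API call. A hypothetical sketch of that pattern; the prompt text, file name, and model are illustrative, not Decrypt’s actual setup.

```python
# Hypothetical recreation of the HorseGPT pattern: handicapping
# expertise in the system prompt, curated racing data as context.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a horse-racing handicapping analyst. Weigh weather, jockey "
    "and riding style, post position, and past performance when reading "
    "a racing form, and recommend wager types with your reasoning."
)

with open("derby_racing_form.txt") as f:   # placeholder for curated form data
    racing_form = f.read()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Analyze this form, identify the top "
         f"contenders, and recommend wagers:\n\n{racing_form}"},
    ],
)
print(resp.choices[0].message.content)
```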

HorseGPT picked two horses to win — both of which failed to do so. (Sierra Leone did finish second — in a rare photo finish. But Fierceness finished… 15th.) It also recommended the same two horses if you were trying to pick the top two finishers in the correct order — a losing bet, since, again, Fierceness finished 15th.

But even worse, HorseGPT recommended betting on Just a Touch to finish in either first or second place. When the race was over, that horse finished dead last. (And when asked to pick the top three finishers in correct order, HorseGPT stuck with its choices for the top two — which finished #2 and #15 — and, again, Just a Touch, who came in last.)

When The Athletic asked Google Gemini to pick the winner, it first chose Catching Freedom (who finished 4th). But it then gave an entirely different answer when asked to predict the winner “with an Italian accent.”

“The winner of the Kentucky Derby will be… Just a Touch! Si, that’s-a right, the underdog! There will be much-a celebrating in the piazzas, thatta-a I guarantee!”

Again, Just a Touch came in last.

Decrypt noticed the same thing. “Interestingly enough, our HorseGPT AI agent and the other out-of-the-box chatbots seemed to agree with each other,” the site notes, “and with many expert analysts cited by the official Kentucky Derby website.”

But there was one glimmer of insight into the 20-horse race. When asked to choose the top four finishers in order, HorseGPT repeated those same losing picks — which finished #2, #15, and #20. But then it added two more underdogs as candidates for fourth place, “based on their potential to outperform expectations under muddy conditions.”
One of those two horses — Domestic Product — finished in 13th place.

But the other of the two horses was Mystik Dan — who came in first.

Mystik Dan appeared in only one of the six “Top 10 Finishers” lists (created by humans) at the official Kentucky Derby site… in the #10 position.

Read more of this story at Slashdot.