OpenAI Unveils Plans For Tackling Abuse Ahead of 2024 Elections

Sara Fischer reports via Axios: ChatGPT maker OpenAI says it’s rolling out new policies and tools meant to combat misinformation and abuse ahead of 2024 elections worldwide. 2024 is one of the biggest election years in history — with high-stakes races in over 50 countries globally. It’s also the first major election cycle where generative AI tools will be widely available to voters, governments and political campaigns. In a statement published Monday, OpenAI said it will lean into verified news and image authenticity programs to ensure users get access to high-quality information throughout elections.

The company will add digital credentials set by a third-party coalition of AI firms that encode details about the origin of images created using its image generator tool, DALL-E 3. The firm says it’s experimenting with a new “provenance classifier” tool that can detect AI-generated images that have been made using DALL-E. It hopes to make that tool available to its first group of testers, including journalists, researchers and other tech platforms, for feedback. OpenAI will continue integrating its ChatGPT platform with real-time news reporting globally, “including attribution and links,” it said. That effort builds on a first-of-its-kind deal announced with German media giant Axel Springer last year that offers ChatGPT users summaries of select global news content from the company’s outlets. In the U.S., OpenAI says it’s working with the nonpartisan National Association of Secretaries of State to direct ChatGPT users to CanIVote.org for authoritative information on U.S. voting and elections.
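
The digital credentials described here follow the coalition's (C2PA's) approach: a signed manifest recording an image's origin is embedded in the file itself. Below is a purely illustrative sketch of what consuming such credentials could look like; the manifest shape and the readManifest helper are hypothetical stand-ins, not OpenAI's or the coalition's actual API.

```ts
// Illustrative sketch only: what checking an image's provenance credentials
// might look like. The manifest shape and parser below are hypothetical
// stand-ins; real C2PA-style credentials are signed binary structures
// embedded in the image file, not a convenient JSON object.
interface ProvenanceManifest {
  generator: string;       // tool that claims to have produced the image
  createdAt: string;       // ISO-8601 timestamp
  signatureValid: boolean; // whether the cryptographic signature verified
}

// Hypothetical extractor; a real implementation would locate, parse, and
// cryptographically verify the embedded manifest.
function readManifest(image: Uint8Array): ProvenanceManifest | null {
  void image; // stub: no real parser here
  return null;
}

function describeOrigin(image: Uint8Array): string {
  const m = readManifest(image);
  if (m === null) return "no provenance credentials found";
  if (!m.signatureValid) return "credentials present, but signature does not verify";
  return `created by ${m.generator} at ${m.createdAt}`;
}

console.log(describeOrigin(new Uint8Array())); // "no provenance credentials found"
```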

Because generative AI technology is so new, OpenAI says it’s still working to understand how effective its tools might be for political persuasion. To hedge against abuse, the firm doesn’t allow people to build applications for political campaigning and lobbying, and it doesn’t allow engineers to create chatbots that pretend to be real people, such as candidates. Like most tech firms, it doesn’t allow users to build applications for its ChatGPT platform that deter people from participating in the democratic process.

GTA 5 Actor Goes Nuclear On AI Company That Made Voice Chatbot of Him

Rich Stanton reports via PC Gamer: Ned Luke, the actor whose roles include GTA 5’s Michael De Santa, has gone off on an AI company that released an unlicensed voice chatbot based on the character, and succeeded in having the offending bot nuked from the internet. AI company WAME had tweeted a link to its Michael chatbot on January 14 along with the text: “Any GTA fans around here? Now take your gaming experience to another level. Try having a realistic voice conversation with Michael De Santa, the protagonist of GTA 5, right now!”

Unfortunately for WAME, it quickly attracted the attention of Luke, who does not seem like the type of man to hold back. “This is fucking bullshit WAME,” says Luke (though I am definitely hearing all this in Michael’s voice). “Absolutely nothing cool about ripping people off with some lame computer estimation of my voice. Don’t waste your time on this garbage.” Luke also tagged Rockstar Games and the SAG-AFTRA union; the chatbot and the tweets promoting it have since been deleted. Fellow actors including Roger Clark weighed in with sympathy about how much this kind of stuff sucks, and our boy wasn’t done by a long shot:

“I’m not worried about being replaced, Roger,” says Luke. “I just hate these fuckers, and am pissed as fuck that our shitty union is so damn weak that this will soon be an issue on legit work, not just some lame douchebag tryna make $$ off of our voices.” Luke is here referring to a recent SAG-AFTRA “ethical AI” agreement which has gone down with some of its members like a cup of cold sick. Musician Marilou A. Burnel (not affiliated with WAME) pops up to suggest that “creative people make remarkable things with AI, and the opposite is also true.” “Not using my voice they don’t,” says Luke. WAME issued a statement expressing their “profound understanding and concern.”

“This incident has highlighted the intricate interplay between the advancement of AI technology and the ethical and legal realms,” says WAME. “WAME commits to protecting the rights of voice actors and creators while advancing ethical AI practices.”

Bill Gates Interviews Sam Altman, Who Predicts Fastest Tech Revolution ‘By Far’

This week on his podcast, Bill Gates asked Sam Altman how his team is doing after his (temporary) ouster. Altman replied: “a lot of people have remarked on the fact that the team has never felt more productive or more optimistic or better. So, I guess that’s like a silver lining of all of this. In some sense, this was like a real moment of growing up for us, we are very motivated to become better, and sort of to become a company ready for the challenges in front of us.”

The rest of their conversation was recorded before the ouster, but it offers fascinating glimpses of the possible future of AI, including the prospect of very rapid improvement. Altman suggests it will be easier to understand how a creative work gets “encoded” in an AI than it would be in a human brain. “There has been some very good work on interpretability, and I think there will be more over time… The little bits we do understand have, as you’d expect, been very helpful in improving these things. We’re all motivated to really understand them, scientific curiosity aside, but the scale of these is so vast….”

BILL GATES: I’m pretty sure, within the next five years, we’ll understand it. In terms of both training efficiency and accuracy, that understanding would let us do far better than we’re able to do today.
SAM ALTMAN: A hundred percent. You see this in a lot of the history of technology where someone makes an empirical discovery. They have no idea what’s going on, but it clearly works. Then, as the scientific understanding deepens, they can make it so much better.

BILL GATES: Yes, in physics, biology, it’s sometimes just messing around, and it’s like, whoa — how does this actually come together…? When you look at the next two years, what do you think some of the key milestones will be?

SAM ALTMAN: Multimodality will definitely be important.
BILL GATES: Which means speech in, speech out?
SAM ALTMAN: Speech in, speech out. Images. Eventually video. Clearly, people really want that…. [B]ut maybe the most important areas of progress will be around reasoning ability. Right now, GPT-4 can reason in only extremely limited ways. Also reliability. If you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn’t always know which one, and you’d like to get the best response of 10,000 each time, and so that increase in reliability will be important.

Customizability and personalization will also be very important. People want very different things out of GPT-4: different styles, different sets of assumptions. We’ll make all that possible, and then also the ability to have it use your own data. The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources, all of that. Those will be some of the most important areas of improvement.
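
Altman’s reliability point, wanting “the best response of 10,000 each time,” is essentially best-of-n sampling plus reranking. A minimal sketch of the idea, assuming hypothetical generate and score callbacks (neither is a real OpenAI API):

```ts
// Best-of-n sketch: sample n candidate answers and keep the highest-scoring
// one. `generate` and `score` are hypothetical stand-ins for a model call
// and a quality heuristic (e.g. a reward model).
async function bestOfN(
  prompt: string,
  n: number,
  generate: (prompt: string) => Promise<string>,
  score: (prompt: string, answer: string) => Promise<number>,
): Promise<string> {
  if (n < 1) throw new Error("n must be at least 1");
  const candidates = await Promise.all(
    Array.from({ length: n }, () => generate(prompt)),
  );
  const scores = await Promise.all(candidates.map((c) => score(prompt, c)));
  let best = 0;
  for (let i = 1; i < n; i++) if (scores[i] > scores[best]) best = i;
  return candidates[best];
}
```

The obvious catch is cost: this spends n model calls per question, which is why a model that reliably produces its best answer in a single sample matters so much.
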
Areas where Altman sees potential are healthcare, education, and especially computer programming. “If you make a programmer three times more effective, it’s not just that they can do three times more stuff, it’s that they can — at that higher level of abstraction, using more of their brainpower — they can now think of totally different things. It’s like, going from punch cards to higher level languages didn’t just let us program a little faster — it let us do these qualitatively new things. And we’re really seeing that…

“I think it’s worth always putting it in context of this technology that, at least for the next five or ten years, will be on a very steep improvement curve. These are the stupidest the models will ever be.”

He predicts the fastest technology revolution “by far,” worrying about “the speed with which society is going to have to adapt, and that the labor market will change.” But he soon adds: “We started investing a little bit in robotics companies. On the physical hardware side, there’s finally, for the first time that I’ve ever seen, really exciting new platforms being built there.”

And at some point Altman tells Gates he’s optimistic that AI could contribute to helping humans get along with each other.

Cloudflare CTO Predicts Coding AIs Will Bring More Productivity, Urges ‘Data Fluidity’

Serverless JavaScript is hosted in an edge network or by an HTTP caching service (and only runs when requested), explains Cloudflare. “Developers can write and deploy JavaScript functions that process HTTP requests before they travel all the way to the origin server.”
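
Concretely, such an edge function is just an HTTP handler deployed in front of the origin. A minimal sketch in the style of Cloudflare’s Workers module syntax:

```ts
// Minimal edge function: answer some requests at the edge and pass the rest
// through to the origin server unchanged.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/healthz") {
      // Handled entirely at the edge; the origin never sees this request.
      return new Response("ok", { status: 200 });
    }
    // Everything else travels on to the origin as usual.
    return fetch(request);
  },
};
```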

Their platform for serverless JavaScript will soon have built-in AI features, Cloudflare’s CTO announced today, “so that developers have a rich toolset at their disposal.”
A developer platform without AI isn’t going to be much use. It’ll be a bit like a developer platform that can’t do floating point arithmetic, or handle a list of data. We’re going to see every developer platform have AI capability built in because these capabilities will allow developers to make richer experiences for users…

As I look back at 40 years of my programming life, I haven’t been this excited about a new technology… ever. That’s because AI is going to be a pervasive change to how programs get written, who writes programs and how all of us interact with software… I think it’ll make us more productive and make more people programmers.

But developers on the platform will also be able to train and upload their own models to run on Cloudflare’s global network:
Unlike a database where data might largely be stored and accessed infrequently, AI systems are alive with moving data. To accommodate that, platforms need to stop treating data as something to lock in developers with. Data needs to be free to move from system to system, from platform to platform, without transfer fees, egress or other nonsense. If we want a world of AI, we need a world of data fluidity.

Will AI Become the New McKinsey?

Sci-fi writer Ted Chiang, writing for the New Yorker: So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey — a consulting firm that works with ninety per cent of the Fortune 100 — and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.

A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.

Racehorse Owned by 391 Users of a Sports-Investment App Wins Kentucky Derby

This year’s Kentucky Derby was won by a 15-to-1 longshot named Mage, reports sports magazine the Athletic. It was only the horse’s third race — but the story gets even stranger:
Mage is owned by a collective of horse racing fans that doesn’t follow your standard ownership structure. Fans can purchase shares of a horse for as little as $50. It’s a publicly owned racehorse. Mage has 391 owners on the Commonwealth app. Many were at the 2023 Kentucky Derby wearing Mage hats and were shown during NBC’s lead-up coverage of the race.
The Commonwealth app even tried to arrange watch parties for the horse race across the country. Shareholders own 25% of the racehorse, reports Sporting News:

Although horses are the most common commodity, there are golfers on the app as well, specifically Joey Vrzich and Cooper Dossey, who it says are “coming Spring 2023.” Commonwealth has a partnership with WinStar Farm, a premier breeder of thoroughbreds… “Together we are going to revolutionize the ownership experience and give the world access to horses that were once unavailable to the public,” Brian Doxtator, CEO of Commonwealth, said, per the collective’s website.

According to Action Network’s Darren Rovell, Commonwealth owners will be paid out $94.52 per share (prior to taxes) for the win.

Before the race, the Athletic asked seven experts for their picks — and only one of them had even mentioned Mage. (The Athletic’s senior sports betting editor correctly picked Mage and “Angel of Empire” to finish somewhere in the top three positions.) But just for fun, the Athletic also asked three AI chatbots — and none of them picked the winner:

Bing AI: That’s a tough question. I don’t have a personal preference, but based on the odds and the past performances, I think Forte has a good chance of winning. He has been very consistent and impressive in his races, and he has a top trainer and jockey. He also has a favorable post position at No. 14, which gives him some room to maneuver and avoid traffic. But as I said, anything can happen in the Kentucky Derby, and there are many other talented horses in the field. It will be an exciting race to watch.

Google Bard: I like the chances of Tapit Trice to win the 2023 Kentucky Derby. He is the second-lowest odds horse at 5-1, and has won twice and finished second twice in six career races. He has one of the best pedigrees a horse could ask for, being sired by Justify, the 2018 Triple Crown winner. He is also coming off an impressive showing in the 2023 Blue Grass Stakes, where he finished second just barely behind Verifying.
Here are some other horses that I like to finish in the money… The Kentucky Derby is always a tough race to predict, but I think these horses have a good chance of winning or finishing in the money. [Bard then listed five more of the 18 horses — one of which ultimately finished third.]

ChatGPT: As an artificial intelligence language model, I don’t have personal opinions or preferences. However, based on the current information I have, I cannot accurately predict the winner of the Kentucky Derby. Horse racing involves many variables, including the skill of the jockey, the health and performance of the horse, and even the weather conditions on race day. The outcome of any given race is always uncertain and subject to chance. It’s important to note that gambling can be risky, and I don’t encourage it.

China’s AI Industry Barely Slowed By US Chip Export Rules

Export controls imposed by the U.S. on microchips, aimed at hindering China’s technological advancement, have had minimal effect on the country’s tech sector. While the restrictions have forced Nvidia to slow the chip variants it sells in China, they have not halted China’s progress in areas like AI: the reduced performance is still an improvement for Chinese firms, and researchers are finding ways to overcome the limitations. Reuters reports: Nvidia has created variants of its chips for the Chinese market that are slowed down to meet U.S. rules. Industry experts told Reuters the newest one – the Nvidia H800, announced in March – will likely take 10% to 30% longer to carry out some AI tasks and could double some costs compared with Nvidia’s fastest U.S. chips. Even the slowed Nvidia chips represent an improvement for Chinese firms. Tencent Holdings, one of China’s largest tech companies, in April estimated that systems using Nvidia’s H800 will cut the time it takes to train its largest AI system by more than half, from 11 days to four days. “The AI companies that we talk to seem to see the handicap as relatively small and manageable,” said Charlie Chai, a Shanghai-based analyst with 86Research.

Part of the U.S. strategy in setting the rules was to avoid such a shock that the Chinese would ditch U.S. chips altogether and redouble their own chip-development efforts. “They had to draw the line somewhere, and wherever they drew it, they were going to run into the challenge of how to not be immediately disruptive, but how to also over time degrade China’s capability,” said one chip industry executive who requested anonymity to talk about private discussions with regulators. The export restrictions have two parts. The first puts a ceiling on a chip’s ability to calculate extremely precise numbers, a measure designed to limit supercomputers that can be used in military research. Chip industry sources said that was an effective action. But calculating extremely precise numbers is less relevant in AI work like large language models where the amount of data the chip can chew through is more important. […] The second U.S. limit is on chip-to-chip transfer speeds, which does affect AI. The models behind technologies such as ChatGPT are too large to fit onto a single chip. Instead, they must be spread over many chips – often thousands at a time — which all need to communicate with one another.

Nvidia has not disclosed the China-only H800 chip’s performance details, but a specification sheet seen by Reuters shows a chip-to-chip speed of 400 gigabytes per second, less than half the peak speed of 900 gigabytes per second for Nvidia’s flagship H100 chip available outside China. Some in the AI industry believe that is still plenty of speed. Naveen Rao, chief executive of a startup called MosaicML that specializes in helping AI models to run better on limited hardware, estimated a 10-30% system slowdown. “There are ways to get around all this algorithmically,” he said. “I don’t see this being a boundary for a very long time — like 10 years.” Moreover, AI researchers are trying to slim down the massive systems they have built to cut the cost of training products similar to ChatGPT and other processes. Those will require fewer chips, reducing chip-to-chip communications and lessening the impact of the U.S. speed limits.
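
Those two link speeds make the raw penalty easy to estimate. A back-of-envelope sketch, with an illustrative payload size that is not from the article:

```ts
// Back-of-envelope: time to move a fixed payload across one chip-to-chip
// link at the two published peak speeds. The payload size is illustrative.
const payloadGB = 10;      // assumed data exchanged per step (not from article)
const h100GBps = 900;      // flagship H100 peak chip-to-chip speed
const h800GBps = 400;      // China-only H800, per the spec sheet Reuters saw

const tH100 = payloadGB / h100GBps; // ~0.011 s per transfer
const tH800 = payloadGB / h800GBps; // ~0.025 s per transfer
console.log(`raw link penalty: ${(tH800 / tH100).toFixed(2)}x`); // 2.25x
```

The raw 2.25x per-transfer penalty overstates the end-to-end cost, since communication overlaps with computation and can be reduced algorithmically, which is consistent with the 10-30% system slowdowns estimated above.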

Google Has More Powerful AI, Says Engineer Fired Over Sentience Claims

Remember that Google engineer/AI ethicist who was fired last summer after claiming their LaMDA LLM had become sentient?

In a new interview with Futurism, Blake Lemoine now says the “best way forward” for humankind’s future relationship with AI is “understanding that we are dealing with intelligent artifacts. There’s a chance that — and I believe it is the case — that they have feelings and they can suffer and they can experience joy, and humans should at least keep that in mind when interacting with them.” (Although earlier in the interview, Lemoine concedes “Is there a chance that people, myself included, are projecting properties onto these systems that they don’t have? Yes. But it’s not the same kind of thing as someone who’s talking to their doll.”)

But he also thinks there’s a lot of research happening inside corporations, adding that “The only thing that has changed from two years ago to now is that the fast movement is visible to the public.” For example, Lemoine says Google almost released its AI-powered Bard chatbot last fall, but “in part because of some of the safety concerns I raised, they delayed it… I don’t think they’re being pushed around by OpenAI. I think that’s just a media narrative. I think Google is going about doing things in what they believe is a safe and responsible manner, and OpenAI just happened to release something.”

“[Google] still has far more advanced technology that they haven’t made publicly available yet. Something that does more or less what Bard does could have been released over two years ago. They’ve had that technology for over two years. What they’ve spent the intervening two years doing is working on the safety of it — making sure that it doesn’t make things up too often, making sure that it doesn’t have racial or gender biases, or political biases, things like that. That’s what they spent those two years doing…

“And in those two years, it wasn’t like they weren’t inventing other things. There are plenty of other systems that give Google’s AI more capabilities, more features, make it smarter. The most sophisticated system I ever got to play with was heavily multimodal — not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it. That’s the one that I was like, ‘you know this thing, this thing’s awake.’ And they haven’t let the public play with that one yet. But Bard is kind of a simplified version of that, so it still has a lot of the kind of liveliness of that model…

“[W]hat it comes down to is that we aren’t spending enough time on transparency or model understandability. I’m of the opinion that we could be using the scientific investigative tools that psychology has come up with to understand human cognition, both to understand existing AI systems and to develop ones that are more easily controllable and understandable.”

So how will AI and humans coexist? “Over the past year, I’ve been leaning more and more towards we’re not ready for this, as people,” Lemoine says toward the end of the interview. “We have not yet sufficiently answered questions about human rights — throwing nonhuman entities into the mix needlessly complicates things at this point in history.”
