OpenAI Unveils Plans For Tackling Abuse Ahead of 2024 Elections

Sara Fischer reports via Axios: ChatGPT maker OpenAI says it’s rolling out new policies and tools meant to combat misinformation and abuse ahead of 2024 elections worldwide. 2024 is one of the biggest election years in history — with high-stakes races in over 50 countries globally. It’s also the first major election cycle where generative AI tools will be widely available to voters, governments and political campaigns. In a statement published Monday, OpenAI said it will lean into verified news and image authenticity programs to ensure users get access to high-quality information throughout elections.

The company will add digital credentials set by a third-party coalition of AI firms that encode details about the origin of images created using its image generator tool, DALL-E 3. The firm says it’s experimenting with a new “provenance classifier” tool that can detect AI-generated images that have been made using DALL-E. It hopes to make that tool available to its first group of testers, including journalists, researchers and other tech platforms, for feedback. OpenAI will continue integrating its ChatGPT platform with real-time news reporting globally, “including attribution and links,” it said. That effort builds on a first-of-its-kind deal announced with German media giant Axel Springer last year that offers ChatGPT users summaries of select global news content from the company’s outlets. In the U.S., OpenAI says it’s working with the nonpartisan National Association of Secretaries of State to direct ChatGPT users to CanIVote.org for authoritative information on U.S. voting and elections.
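The credential scheme OpenAI is referring to is generally understood to be C2PA (the Coalition for Content Provenance and Authenticity), which embeds a signed manifest inside the image file itself. As a rough illustration of what "credentials in the file" means in practice, here is a minimal Python sketch that heuristically detects a C2PA manifest in a JPEG by looking for JUMBF data in APP11 segments. It checks presence only; actually verifying the signature chain requires a real C2PA tool such as c2patool.

```python
import sys

def has_c2pa_manifest(path: str) -> bool:
    """Heuristically detect an embedded C2PA manifest in a JPEG.

    C2PA content credentials travel as JUMBF boxes inside APP11
    (0xFFEB) segments. This checks presence only; verifying the
    signature chain needs a real C2PA tool (e.g. c2patool).
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":                  # missing SOI: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                      # lost sync with marker stream
            break
        marker = data[i + 1]
        if marker == 0xD9 or marker == 0xDA:     # end of image / start of scan
            break
        if marker == 0x01 or 0xD0 <= marker <= 0xD8:
            i += 2                               # standalone marker, no length field
            continue
        length = int.from_bytes(data[i + 2:i + 4], "big")
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 carrying a JUMBF box
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

Note that metadata like this is easy to strip (a screenshot discards it entirely), which is presumably why OpenAI is pairing it with the separate "provenance classifier" that inspects the pixels themselves.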

Because generative AI technology is so new, OpenAI says it’s still working to understand how effective its tools might be for political persuasion. To hedge against abuse, the firm doesn’t allow people to build applications for political campaigning and lobbying, and it doesn’t allow engineers to create chatbots that pretend to be real people, such as candidates. Like most tech firms, it doesn’t allow users to build applications for its ChatGPT platform that deter people from participating in the democratic process.

Read more of this story at Slashdot.

GTA 5 Actor Goes Nuclear On AI Company That Made Voice Chatbot of Him

Rich Stanton reports via PC Gamer: Ned Luke, the actor whose roles include GTA 5’s Michael De Santa, has gone off on an AI company that released an unlicensed voice chatbot based on the character, and succeeded in having the offending bot nuked from the internet. AI company WAME had tweeted a link to its Michael chatbot on January 14 along with the text: “Any GTA fans around here? Now take your gaming experience to another level. Try having a realistic voice conversation with Michael De Santa, the protagonist of GTA 5, right now!”

Unfortunately for WAME, it quickly attracted the attention of Luke, who does not seem like the type of man to hold back. "This is fucking bullshit WAME," says Luke (though I am definitely hearing all this in Michael's voice). "Absolutely nothing cool about ripping people off with some lame computer estimation of my voice. Don't waste your time on this garbage." Luke also tagged Rockstar Games and the SAG-AFTRA union, and the chatbot and the tweets promoting it have since been deleted. Fellow actors including Roger Clark weighed in with sympathy about how much this kind of stuff sucks, and our boy wasn't done by a long shot:

“I’m not worried about being replaced, Roger,” says Luke. “I just hate these fuckers, and am pissed as fuck that our shitty union is so damn weak that this will soon be an issue on legit work, not just some lame douchebag tryna make $$ off of our voices.” Luke is here referring to a recent SAG-AFTRA “ethical AI” agreement which has gone down with some of its members like a cup of cold sick. Musician Marilou A. Burnel (not affiliated with WAME) pops up to suggest that “creative people make remarkable things with AI, and the opposite is also true.” “Not using my voice they don’t,” says Luke. WAME issued a statement expressing their “profound understanding and concern.”

“This incident has highlighted the intricate interplay between the advancement of AI technology and the ethical and legal realms,” says WAME. “WAME commits to protecting the rights of voice actors and creators while advancing ethical AI practices.”

Read more of this story at Slashdot.

Atari Will Release a Mini Edition of Its 1979 Atari 400 (Which Had An 8-Bit MOS 6502 CPU)

A 1979 Atari 8-bit system re-released in a tiny form factor? Yep.
Retro Games Ltd. is releasing a "half-sized" version of Atari's very first home computer, the Atari 400, "emulating the whole 8-bit Atari range, including the 400/800, XL and XE series, and the 5200 home console." ("In 1979 Atari brought the computer age home," remembers a video announcement, saying the new device represents "the iconic computer now reimagined.")
More info from ExtremeTech:

For those of you unfamiliar with it, the Atari 400 and 800 were launched in 1979 as the company’s first attempt at a home computer that just happened to double as an incredible game system. That’s because, in addition to a faster variant of the excellent 8-bit MOS 6502 CPU found in the Apple II and Commodore PET, they also included Atari’s dedicated ANTIC, GTIA, and POKEY coprocessors for graphics and sound, making the Atari 400 and 800 the first true gaming PCs…

If it’s as good as the other Retro Games systems, the [new] 400Mini will count as another feather in the cap for Atari Interactive’s resurgence following its excellent Atari50 compilation, reissued Atari 2600+ console, and acquisitions of key properties including Digital Eclipse, MobyGames, and AtariAge.

The 2024 version — launching in the U.K. March 28th — will boast high-definition HDMI output at 720p 50 or 60Hz, along with five USB ports. More details from Retro Games Ltd.
Also included is THECXSTICK — a superb recreation of the classic Atari CX-40 joystick, with an additional seven seamlessly integrated function buttons. Play one of the 25 included classic Atari games, selected from a simple-to-use carousel, including all-time greats such as Berzerk, Missile Command, Lee, Millipede, Miner 2049er, M.U.L.E. and Star Raiders II, or play the games you own from a USB stick. Plus save and resume your game at any time, or rewind by up to 30 seconds to help you finish those punishingly difficult classics!
Thanks to long-time Slashdot reader elfstones for sharing the article.

Read more of this story at Slashdot.

Valve Takes Action Against Team Fortress 2, Portal Fan Projects After Years of Leniency

Dustin Bailey reports via GamesRadar: Valve has suddenly taken action against multiple fan games, stunning a fandom that had grown used to the company’s freewheeling stance on unofficial community projects. One of those projects was Team Fortress: Source 2, an effort to bring the beloved multiplayer game back to life in a more modern engine using the S&box project. The project had already run into development difficulties and had essentially been on hiatus since September 2023, but now Valve has issued a DMCA takedown against it, effectively serving as the “nail in the coffin” for the project, as the devs explain on X. […]

The other project is Portal 64, a demake of the 2007 puzzle game that ports it to run on an actual N64. Developer James Lambert had been working on the project for years, but it gained substantial attention this past December with the release of First Slice, a playable demo featuring the first 13 test chambers. It doesn't appear that Valve issued a formal DMCA against Portal 64, but the end result is the same. In a Patreon post (which was eventually made public on X), Lambert said he had "been in communication with Valve about the future of the project. There is some news and it isn't good. Because the project depends on Nintendo's proprietary libraries, they have asked me to take the project down."

I’m not fully clear on what “proprietary libraries” means here, but it seems likely that Portal 64 was developed using some variation of Nintendo’s official development tools for N64, which were never officially released to the public. Open-source alternatives to those tools do exist, but might not have been in use here. […] Given Valve’s historic acceptance of fan games, the moves have been pretty shocking to the community.

Read more of this story at Slashdot.

New Paper on ‘MOND’ Argues That Gravity Changes At Very Low Accelerations

porkchop_d_clown (Slashdot reader #39,923) writes:
MOND (MOdified Newtonian Dynamics) is a hypothesis that Newton's law of gravity is incorrect under some conditions. Now a paper claims that pairs of widely separated binary stars do show a deviation from Newtonian dynamics, arguing that, at very low accelerations, gravity is stronger than the law predicts.
Phys.org writes that the study “reinforces the evidence for modified gravity that was previously reported in 2023 from an analysis of the orbital motions of gravitationally bound, widely separated (or long-period) binary stars, known as wide binaries.”
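For context, this is the standard MOND relation that wide-binary tests probe (textbook MOND phenomenology, not a result from the new paper): Newtonian gravity is recovered at high accelerations, while below the characteristic scale a_0 the effective acceleration is boosted.

```latex
% Standard MOND: the Newtonian acceleration a_N is related to the
% observed acceleration a through an interpolating function mu:
\mu\!\left(\frac{a}{a_0}\right) a = a_N = \frac{GM}{r^2},
\qquad \mu(x) \to 1 \ (x \gg 1), \qquad \mu(x) \to x \ (x \ll 1)

% Deep-MOND limit (a_N << a_0): gravity is stronger than Newton predicts.
a \approx \sqrt{a_N \, a_0},
\qquad a_0 \approx 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}}
```

Wide binaries separated by thousands of AU have internal accelerations near a_0, which is why they are treated as a clean test: there is too little dark matter on those scales to mimic the effect.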

But RockDoctor (Slashdot reader #15,477) calls the hypothesis “very much disputed.”
YouTubing-astrophysicist Dr Becky considered this report a couple of months ago (2023-Nov-09), under the title "HUGE blow for alternate theory of gravity MOND". At the very least, astrophysicists and cosmologists are deeply undecided whether this data supports or discourages MOND.

Last week, I updated my annual count of MOND and other “alternative gravity” publications. While research on MOND (and others) continues, and any “suppression” the tin-foil-hat brigade want to scream about is ineffective, it remains an unpopular (not-equal-to “suppressed”) field. Generally, astronomical publication counts are increasing, and MOND sticks with that trend. If anything is becoming more popular, it’s the “MOG” type of “MOdified Gravity”.

Read more of this story at Slashdot.

Bill Gates Interviews Sam Altman, Who Predicts Fastest Tech Revolution ‘By Far’

This week on his podcast, Bill Gates asked Sam Altman how his team was doing after his (temporary) ouster. Altman replied: "a lot of people have remarked on the fact that the team has never felt more productive or more optimistic or better. So, I guess that's like a silver lining of all of this. In some sense, this was like a real moment of growing up for us, we are very motivated to become better, and sort of to become a company ready for the challenges in front of us."

The rest of their conversation was recorded pre-ouster, but it gave fascinating glimpses of the possible future of AI, including the prospect of very speedy improvements. Altman suggests it will be easier to understand how a creative work gets "encoded" in an AI than it would be in a human brain. "There has been some very good work on interpretability, and I think there will be more over time… The little bits we do understand have, as you'd expect, been very helpful in improving these things. We're all motivated to really understand them, scientific curiosity aside, but the scale of these is so vast…."

BILL GATES: I’m pretty sure, within the next five years, we’ll understand it. In terms of both training efficiency and accuracy, that understanding would let us do far better than we’re able to do today.
SAM ALTMAN: A hundred percent. You see this in a lot of the history of technology where someone makes an empirical discovery. They have no idea what’s going on, but it clearly works. Then, as the scientific understanding deepens, they can make it so much better.

BILL GATES: Yes, in physics, biology, it’s sometimes just messing around, and it’s like, whoa — how does this actually come together…? When you look at the next two years, what do you think some of the key milestones will be?

SAM ALTMAN: Multimodality will definitely be important.
BILL GATES: Which means speech in, speech out?
SAM ALTMAN: Speech in, speech out. Images. Eventually video. Clearly, people really want that…. [B]ut maybe the most important areas of progress will be around reasoning ability. Right now, GPT-4 can reason in only extremely limited ways. Also reliability. If you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn’t always know which one, and you’d like to get the best response of 10,000 each time, and so that increase in reliability will be important.

Customizability and personalization will also be very important. People want very different things out of GPT-4: different styles, different sets of assumptions. We’ll make all that possible, and then also the ability to have it use your own data. The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources, all of that. Those will be some of the most important areas of improvement.
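Altman's "best response of 10,000" point maps onto what practitioners call best-of-n sampling: draw many candidates, then pick one. A minimal sketch using the OpenAI Python client is below; the score() ranker is a placeholder of my own invention (the hard, unsolved part Altman is describing is exactly there), not anything from the interview.

```python
from openai import OpenAI  # assumes the v1 openai-python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def best_of_n(prompt: str, n: int = 10) -> str:
    """Sample n candidate answers, then keep the highest-scoring one."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        n=n,                # ask the API for n completions in one call
        temperature=1.0,    # keep some diversity across candidates
    )
    candidates = [c.message.content for c in resp.choices]
    return max(candidates, key=score)

def score(answer: str) -> float:
    """Placeholder ranking heuristic (hypothetical, for illustration only).
    Real systems use a reward model, a verifier, or majority voting."""
    return float(len(answer))  # stand-in: prefer more detailed answers

# Example: best_of_n("Explain why the sky is blue.", n=5)
```

The reliability gain Altman wants is precisely a score() that reliably identifies the good candidate, which today is an open research problem.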
Areas where Altman sees potential are healthcare, education, and especially computer programming. “If you make a programmer three times more effective, it’s not just that they can do three times more stuff, it’s that they can — at that higher level of abstraction, using more of their brainpower — they can now think of totally different things. It’s like, going from punch cards to higher level languages didn’t just let us program a little faster — it let us do these qualitatively new things. And we’re really seeing that…

“I think it’s worth always putting it in context of this technology that, at least for the next five or ten years, will be on a very steep improvement curve. These are the stupidest the models will ever be.”

He predicts the fastest technology revolution "by far," worrying about "the speed with which society is going to have to adapt, and that the labor market will change." But he also adds that "We started investing a little bit in robotics companies. On the physical hardware side, there's finally, for the first time that I've ever seen, really exciting new platforms being built there."

And at some point Altman tells Gates he’s optimistic that AI could contribute to helping humans get along with each other.

Read more of this story at Slashdot.

Post-Quantum Encryption Algorithm Kyber Patched After 'KyberSlash' Side-Channel Attacks Discovered

jd (Slashdot reader #1,658) shared this story from BleepingComputer. The article notes that "Multiple implementations of the Kyber key encapsulation mechanism for quantum-safe encryption are vulnerable to a set of flaws collectively referred to as KyberSlash, which could allow the recovery of secret keys."

jd explains that Crystals-Kyber "was chosen as the U.S. government's post-quantum cryptography standard last year, but a side-channel attack has been identified. In the article, however, NIST says that this is an implementation-specific attack (against the reference implementation) and not a vulnerability in Kyber itself."
From the article:
CRYSTALS-Kyber is the official implementation of the Kyber key encapsulation mechanism (KEM), a quantum-safe algorithm (QSA), and part of the CRYSTALS (Cryptographic Suite for Algebraic Lattices) suite of algorithms. It is designed for general encryption… The KyberSlash flaws are timing-based attacks arising from how Kyber performs certain division operations in the decapsulation process, allowing attackers to analyze the execution time and derive secrets that could compromise the encryption. If a service implementing Kyber allows multiple operation requests towards the same key pair, an attacker can measure timing differences and gradually compute the secret key…

In a KyberSlash1 demo on a Raspberry Pi system, the researchers recovered Kyber’s secret key from decryption timings in two out of three attempts…
On December 30, KyberSlash2 was patched following its discovery and responsible reporting by Prasanna Ravi, a researcher at the Nanyang Technological University in Singapore, and Matthias Kannwischer, who works at the Quantum Safe Migration Center.
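The mechanics are worth spelling out: the leak comes from divisions by KYBER_Q performed on secret data, and the fix replaces the division with a multiply-and-shift by a precomputed reciprocal, which compiles to timing-independent instructions on common CPUs. Below is a Python sketch of the idea, with constants in the style of the patched reference code (1665 and 80635 for q = 3329). Python itself makes no timing guarantees, so this only illustrates that the two computations are arithmetically equivalent; real constant-time behavior is a property of the compiled C.

```python
KYBER_Q = 3329  # Kyber's prime modulus

def msg_bit_division(t: int) -> int:
    """Rounding as in the original poly_tomsg-style code: an integer
    division whose timing can depend on the secret operand on some CPUs."""
    return (((t << 1) + KYBER_Q // 2) // KYBER_Q) & 1

def msg_bit_division_free(t: int) -> int:
    """Division-free variant in the style of the KyberSlash patches:
    multiply by a precomputed reciprocal (80635 ~ 2^28 / 3329) and shift,
    so the work done does not vary with the secret value."""
    t = (t << 1) + 1665
    t *= 80635
    t >>= 28
    return t & 1

# The two forms agree for every coefficient value in [0, q):
assert all(msg_bit_division(t) == msg_bit_division_free(t)
           for t in range(KYBER_Q))
```

The exhaustive check over all q inputs is cheap here, and the same style of verification is how such patches are typically validated against the original semantics.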

Read more of this story at Slashdot.

Android 15 Could Bring Widgets Back To the Lock Screen

After removing the feature with Android 5.0 in 2014, Google appears to be bringing back lock screen widgets in the next version of Android. "There haven't been any indications since then that Google would ever bring this feature back," notes Android Authority. "But after Apple introduced widgets to the iPhone lock screen in iOS 16, many speculated that it was only a matter of time." From the report: As for how they might do that, there seem to be two different approaches that are being developed. The first one involves the creation of a new "communal" space — an area on the lock screen that might be accessed by swiping inward from the right. Although the communal space is still unfinished, I was able to activate it in the new Android 14 QPR2 Beta 3 update.

Once I activated the communal space, a large gray bar appeared on the right side of the lock screen on my Pixel device. After swiping inward, a pencil icon appeared on the top left of the screen. Tapping this icon opened a widget selector that allowed me to add widgets from Google Calendar, Google Clock, and the Google App, but I wasn't able to add widgets from most of my other apps. This is because the widget category needs to be set to KEYGUARD in order for it to appear in this selector. KEYGUARD is a category Google introduced in Android 4.2 Jelly Bean that very few apps utilize today since the lock screen hasn't supported showing widgets in nearly a decade.

After adding the widgets for Google Clock and Google Finance, I returned to the communal space by swiping inward from the right on the lock screen. The widgets were indeed shown in this space without me needing to unlock the device. However, the lock screen UI was shown on top of the widgets, making things difficult to see. Clearly, this feature is still a work in progress in the current beta. […]
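For developers wondering why their widgets are missing from the new selector: eligibility is declared in the app's appwidget-provider XML resource via the android:widgetCategory attribute. Here is a small self-contained Python sketch that checks a provider resource for the keyguard flag; the sample XML is illustrative, not taken from any particular app.

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "http://schemas.android.com/apk/res/android"

# Illustrative appwidget-provider resource (e.g. res/xml/my_widget_info.xml).
# The widgetCategory attribute is what the lock screen selector filters on;
# the "keyguard" value has existed since Android 4.2.
SAMPLE = """\
<appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android"
    android:minWidth="180dp"
    android:minHeight="110dp"
    android:widgetCategory="keyguard|home_screen" />
"""

def declares_keyguard(provider_xml: str) -> bool:
    """Return True if the widget opts in to lock screen placement."""
    root = ET.fromstring(provider_xml)
    category = root.get(f"{{{ANDROID_NS}}}widgetCategory", "")
    return "keyguard" in category.split("|")

print(declares_keyguard(SAMPLE))  # True
```

Since almost no modern app sets this flag (there was nowhere for such widgets to appear for nearly a decade), the selector's near-empty state in the beta is expected.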

While it’s possible this communal space won’t be coming to all devices, there’s another way that Google could bring widgets back to the lock screen for Android phones: leveraging At a Glance. If you aren’t familiar, Pixel phones have a widget on the home screen and lock screen called At a Glance. The interesting thing about At a Glance is that it isn’t actually a widget but rather a “custom element behaving like a widget,” according to developer Kieron Quinn. Under the hood, At a Glance is built on top of Smartspace, the API that is responsible for creating the various cards you can swipe through. Although Smartspace supports creating a variety of card types, it currently can’t handle RemoteViews, the API on which Android app widgets are built. That could change soon, though, as Google is working on including RemoteViews into the Smartspace API.

It’s unclear whether this will allow raw widgets from all apps to be included in At a Glance, since it’s also possible that Google is only implementing this so it has more freedom in building new cards. Either way, this new addition to the Smartspace API would supercharge the At a Glance widget in Android 15, and we’re excited to see what Google has in store for us.

Read more of this story at Slashdot.