California Passes Bill Requiring Easier Data Sharing Opt Outs

Most of the attention today has been focused on California’s controversial “kill switch” AI safety bill, which passed the California State Assembly by a 45-11 vote. However, California legislators passed another tech bill this week that requires internet browsers and mobile operating systems to offer a simple tool for consumers to easily opt out of data sharing and selling for targeted advertising. Slashdot reader awwshit shares a report from The Record: The state’s Senate passed the landmark legislation after the Assembly approved it late Wednesday. The Senate then added amendments to the bill, which now goes back to the Assembly for final sign-off before it is sent to the governor’s desk, a process Matt Schwartz, a policy analyst at Consumer Reports, called a “formality.” California, long a bellwether for privacy regulation, now sets an example for other states, which could offer the same protections and in doing so dramatically disrupt the online advertising ecosystem, according to Schwartz.

“If folks use it, [the new tool] could severely impact businesses that make their revenue from monetizing consumers’ data,” Schwartz said in an interview with Recorded Future News. “You could go from relatively small numbers of individuals taking advantage of this right now to potentially millions and that’s going to have a big impact.” As it stands, many Californians don’t know they have the right to opt out because the option is invisible on their browsers, a fact which Schwartz said has “artificially suppressed” the existing regulation’s intended effects. “It shouldn’t be that hard to send the universal opt out signal,” Schwartz added. “This will require [browsers and mobile operating systems] to make that setting easy to use and find.”

Read more of this story at Slashdot.

South Korea Faces Deepfake Porn ‘Emergency’

An anonymous reader quotes a report from the BBC: South Korea’s president has urged authorities to do more to “eradicate” the country’s digital sex crime epidemic, amid a flood of deepfake pornography targeting young women. Authorities, journalists and social media users recently identified a large number of chat groups where members were creating and sharing sexually explicit “deepfake” images — including some of underage girls. Deepfakes are generated using artificial intelligence, and often combine the face of a real person with a fake body. South Korea’s media regulator is holding an emergency meeting in the wake of the discoveries.

The spate of chat groups, linked to individual schools and universities across the country, was discovered on the social media app Telegram over the past week. Users, mainly teenage students, would upload photos of people they knew — both classmates and teachers — and other users would then turn them into sexually explicit deepfake images. The discoveries follow the arrest of the Russian-born founder of Telegram, Pavel Durov, on Saturday, after it was alleged that child pornography, drug trafficking and fraud were taking place on the encrypted messaging app. South Korean President Yoon Suk Yeol on Tuesday instructed authorities to “thoroughly investigate and address these digital sex crimes to eradicate them.”

“Recently, deepfake videos targeting an unspecified number of people have been circulating rapidly on social media,” President Yoon said at a cabinet meeting. “The victims are often minors and the perpetrators are mostly teenagers.” To build a “healthy media culture,” President Yoon said young men needed to be better educated. “Although it is often dismissed as ‘just a prank,’ it is clearly a criminal act that exploits technology to hide behind the shield of anonymity,” he said.

The Guardian notes that making sexually explicit deepfakes with the intention of distributing them is punishable by up to five years in prison or a fine of $37,500.

Further reading: 1 in 10 Minors Say Their Friends Use AI to Generate Nudes of Other Kids, Survey Finds (Source: 404 Media)
