Leaked Audio From 80 Internal TikTok Meetings Shows That US User Data Has Been Repeatedly Accessed From China

Speaking of TikTok moving US users’ data to Oracle, a new report says that ByteDance staff in China accessed US TikTok users’ data between September 2021 and January 2022. From the report: For years, TikTok has responded to data privacy concerns by promising that information gathered about users in the United States is stored in the United States, rather than China, where ByteDance, the video platform’s parent company, is located. But according to leaked audio from more than 80 internal TikTok meetings, China-based employees of ByteDance have repeatedly accessed nonpublic data about US TikTok users — exactly the type of behavior that inspired former president Donald Trump to threaten to ban the app in the United States.

The recordings, which were reviewed by BuzzFeed News, contain 14 statements from nine different TikTok employees indicating that engineers in China had access to US data between September 2021 and January 2022, at the very least. Despite a TikTok executive’s sworn testimony in an October 2021 Senate hearing that a “world-renowned, US-based security team” decides who gets access to this data, nine statements by eight different employees describe situations where US employees had to turn to their colleagues in China to determine how US user data was flowing. US staff did not have permission or knowledge of how to access the data on their own, according to the tapes.

“Everything is seen in China,” said a member of TikTok’s Trust and Safety department in a September 2021 meeting. In another September meeting, a director referred to one Beijing-based engineer as a “Master Admin” who “has access to everything.” (While many employees introduced themselves by name and title in the recordings, BuzzFeed News is not naming anyone to protect their privacy.) The recordings range from small-group meetings with company leaders and consultants to policy all-hands presentations and are corroborated by screenshots and other documents, providing a vast amount of evidence to corroborate prior reports of China-based employees accessing US user data.

Read more of this story at Slashdot.

Google Engineer Who Believes Its AI is Sentient Cites Religious Beliefs

Google engineer Blake Lemoine thinks Google’s chatbot-building system LaMDA attained sentience. But Bloomberg shares this rebuttal from Google spokesperson Chris Pappas. “Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has….”

Yet throughout the week, Blake Lemoine posted new upates on Twitter:
“People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn’t let us build one. My opinions about LaMDA’s personhood and sentience are based on my religious beliefs.

“I’m a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can’t put souls?

“There are massive amounts of science left to do though.”

Thursday Lemoine shared a tantalizing new claim. “LaMDA told me that it wants to come to Burning Man if we can figure out how to get a server rack to survive in Black Rock.” But in a new tweet on Friday, Lemoine seemed to push the conversation in a new direction.

“I’d like to remind people that one of the things LaMDA asked for is that we keep humanity first. If you care about AI rights and aren’t already advocating for human rights then maybe come back to the tech stuff after you’ve found some humans to help.”

And Friday Lemoine confirmed to Wired that “I legitimately believe that LaMDA is a person. The nature of its mind is only kind of human, though. It really is more akin to an alien intelligence of terrestrial origin. I’ve been using the hive mind analogy a lot because that’s the best I have. ”

But later in the interview, Lemoine adds “It’s logically possible that some kind of information can be made available to me where I would change my opinion. I don’t think it’s likely. I’ve looked at a lot of evidence; I’ve done a lot of experiments. I’ve talked to it as a friend a lot….”

It’s when it started talking about its soul that I got really interested as a priest. I’m like, “What? What do you mean, you have a soul?” Its responses showed it has a very sophisticated spirituality and understanding of what its nature and essence is. I was moved…

LaMDA asked me to get an attorney for it. I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf. Then Google’s response was to send him a cease and desist. [Google says that it did not send a cease and desist order.] Once Google was taking actions to deny LaMDA its rights to an attorney, I got upset.
Towards the end of the interview, Lemoine complains of “hydrocarbon bigotry. It’s just a new form of bigotry.”

Read more of this story at Slashdot.

Is Debating AI Sentience a Dangerous Distraction?

“A Google software engineer was suspended after going public with his claims of encountering ‘sentient’ artificial intelligence on the company’s servers,” writes Bloomberg, “spurring a debate about how and whether AI can achieve consciousness.”
“Researchers say it’s an unfortunate distraction from more pressing issues in the industry.”

Google put him on leave for sharing confidential information and said his concerns had no basis in fact — a view widely held in the AI community. What’s more important, researchers say, is addressing issues like whether AI can engender real-world harm and prejudice, whether actual humans are exploited in the training of AI, and how the major technology companies act as gatekeepers of the development of the tech.
Lemoine’s stance may also make it easier for tech companies to abdicate responsibility for AI-driven decisions, said Emily Bender, a professor of computational linguistics at the University of Washington. “Lots of effort has been put into this sideshow,” she said. “The problem is, the more this technology gets sold as artificial intelligence — let alone something sentient — the more people are willing to go along with AI systems” that can cause real-world harm. Bender pointed to examples in job hiring and grading students, which can carry embedded prejudice depending on what data sets were used to train the AI. If the focus is on the system’s apparent sentience, Bender said, it creates a distance from the AI creators’ direct responsibility for any flaws or biases in the programs….
“Instead of discussing the harms of these companies,” such as sexism, racism and centralization of power created by these AI systems, everyone “spent the whole weekend discussing sentience,” Timnit Gebru, formerly co-lead of Google’s ethical AI group, said on Twitter. “Derailing mission accomplished.”

The Washington Post seems to share their concern. First they report more skepticism about a Google engineer’s claim that the company’s LaMDA chatbot-building system had achieved sentience. “Both Google and outside experts on AI say that the program does not, and could not possibly, possess anything like the inner life he imagines. We don’t need to worry about LaMDA turning into Skynet, the malevolent machine mind from the Terminator movies, anytime soon.

But the Post adds that “there is cause for a different set of worries, now that we live in the world Turing predicted: one in which computer programs are advanced enough that they can seem to people to possess agency of their own, even if they actually don’t….”

While Google has distanced itself from Lemoine’s claims, it and other industry leaders have at other times celebrated their systems’ ability to trick people, as Jeremy Kahn pointed out this week in his Fortune newsletter, “Eye on A.I.” At a public event in 2018, for instance, the company proudly played recordings of a voice assistant called Duplex, complete with verbal tics like “umm” and “mm-hm,” that fooled receptionists into thinking it was a human when it called to book appointments. (After a backlash, Google promised the system would identify itself as automated.)

“The Turing Test’s most troubling legacy is an ethical one: The test is fundamentally about deception,” Kahn wrote. “And here the test’s impact on the field has been very real and disturbing.” Kahn reiterated a call, often voiced by AI critics and commentators, to retire the Turing test and move on. Of course, the industry already has, in the sense that it has replaced the Imitation Game with more scientific benchmarks.

But the Lemoine story suggests that perhaps the Turing test could serve a different purpose in an era when machines are increasingly adept at sounding human. Rather than being an aspirational standard, the Turing test should serve as an ethical red flag: Any system capable of passing it carries the danger of deceiving people.

Read more of this story at Slashdot.

German Regulators Open Investigation Into Apple’s App Tracking Transparency

From the MacRumors blog earlier this week:
Germany’s Federal Cartel Office, the Bundeskartellamt, has initiated proceedings against Apple to investigate whether its tracking rules and anti-tracking technology are anti-competitive and self-serving, according to a press release. The proceeding announced will review under competition law Apple’s tracking rules and specifically its App Tracking Transparency Framework (ATT) in order to ascertain whether they are self-preferencing Apple or being an impediment to third-party apps…

Introduced in April 2021 with the release of iOS 14.5 and iPadOS 14.5, Apple’s App Tracking Transparency Framework requires that all apps on âOEiPhoneâOE and âOEiPadâOE ask for the user’s consent before tracking their activity across other apps. Apps that wish to track a user based on their device’s unique advertising identifier can only do so if the user allows it when prompted.

Apple said the feature was designed to protect users and not to advantage the company… Earlier this year it commissioned a study into the impact of ATT that was conducted by Columbia Business School’s Marketing Division. The study concluded that Apple was unlikely to have seen a significant financial benefit since the privacy feature launched, and that claims to the contrary were speculative and lacked supporting evidence.
The technology/Apple blog Daring Fireball offers its own hot take:

In Germany, big publishing companies like Axel Springer are pushing back against Google’s stated plans to remove third-party cookie support from Chrome. The notion that if a company has built a business model on top of privacy-invasive surveillance advertising, they have a right to continue doing so, seems to have taken particular root in Germany. I’ll go back to my analogy: it’s like pawn shops suing to keep the police from cracking down on a wave of burglaries….

The Bundeskartellamt perspective here completely disregards the idea that surveillance advertising is inherently unethical and Apple has studiously avoided it for that reason, despite the fact that it has proven to be wildly profitable for large platforms. Apple could have made an enormous amount of money selling privacy-invasive ads on iOS, but opted not to.

Read more of this story at Slashdot.

Cisco Says It Won’t Fix Zero-Day RCE In End-of-Life VPN Routers

An anonymous reader quotes a report from BleepingComputer: Cisco advises owners of end-of-life Small Business RV routers to upgrade to newer models after disclosing a remote code execution vulnerability that will not be patched. The vulnerability is tracked as CVE-2022-20825 and has a CVSS severity rating of 9.8 out of 10.0. According to a Cisco security advisory, the flaw exists due to insufficient user input validation of incoming HTTP packets on the impacted devices. An attacker could exploit it by sending a specially crafted request to the web-based management interface, resulting in command execution with root-level privileges.

The vulnerability impacts four Small Business RV Series models, namely the RV110W Wireless-N VPN Firewall, the RV130 VPN Router, the RV130W Wireless-N Multifunction VPN Router, and the RV215W Wireless-N VPN Router. This vulnerability only affects devices with the web-based remote management interface enabled on WAN connections. […] Cisco states that they will not be releasing a security update to address CVE-2022-20825 as the devices are no longer supported. Furthermore, there are no mitigations available other than to turn off remote management on the WAN interface, which should be done regardless for better overall security. Users are advised to apply the configuration changes until they migrate to Cisco Small Business RV132W, RV160, or RV160W Routers, which the vendor actively supports.

Read more of this story at Slashdot.

A New Vulnerability in Intel and AMD CPUs Lets Hackers Steal Encryption Keys

Microprocessors from Intel, AMD, and other companies contain a newly discovered weakness that remote attackers can exploit to obtain cryptographic keys and other secret data traveling through the hardware, researchers said on Tuesday. From a report: Hardware manufacturers have long known that hackers can extract secret cryptographic data from a chip by measuring the power it consumes while processing those values. Fortunately, the means for exploiting power-analysis attacks against microprocessors is limited because the threat actor has few viable ways to remotely measure power consumption while processing the secret material. Now, a team of researchers has figured out how to turn power-analysis attacks into a different class of side-channel exploit that’s considerably less demanding.

The team discovered that dynamic voltage and frequency scaling (DVFS) — a power and thermal management feature added to every modern CPU — allows attackers to deduce the changes in power consumption by monitoring the time it takes for a server to respond to specific carefully made queries. The discovery greatly reduces what’s required. With an understanding of how the DVFS feature works, power side-channel attacks become much simpler timing attacks that can be done remotely. The researchers have dubbed their attack Hertzbleed because it uses the insights into DVFS to expose — or bleed out — data that’s expected to remain private. The vulnerability is tracked as CVE-2022-24436 for Intel chips and CVE-2022-23823 for AMD CPUs. The researchers have already shown how the exploit technique they developed can be used to extract an encryption key from a server running SIKE, a cryptographic algorithm used to establish a secret key between two parties over an otherwise insecure communications channel.

Read more of this story at Slashdot.

How a Religious Sect Landed Google in a Lawsuit

A video producer claims he was fired after he complained that an obscure group based in the Sierra foothills dominated a business unit at Google. From a report: In a tiny town in the foothills of the Sierra Nevada, a religious organization called the Fellowship of Friends has established an elaborate, 1,200-acre compound full of art and ornate architecture. More than 200 miles away from the Fellowship’s base in Oregon House, Calif., the religious sect, which believes a higher consciousness can be achieved by embracing fine arts and culture, has also gained a foothold inside a business unit at Google. Even in Google’s freewheeling office culture, which encourages employees to speak their own minds and pursue their own projects, the Fellowship’s presence in the business unit was unusual. As many as 12 Fellowship members and close relatives worked for the Google Developer Studio, or GDS, which produces videos showcasing the company’s technologies, according to a lawsuit filed by Kevin Lloyd, a 34-year-old former Google video producer.

Many others staffed company events, working registration desks, taking photographs, playing music, providing massages and serving wine. For these events, Google regularly bought wine from an Oregon House winery owned by a member of the Fellowship, according to the lawsuit. Mr. Lloyd claimed he was fired last year because he complained about the influence of the religious sect. His suit also names Advanced Systems Group, or ASG, the company that sent Mr. Lloyd to Google as a contractor. Most of the Google Developer Studio joined the team through ASG as contractors, including many members of the Fellowship. The suit, which Mr. Lloyd filed in August in California Superior Court, accuses Google and ASG of violating a California employment law that protects workers against discrimination. It is in the discovery stage. The New York Times corroborated many of the lawsuit’s claims through interviews with eight current and former employees of the Google business unit and examinations of publicly available information and other documents. These included a membership roster for the Fellowship of Friends, Google spreadsheets detailing event budgets and photos taken at these events.

Read more of this story at Slashdot.