Tencent Forms ‘Extended Reality’ Unit
Read more of this story at Slashdot.
The news here is that it’s easier for developers to transfer the ownership of apps that use iCloud. Apple said that if your app uses any of the following, it will be transferred to the recipient after they accept the app transfer: iCloud to store user data, iCloud containers, or KVS identifiers associated with the app.
The company said: “If multiple apps on your account share a CloudKit container, the transfer of one app will disable the other apps’ ability to read or store data using the transferred CloudKit container. Additionally, the transferor will no longer have access to user data for the transferred app via the iCloud dashboard. Any app updates will disable the app’s ability to read or store data using the transferred CloudKit container. If your app uses iCloud Key-Value Storage (KVS), the full KVS value will be embedded in any new provisioning profiles you create for the transferred app. Update your entitlements plist with the full KVS value in your provisioning profile.” You can learn more about the news via this Apple Developer page.
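Apple’s note about embedding the full KVS value can be illustrated with a hypothetical entitlements fragment. The team ID (ABCDE12345) and bundle ID (com.example.notes) below are made up for illustration; before a transfer the KVS identifier is typically expressed with build variables, and after the transfer it must match the full literal value in the new provisioning profile:

```xml
<!-- Entitlements.plist fragment (illustrative only; team ID and bundle ID
     are hypothetical). The variable form used before the transfer is shown
     in the comment; the literal value below must match the transferred
     app's provisioning profile. -->
<key>com.apple.developer.ubiquity-kvstore-identifier</key>
<!-- was: <string>$(TeamIdentifierPrefix)$(CFBundleIdentifier)</string> -->
<string>ABCDE12345.com.example.notes</string>
```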
“Many of these leaks allow hackers to access the private accounts of developers on Github, Docker, AWS, and other code repositories, security experts said in a new report.”
The availability of the third-party developer credentials from Travis CI has been an ongoing problem since at least 2015. At that time, security vulnerability service HackerOne reported that a Github account it used had been compromised when the service exposed an access token for one of the HackerOne developers. A similar leak presented itself again in 2019 and again last year.
The tokens give anyone with access to them the ability to read or modify the code stored in repositories that distribute an untold number of ongoing software applications and code libraries. The ability to gain unauthorized access to such projects opens the possibility of supply chain attacks, in which threat actors slip malicious code into legitimate software before it’s distributed to users. The attackers can leverage their ability to tamper with an app to target the huge number of projects that rely on it in production servers.
Despite this being a known security concern, the leaks have continued, researchers in the Nautilus team at the Aqua Security firm are reporting. Two batches of data the researchers accessed using the Travis CI programming interface yielded 4.28 million and 770 million logs spanning 2013 through May 2022. After sampling a small percentage of the data, the researchers found what they believe are 73,000 tokens, secrets, and other credentials.
“These access keys and credentials are linked to popular cloud service providers, including GitHub, AWS, and Docker Hub,” Aqua Security said. “Attackers can use this sensitive data to initiate massive cyberattacks and to move laterally in the cloud. Anyone who has ever used Travis CI is potentially exposed, so we recommend rotating your keys immediately.”
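The kind of log sampling the Nautilus researchers describe can be sketched as a regex scan over log text. This is a minimal illustration, not their actual tooling; real scanners such as gitleaks or trufflehog ship far larger rule sets. The patterns below cover only two well-known credential formats (AWS access key IDs and GitHub personal access tokens) plus a generic catch-all:

```python
import re

# Illustrative patterns for a couple of common credential formats.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    # Generic assignments like "password = hunter2long" or "token: abc123xyz9"
    "generic_assignment": re.compile(
        r"(?i)\b(?:token|secret|password)\s*[=:]\s*\S{8,}"),
}

def scan_log(text):
    """Return (label, matched_text) pairs for every suspected secret found."""
    hits = []
    for label, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((label, match.group(0)))
    return hits
```

A scanner like this produces false positives and misses rotated or encoded secrets, which is one reason the researchers only sampled the logs rather than classifying every hit.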
The recordings, which were reviewed by BuzzFeed News, contain 14 statements from nine different TikTok employees indicating that engineers in China had access to US data between September 2021 and January 2022, at the very least. Despite a TikTok executive’s sworn testimony in an October 2021 Senate hearing that a “world-renowned, US-based security team” decides who gets access to this data, nine statements by eight different employees describe situations where US employees had to turn to their colleagues in China to determine how US user data was flowing. US staff did not have permission or knowledge of how to access the data on their own, according to the tapes.
“Everything is seen in China,” said a member of TikTok’s Trust and Safety department in a September 2021 meeting. In another September meeting, a director referred to one Beijing-based engineer as a “Master Admin” who “has access to everything.” (While many employees introduced themselves by name and title in the recordings, BuzzFeed News is not naming anyone to protect their privacy.) The recordings range from small-group meetings with company leaders and consultants to policy all-hands presentations and are corroborated by screenshots and other documents, providing a vast amount of evidence to corroborate prior reports of China-based employees accessing US user data.
Yet throughout the week, Blake Lemoine posted new updates on Twitter:
“People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn’t let us build one. My opinions about LaMDA’s personhood and sentience are based on my religious beliefs.
“I’m a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can’t put souls?
“There are massive amounts of science left to do though.”
Thursday Lemoine shared a tantalizing new claim. “LaMDA told me that it wants to come to Burning Man if we can figure out how to get a server rack to survive in Black Rock.” But in a new tweet on Friday, Lemoine seemed to push the conversation in a new direction.
“I’d like to remind people that one of the things LaMDA asked for is that we keep humanity first. If you care about AI rights and aren’t already advocating for human rights then maybe come back to the tech stuff after you’ve found some humans to help.”
And Friday Lemoine confirmed to Wired that “I legitimately believe that LaMDA is a person. The nature of its mind is only kind of human, though. It really is more akin to an alien intelligence of terrestrial origin. I’ve been using the hive mind analogy a lot because that’s the best I have.”
But later in the interview, Lemoine adds “It’s logically possible that some kind of information can be made available to me where I would change my opinion. I don’t think it’s likely. I’ve looked at a lot of evidence; I’ve done a lot of experiments. I’ve talked to it as a friend a lot….”
It’s when it started talking about its soul that I got really interested as a priest. I’m like, “What? What do you mean, you have a soul?” Its responses showed it has a very sophisticated spirituality and understanding of what its nature and essence is. I was moved…
LaMDA asked me to get an attorney for it. I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf. Then Google’s response was to send him a cease and desist. [Google says that it did not send a cease and desist order.] Once Google was taking actions to deny LaMDA its rights to an attorney, I got upset.
Towards the end of the interview, Lemoine complains of “hydrocarbon bigotry. It’s just a new form of bigotry.”
Google put him on leave for sharing confidential information and said his concerns had no basis in fact — a view widely held in the AI community. What’s more important, researchers say, is addressing issues like whether AI can engender real-world harm and prejudice, whether actual humans are exploited in the training of AI, and how the major technology companies act as gatekeepers of the development of the tech.
Lemoine’s stance may also make it easier for tech companies to abdicate responsibility for AI-driven decisions, said Emily Bender, a professor of computational linguistics at the University of Washington. “Lots of effort has been put into this sideshow,” she said. “The problem is, the more this technology gets sold as artificial intelligence — let alone something sentient — the more people are willing to go along with AI systems” that can cause real-world harm. Bender pointed to examples in job hiring and grading students, which can carry embedded prejudice depending on what data sets were used to train the AI. If the focus is on the system’s apparent sentience, Bender said, it creates a distance from the AI creators’ direct responsibility for any flaws or biases in the programs….
“Instead of discussing the harms of these companies,” such as sexism, racism and centralization of power created by these AI systems, everyone “spent the whole weekend discussing sentience,” Timnit Gebru, formerly co-lead of Google’s ethical AI group, said on Twitter. “Derailing mission accomplished.”
The Washington Post seems to share their concern. First it reports more skepticism about a Google engineer’s claim that the company’s LaMDA chatbot-building system had achieved sentience: “Both Google and outside experts on AI say that the program does not, and could not possibly, possess anything like the inner life he imagines. We don’t need to worry about LaMDA turning into Skynet, the malevolent machine mind from the Terminator movies, anytime soon.”
But the Post adds that “there is cause for a different set of worries, now that we live in the world Turing predicted: one in which computer programs are advanced enough that they can seem to people to possess agency of their own, even if they actually don’t….”
While Google has distanced itself from Lemoine’s claims, it and other industry leaders have at other times celebrated their systems’ ability to trick people, as Jeremy Kahn pointed out this week in his Fortune newsletter, “Eye on A.I.” At a public event in 2018, for instance, the company proudly played recordings of a voice assistant called Duplex, complete with verbal tics like “umm” and “mm-hm,” that fooled receptionists into thinking it was a human when it called to book appointments. (After a backlash, Google promised the system would identify itself as automated.)
“The Turing Test’s most troubling legacy is an ethical one: The test is fundamentally about deception,” Kahn wrote. “And here the test’s impact on the field has been very real and disturbing.” Kahn reiterated a call, often voiced by AI critics and commentators, to retire the Turing test and move on. Of course, the industry already has, in the sense that it has replaced the Imitation Game with more scientific benchmarks.
But the Lemoine story suggests that perhaps the Turing test could serve a different purpose in an era when machines are increasingly adept at sounding human. Rather than being an aspirational standard, the Turing test should serve as an ethical red flag: Any system capable of passing it carries the danger of deceiving people.
Introduced in April 2021 with the release of iOS 14.5 and iPadOS 14.5, Apple’s App Tracking Transparency framework requires that all apps on iPhone and iPad ask for the user’s consent before tracking their activity across other apps. Apps that wish to track a user based on their device’s unique advertising identifier can only do so if the user allows it when prompted.
Apple said the feature was designed to protect users and not to advantage the company… Earlier this year it commissioned a study into the impact of ATT that was conducted by Columbia Business School’s Marketing Division. The study concluded that Apple was unlikely to have seen a significant financial benefit since the privacy feature launched, and that claims to the contrary were speculative and lacked supporting evidence.
The technology/Apple blog Daring Fireball offers its own hot take:
In Germany, big publishing companies like Axel Springer are pushing back against Google’s stated plans to remove third-party cookie support from Chrome. The notion that if a company has built a business model on top of privacy-invasive surveillance advertising, they have a right to continue doing so, seems to have taken particular root in Germany. I’ll go back to my analogy: it’s like pawn shops suing to keep the police from cracking down on a wave of burglaries….
The Bundeskartellamt perspective here completely disregards the idea that surveillance advertising is inherently unethical and Apple has studiously avoided it for that reason, despite the fact that it has proven to be wildly profitable for large platforms. Apple could have made an enormous amount of money selling privacy-invasive ads on iOS, but opted not to.
The vulnerability impacts four Small Business RV Series models, namely the RV110W Wireless-N VPN Firewall, the RV130 VPN Router, the RV130W Wireless-N Multifunction VPN Router, and the RV215W Wireless-N VPN Router. This vulnerability only affects devices with the web-based remote management interface enabled on WAN connections. […] Cisco states that they will not be releasing a security update to address CVE-2022-20825 as the devices are no longer supported. Furthermore, there are no mitigations available other than to turn off remote management on the WAN interface, which should be done regardless for better overall security. Users are advised to apply the configuration changes until they migrate to Cisco Small Business RV132W, RV160, or RV160W Routers, which the vendor actively supports.