India Withdraws Warning on Biometric ID Sharing Following Online Uproar
Read more of this story at Slashdot.
“You’d be in good company too: Apple CEO Tim Cook had his home blurred from mapping apps after issues with a stalker.”
There is something to bear in mind before you do this, though: you may not be able to reverse the process. The blur could be there for good. This is the case for Google Maps, and while Apple and Microsoft don’t specify whether blurs on their services are permanent, they may follow the same protocol or decide to do so in the future.
The case for blurring? “Having strangers from all over the world stare at your home isn’t necessarily something you want to happen — but it can be done in seconds on the mapping apps we all carry around on our phones.” (“Stop people from peering at your place,” suggests the article’s subtitle.)
But is there also a case against demanding platforms blur what’s essentially just the exterior of a building? Where’s the boundary where we’re honoring the wishes of the privacy-conscious — and does the public ever have a right to see? Share your own thoughts in the comments.
And would you blur your house on every map app?
(Thanks to long-time Slashdot reader schwit1 for sharing the article…)
Virginia-based Anomaly Six was founded in 2018 by two ex-military intelligence officers and maintains a public presence that is scant to the point of mysterious, its website disclosing nothing about what the firm actually does. But there’s a good chance that A6 knows an immense amount about you. The company is one of many that purchases vast reams of location data, tracking hundreds of millions of people around the world by exploiting a poorly understood fact: Countless common smartphone apps are constantly harvesting your location and relaying it to advertisers, typically without your knowledge or informed consent, relying on disclosures buried in the legalese of the sprawling terms of service that the companies involved count on you never reading.
Multiple women who filed these reports said they feared physical violence. One woman called the police because a man she had a protective order against was harassing her with phone calls. She’d gotten notifications that an AirTag was tracking her, and could hear it chiming in her car, but couldn’t find it. When the cops arrived, she answered one of his calls in front of the officer, and the man described how he would physically harm her. Another who found an AirTag in her car had been wondering how a man she had an order of protection against seemed to always know where she was. The report said she was afraid he would assault or kill her. […] The overwhelming number of reports came from women. Only one case out of the 150 we reviewed involved a man who suspected an ex-girlfriend of tracking him with an AirTag.
In the post, he used the patient’s full name and described, in detail, the specific dental problem he was in for: “excruciating pain” from the lower left quadrant, which resulted in a referral for a root canal. That’s what a HIPAA violation actually looks like. The law says that healthcare providers and insurance companies can’t share identifiable, personal information without a patient’s consent. In this case, the dentist (a healthcare provider) publicly shared a patient’s name, medical condition, and medical history (personal information). As a result, the office was fined $50,000 (PDF).
According to the documents, Sitel said it discovered the security incident in its VPN gateways on a legacy network belonging to Sykes, a customer service company working for Okta that Sitel acquired in 2021. The timeline details how the attackers used remote access services and publicly accessible hacking tools to compromise and navigate through Sitel’s network, gaining deeper visibility to the network over the five days that Lapsus$ had access. Sitel said that its Azure cloud infrastructure was also compromised by hackers. According to the timeline, the hackers accessed a spreadsheet on Sitel’s internal network early on January 21 called “DomAdmins-LastPass.xlsx.” The filename suggests that the spreadsheet contained passwords for domain administrator accounts that were exported from a Sitel employee’s LastPass password manager.
To conduct the study, URL Genius used the Record App Activity feature from Apple’s iOS to count how many different domains track a user’s activity across 10 different social media apps — YouTube, TikTok, Twitter, Telegram, LinkedIn, Instagram, Facebook, Snapchat, Messenger and WhatsApp — over the course of one visit, before you even log into your account. YouTube and TikTok topped the other apps with 14 network contacts apiece, significantly higher than the study’s average of six network contacts per app. Those numbers are all probably higher for users who are logged into accounts on those apps, the study noted.
Ten of YouTube’s trackers were first-party network contacts, meaning the platform was tracking user activity for its own purposes. Four of the contacts were from third-party domains, meaning the social platform was allowing a handful of mystery outside parties to collect information and track user activity. For TikTok, the results were even more mysterious: 13 of the 14 network contacts on the popular social media app were from third parties. The third-party tracking still happened even when users didn’t opt into allowing tracking in each app’s settings, according to the study. “Consumers are currently unable to see what data is shared with third-party networks, or how their data will be used,” the report’s authors wrote.
Separately, the bill creates a 19-person federal commission, dominated by law enforcement agencies, which will lay out voluntary “best practices” for attacking the problem of online child abuse. Regardless of whether state legislatures take their lead from that commission, or from the bill’s sponsors themselves, we know where the road will end. Online service providers, even the smallest ones, will be compelled to scan user content, with government-approved software like PhotoDNA. If EARN IT supporters succeed in getting large platforms like Cloudflare and Amazon Web Services to scan, they might not even need to compel smaller websites — the government will already have access to the user data, through the platform. […] Senators supporting the EARN IT Act say they need new tools to prosecute cases over child sexual abuse material, or CSAM. But the methods proposed by EARN IT take aim at the security and privacy of everything hosted on the Internet.
The Senators supporting the bill have said that their mass surveillance plans are somehow magically compatible with end-to-end encryption. That’s completely false, no matter whether it’s called “client side scanning” or another misleading new phrase. The EARN IT Act doesn’t target Big Tech. It targets every individual internet user, treating us all as potential criminals who deserve to have every single message, photograph, and document scanned and checked against a government database. Since direct government surveillance would be blatantly unconstitutional and provoke public outrage, EARN IT uses tech companies — from the largest ones to the very smallest ones — as its tools. The strategy is to get private companies to do the dirty work of mass surveillance.
The decision says IP addresses represent personal data because it’s theoretically possible to identify the person associated with an IP address, and that it’s irrelevant whether the website or Google has actually done so. The ruling directs the website to stop providing IP addresses to Google and threatens the site operator with a fine of 250,000 euros for each violation, or up to six months in prison, for continued improper use of Google Fonts. Google Fonts is widely deployed — the Google Fonts API is used by about 50 million websites. The API allows websites to style text with Google Fonts stored on remote servers — Google’s or a CDN’s — that get fetched as the page loads. Google Fonts can be self-hosted to avoid running afoul of EU rules, and the ruling explicitly cites this possibility to assert that relying on Google-hosted Google Fonts is not defensible under the law.
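The privacy issue the court identified hinges on the page making requests to Google-controlled domains (`fonts.googleapis.com` for the stylesheet, `fonts.gstatic.com` for the font files), which exposes each visitor’s IP address to Google. As a minimal sketch — a hypothetical helper, not part of the ruling or any official tool — a site operator could audit their own HTML for such references before switching to self-hosted fonts:

```python
import re

# Domains that serve Google Fonts; a browser request to either one
# reveals the visitor's IP address to Google as the page loads.
GOOGLE_FONT_HOSTS = ("fonts.googleapis.com", "fonts.gstatic.com")

def find_google_font_refs(html: str) -> list[str]:
    """Return every Google Fonts URL referenced in an HTML document."""
    hosts = "|".join(re.escape(h) for h in GOOGLE_FONT_HOSTS)
    pattern = r'https?://(?:' + hosts + r')[^\s"\'<>]*'
    return re.findall(pattern, html)

page = """
<link rel="preconnect" href="https://fonts.gstatic.com">
<link href="https://fonts.googleapis.com/css2?family=Roboto" rel="stylesheet">
"""
print(find_google_font_refs(page))  # lists both remote references
```

A self-hosted setup would instead serve the font files from the site’s own origin (e.g. via a local `@font-face` stylesheet), so this check would return an empty list and no visitor data would flow to Google.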