96% of US Hospital Websites Share Visitor Info With Meta, Google, Data Brokers

An anonymous reader quotes a report from The Guardian: Hospitals — despite being places where people implicitly expect to have their personal details kept private — frequently use tracking technologies on their websites to share user information with Google, Meta, data brokers, and other third parties, according to research published today. Academics at the University of Pennsylvania analyzed a nationally representative sample of 100 non-federal acute care hospitals — essentially traditional hospitals with emergency departments — and found that 96 percent of the websites transmitted user data to third parties. Not all of these websites even had a privacy policy, and of the 71 percent that did, only 56 percent named the specific third-party companies that could receive user information.

The researchers’ latest work builds on a study they published a year ago of 3,747 US non-federal hospital websites. That found 98.6 percent tracked and transferred visitors’ data to large tech and social media companies, advertising firms, and data brokers. To find the trackers on websites, the team checked each hospital’s homepage on January 26 using webXray, an open source tool that detects third-party HTTP requests and matches them to the organizations receiving the data. They also recorded the number of third-party cookies per page. One name in particular stood out in terms of who was receiving website visitors’ information. “In every study we’ve done, in any part of the health system, Google, whose parent company is Alphabet, is on nearly every page, including hospitals,” [Dr Ari Friedman, an assistant professor of emergency medicine at the University of Pennsylvania] observed. “From there, it declines,” he continued. “Meta was on a little over half of hospital webpages, and the Meta Pixel is notable because it seems to be one of the grabbier entities out there in terms of tracking.”
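webXray matches third-party HTTP requests to the organizations receiving the data. A minimal sketch of the same idea (not webXray itself): pull absolute `src`/`href` URLs out of a page’s HTML and flag hosts whose registrable domain differs from the page’s, using a naive two-label heuristic in place of the Public Suffix List. The hospital domain and HTML snippet below are illustrative, not from the study.

```python
import re
from urllib.parse import urlparse

def third_party_hosts(page_url: str, html: str) -> set[str]:
    """Return hosts of externally loaded resources whose registrable
    domain differs from the page's (naive two-label heuristic)."""
    def base_domain(host: str) -> str:
        parts = host.lower().split(".")
        return ".".join(parts[-2:]) if len(parts) >= 2 else host

    page_domain = base_domain(urlparse(page_url).netloc)
    hosts = set()
    # Find src/href attributes with absolute URLs (scripts, pixels, iframes).
    for match in re.finditer(r'(?:src|href)=["\'](https?://[^"\']+)', html):
        host = urlparse(match.group(1)).netloc
        if base_domain(host) != page_domain:
            hosts.add(host)
    return hosts

html = (
    '<script src="https://www.googletagmanager.com/gtag/js"></script>'
    '<img src="https://www.facebook.com/tr?id=123">'
    '<link href="https://www.examplehospital.org/style.css">'
)
print(third_party_hosts("https://www.examplehospital.org/", html))
```

A real crawler like webXray inspects live network requests rather than static HTML, which also catches trackers loaded dynamically by other scripts.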

Both Meta and Google’s tracking technologies have been the subject of criminal complaints and lawsuits over the years — as have some healthcare companies that shared data with these and other advertisers. In addition, between 20 and 30 percent of the hospitals share data with Adobe, Friedman noted. “Everybody knows Adobe for PDFs. My understanding is they also have a tracking division within their ad division.” Others include telecom and digital marketing companies like The Trade Desk and Verizon, plus tech giants Oracle, Microsoft, and Amazon, according to Friedman. There are also analytics firms such as Hotjar and data brokers such as Acxiom. “And two thirds of hospital websites had some kind of data transfer to a third-party domain that we couldn’t even identify,” he added. Of the 71 hospital website privacy policies that the team found, 69 addressed the types of user information collected. The most common were IP addresses (80 percent), web browser name and version (75 percent), pages visited on the website (73 percent), and the website from which the user arrived (73 percent). Only 56 percent of these policies identified the third-party companies receiving user information. In the absence of a federal data privacy law in the U.S., Friedman recommends users protect their personal information via the browser-based tools Ghostery and Privacy Badger, which identify and block transfers to third-party domains.

Read more of this story at Slashdot.

Four Baseball Teams Now Let Ticket-Holders Enter Using AI-Powered ‘Facial Authentication’

“The San Francisco Giants are one of four teams in Major League Baseball this season offering fans a free shortcut through the gates into the ballpark,” writes SFGate.

“The cost? Signing up for the league’s ‘facial authentication’ software through its ticketing app.”

The Giants are using MLB’s new Go-Ahead Entry program, which intends to cut down on wait times for fans entering games. The pitch is simple: Take a selfie through the MLB Ballpark app (which already has your tickets on it), upload the selfie and, once you’re approved, breeze through the ticketing lines and into the ballpark. Fans will barely have to slow down at the entrance gate on their way to their seats…

The Philadelphia Phillies were MLB’s test team for the technology in 2023. They’re joined by the Giants, Nationals and Astros in 2024…

[Major League Baseball] says it won’t be saving or storing pictures of faces in a database — and it clearly would really like you to not call this technology facial recognition. “This is not the type of facial recognition that’s scanning a crowd and specifically looking for certain kinds of people,” Karri Zaremba, a senior vice president at MLB, told ESPN. “It’s facial authentication. … That’s the only way in which it’s being utilized.”

Privacy advocates “have pointed out that the creep of facial recognition technology may be something to be wary of,” the article acknowledges. But it adds that using the technology is still completely optional.

And they also spoke to the San Francisco Giants’ senior vice president of ticket sales, who gushed about the possibility of app users “walking into the ballpark without taking your phone out, or all four of us taking our phones out.”

Read more of this story at Slashdot.

Over 15,000 Roku Accounts Sold To Buy Streaming Subscriptions, Devices

Over 15,000 Roku customer accounts were hacked and used to make fraudulent purchases of hardware and streaming subscriptions. According to BleepingComputer, the threat actors were “selling the stolen accounts for as little as $0.50 per account, allowing purchasers to use stored credit cards to make illegal purchases.” From the report: On Friday, Roku first disclosed the data breach, warning that 15,363 customer accounts were hacked in a credential stuffing attack. A credential stuffing attack is when threat actors collect credentials exposed in data breaches and then attempt to use them to log in to other sites, in this case, Roku.com. The company says that once an account was breached, it allowed threat actors to change the information on the account, including passwords, email addresses, and shipping addresses. This effectively locked a user out of the account, allowing the threat actors to make purchases using stored credit card information without the legitimate account holder receiving order confirmation emails.

“It appears likely that the same username/password combinations had been used as login information for such third-party services as well as certain individual Roku accounts,” reads the data breach notice. “As a result, unauthorized actors were able to obtain login information from third-party sources and then use it to access certain individual Roku accounts. After gaining access, they then changed the Roku login information for the affected individual Roku accounts, and, in a limited number of cases, attempted to purchase streaming subscriptions.” Roku says that it secured the impacted accounts and forced a password reset upon detecting the incident. Additionally, the platform’s security team investigated for any charges due to unauthorized purchases performed by the hackers and took steps to cancel the relevant subscriptions and refund the account holders.

A researcher told BleepingComputer last week that the threat actors have been using a Roku config to perform credential stuffing attacks for months, bypassing brute force attack protections and captchas by using specific URLs and rotating through lists of proxy servers. Successfully hacked accounts are then sold on stolen account marketplaces for as little as 50 cents, as seen below where 439 accounts are being sold. The seller of these accounts provides information on how to change information on the account to make fraudulent purchases. Those who purchase the stolen accounts hijack them with their own information and use stored credit cards to purchase cameras, remotes, soundbars, light strips, and streaming boxes. After making their purchases, it is common for them to share screenshots of redacted order confirmation emails on Telegram channels associated with the stolen account marketplaces.
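Defensively, credential stuffing has a recognizable signature: many failed logins for *different* usernames arriving from the same client. A hypothetical rate-limiting sketch (not Roku’s actual defense; all thresholds and addresses are illustrative) that flags an IP once it fails logins for too many distinct usernames inside a sliding window:

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 300        # sliding window length
MAX_DISTINCT_FAILURES = 5   # distinct usernames an IP may fail before blocking

failed = defaultdict(list)  # ip -> [(timestamp, username), ...]

def record_failure(ip: str, username: str, now: float) -> bool:
    """Record a failed login; return True if the IP should be blocked."""
    # Drop events outside the window, then add the new failure.
    events = [(t, u) for t, u in failed[ip] if now - t < WINDOW_SECONDS]
    events.append((now, username))
    failed[ip] = events
    # Stuffing tries many usernames; a user mistyping one password does not.
    distinct = {u for _, u in events}
    return len(distinct) > MAX_DISTINCT_FAILURES

now = time.time()
for i in range(6):
    blocked = record_failure("203.0.113.7", f"user{i}@example.com", now + i)
print(blocked)  # sixth distinct username trips the threshold
```

As the report notes, attackers rotate through lists of proxy servers precisely to dilute per-IP signals like this one, so such limits are a baseline, not a complete defense.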

Read more of this story at Slashdot.

License Plate-Scanning Company Violates Privacy of Millions of California Drivers, Argues Class Action

“If you drive a car in California, you may be in for a payday thanks to a lawsuit alleging privacy violations by a Texas company,” report SFGate:

The 2021 lawsuit, given class-action status in September, alleges that Digital Recognition Network is breaking a California law meant to regulate the use of automatic license plate readers. DRN, a Fort Worth-based company, uses plate-scanning cameras to create location data for people’s vehicles, then sells that data to marketers, car repossessors and insurers.

What’s particularly notable about the case is the size of the class. The court has established that if you’re a California resident whose license plate data was collected by DRN at least 15 times since June 2017, you’re a class member. The plaintiffs’ legal team estimates that the class includes about 23 million people, whose plates were allegedly captured by DRN cameras mounted on cars driving public roads. The case website lets Californians check whether their plates were scanned.

Barring a settlement or delay, the trial to decide whether DRN must pay a penalty to those class members will begin on May 17 in San Diego County Superior Court…

The company’s cameras scan 220 million plates a month, its website says, and customers can use plate data to “create comprehensive vehicle stories.”

A lawyer for the firm representing class members told SFGATE Friday that his team will try to show DRN’s business is a “mass surveillance program.”

Read more of this story at Slashdot.

Ask Slashdot: How Can I Stop Security Firms From Harvesting My Data?

Slashdot reader Unpopular Opinions requests suggestions from the Slashdot community:
Lately a wave of companies has decided to play the “nice guy” card, providing us with a trove of information about our own sites: DNS servers, email servers, pretty much anything about any online service you host.

Which is not anything new… Companies have been doing this for decades, except as paid services you requested. Now the trend is that basically anyone can do it over my systems, and they are always more than happy to sell anyone, me included, the data they collected without authorization or consent. It’s data they never had the rights to collect and/or compile to begin with, including data collected through access attempts via known default accounts (Administrator, root, admin, guest) and/or leaked credentials provided by hacked databases when a few elements seemingly match…

“Just block those crawlers”? That’s what some of those companies advise, but not only does the site operator have to automate it themselves, not all companies offer lists of their source IP addresses or otherwise identify their crawlers. Some use crawler domain names different from their commercial product, or use cloud providers such as Google Cloud, AWS, and Azure, so one can’t just block access to those companies’ networks without massive implications. They also change their own information with no warning and, many times, no updates to their own lists. Then there are the indirect costs: computing, network, development, and review-cycle costs. It is a cat-and-mouse game that has become very boring.

With the rise of concerns and ethical questions about AI harvesting and learning from copyrighted work, how are those security companies any different from AI, and how could one legally put a stop to this?

Block those crawlers? Change your Terms of Service? What’s the best fix… Share your own thoughts and suggestions in the comments.
How can you stop security firms from harvesting your data?
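For what the “just block those crawlers” advice amounts to in practice, here is a minimal sketch that checks request IPs against whatever CIDR ranges a scanner publishes. The ranges shown are RFC 5737 documentation prefixes, not any real company’s.

```python
import ipaddress

# CIDR ranges a scanning company has published (illustrative placeholders).
SCANNER_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_scanner(ip: str) -> bool:
    """True if the request IP falls inside any published scanner range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in SCANNER_RANGES)

print(is_scanner("203.0.113.42"))  # True
print(is_scanner("192.0.2.1"))     # False: not on the published list
```

This only helps for ranges a company actually publishes and keeps current, which is exactly the submitter’s complaint: crawlers run from shared cloud providers or unlisted addresses sail straight through.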

Read more of this story at Slashdot.

Mobile Device Ambient Light Sensors Can Be Used To Spy On Users

“The ambient light sensors present in most mobile devices can be accessed by software without any special permissions, unlike permissions required for accessing the microphone or the cameras,” writes longtime Slashdot reader BishopBerkeley. “When properly interrogated, the data from the light sensor can reveal much about the user.” IEEE Spectrum reports: While that may not seem to provide much detailed information, researchers have already shown these sensors can detect light intensity changes that can be used to infer what kind of TV programs someone is watching, what websites they are browsing or even keypad entries on a touchscreen. Now, [Yang Liu, a PhD student at MIT] and colleagues have shown in a paper in Science Advances that by cross-referencing data from the ambient light sensor on a tablet with specially tailored videos displayed on the tablet’s screen, it’s possible to generate images of a user’s hands as they interact with the tablet. While the images are low-resolution and currently take impractically long to capture, he says this kind of approach could allow a determined attacker to infer how someone is using the touchscreen on their device. […]

“The acquisition time in minutes is too cumbersome to launch simple and general privacy attacks on a mass scale,” says Lukasz Olejnik, an independent security researcher and consultant who has previously highlighted the security risks posed by ambient light sensors. “However, I would not rule out the significance of targeted collections for tailored operations against chosen targets.” But he also points out that, following his earlier research, the World Wide Web Consortium issued a new standard that limited access to the light sensor API, which has already been adopted by browser vendors.

Liu notes, however, that there are still no blanket restrictions for Android apps. In addition, the researchers discovered that some devices directly log data from the light sensor in a system file that is easily accessible, bypassing the need to go through an API. The team also found that lowering the resolution of the images could bring the acquisition times within practical limits while still maintaining enough detail for basic recognition tasks. Nonetheless, Liu agrees that the approach is too complicated for widespread attacks. And one saving grace is that it is unlikely to ever work on a smartphone as the displays are simply too small. But Liu says their results demonstrate how seemingly harmless combinations of components in mobile devices can lead to surprising security risks.

Read more of this story at Slashdot.

Netflix Password Sharing Crackdown To Expand To US In Q2 2023

Netflix is planning a “broad rollout” of the password sharing crackdown that it began implementing in 2022, the company said today in its Q1 2023 earnings report (PDF). MacRumors reports: The “paid sharing” plan that Netflix has been testing in a limited number of countries will expand to additional countries in the second quarter, including the United States. Netflix said that it was “pleased with the results” of the password sharing restrictions that it implemented in Canada, New Zealand, Spain, and Portugal earlier this year. Netflix initially planned to start eliminating password sharing in the United States in the first quarter of the year, but the company said that it had learned from its tests and “found opportunities to improve the experience for members.” There is a “cancel reaction” expected in each market where paid sharing is implemented, but increased revenue comes later as borrowers activate their own Netflix accounts and existing members add “extra member” accounts.

In Canada, paid sharing resulted in a larger Netflix membership base and an acceleration in revenue growth, which has given Netflix the confidence to expand it to the United States. When Netflix brings its paid sharing rules to the United States, multi-household account use will no longer be permitted. Netflix subscribers who share an account with those who do not live with them will need to pay for an additional member. In Canada, Netflix charges $7.99 CAD for an extra member, which is around $6. […] Netflix claims that more than 100 million households are sharing accounts, which is impacting its ability to “invest in and improve Netflix” for paying members.

Read more of this story at Slashdot.

Amazon Sued For Not Telling New York Store Customers About Facial Recognition

Amazon did not alert its New York City customers that they were being monitored by facial recognition technology, a lawsuit filed Thursday alleges. CNBC reports: In a class-action suit, lawyers for Alfredo Perez said that the company failed to tell visitors to Amazon Go convenience stores that the technology was in use. Thanks to a 2021 law, New York is the only major American city to require businesses to post signs if they’re tracking customers’ biometric information, such as facial scans or fingerprints. […] The lawsuit says that Amazon only recently put up signs informing New York customers of its use of facial recognition technology, more than a year after the disclosure law went into effect. “To make this ‘Just Walk Out’ technology possible, the Amazon Go stores constantly collect and use customers’ biometric identifier information, including by scanning the palms of some customers to identify them and by applying computer vision, deep learning algorithms, and sensor fusion that measure the shape and size of each customer’s body to identify customers, track where they move in the stores, and determine what they have purchased,” says the lawsuit.

“It means that even a global tech giant can’t ignore local privacy laws,” Albert Cahn, project director, said in a text message. “As we wait for long overdue federal privacy laws, it shows there is so much local governments can do to protect their residents.”

Read more of this story at Slashdot.

The Washington Post Says There’s ‘No Real Reason’ to Use a VPN

Some people try to hide parts of their email address from online scrapers by spelling out “at” and “dot,” notes a Washington Post technology newsletter. But unfortunately, “This spam-fighting trick doesn’t work. At all.” They warn that it’s not just a “piece of anti-spam fiction,” but “an example of the digital self-protection myths that drain your time and energy and make you less safe.”

“Today, let’s kill off four privacy and security bogus beliefs, including that you need a VPN to stay safe online. (No, you probably don’t.)”
Myth No. 3: You need a VPN to stay safe online.
…for most people in the United States and other democracies, “There is no real reason why you should use a VPN,” said Frédéric Rivain, chief technology officer of Dashlane, a password management service that also offers a VPN…. If you’re researching sensitive subjects like depression and don’t want family members to know or corporations to keep records of your activities, Rivain said you might be better off using a privacy-focused web browser such as Brave or the search engine DuckDuckGo. If you use a VPN, that company has records of what you’re doing. And advertisers will still figure out how to pitch ads based on your online activities.

P.S. If you’re concerned about crooks stealing your info when you use WiFi networks in coffee shops or airports and want to use a VPN to disguise what you’re doing, you probably don’t need to. Using public WiFi is safe now in most circumstances, my colleague Tatum Hunter has reported.
“Many VPNs are also dodgy and may do far more harm than good,” their myth-busting continues, referring readers to an earlier analysis by the Washington Post (with some safe recommendations).

On a more sympathetic note, they acknowledge that “It’s exhausting to be a human on the internet. Companies and public officials could be doing far more to protect you.”

But as it is, “the internet is a nonstop scam machine and a little paranoia is healthy.”

Read more of this story at Slashdot.

Tile Adds Undetectable Anti-Theft Mode To Tracking Devices, With $1 Million Fine If Used For Stalking

AirTag competitor Tile today announced a new Anti-Theft Mode for Tile tracking devices, which is designed to make Tile accessories undetectable by the anti-stalking Scan and Secure feature. MacRumors reports: Scan and Secure is a security measure that Tile implemented in order to allow iPhone and Android users to scan for and detect nearby Tile devices to keep them from being used for stalking purposes. Unfortunately, Scan and Secure undermines the anti-theft capabilities of the Tile because a stolen device’s Tile can be located and removed, something also possible with similar security features added for AirTags. Tile’s Anti-Theft Mode disables Scan and Secure so a Tile tracking device will not be able to be located by a person who does not own the tracker. To prevent stalking with Anti-Theft Mode, Tile says that customers must register using multi-factor identification and agree to stringent usage terms, which include a $1 million fine if the device ends up being used to track a person without their consent.

The Anti-Theft Mode option is meant to make it easier to locate stolen items by preventing thieves from knowing an item is being tracked. Tile points out that in addition to Anti-Theft Mode, its trackers do not notify nearby smartphone users when an unknown Bluetooth tracker is traveling with them, making them more useful for tracking stolen items than AirTags. Apple has added alerts for nearby AirTags to prevent AirTags from being used for tracking people. Enabling Anti-Theft mode will require users to link a government-issued ID card to their Tile account, submitting to an “advanced ID verification process” that uses a biometric scan to detect fake IDs. […] Anti-Theft Mode is rolling out to Tile users starting today, and will be available to all users in the coming weeks.

Read more of this story at Slashdot.