Over 15,000 Roku Accounts Sold To Buy Streaming Subscriptions, Devices

Over 15,000 Roku customer accounts were hacked and used to make fraudulent purchases of hardware and streaming subscriptions. According to BleepingComputer, the threat actors were “selling the stolen accounts for as little as $0.50 per account, allowing purchasers to use stored credit cards to make illegal purchases.” From the report: On Friday, Roku first disclosed the data breach, warning that 15,363 customer accounts were hacked in a credential stuffing attack. A credential stuffing attack is when threat actors collect credentials exposed in data breaches and then attempt to use them to log in to other sites, in this case, Roku.com. The company says that once an account was breached, it allowed threat actors to change the information on the account, including passwords, email addresses, and shipping addresses. This effectively locked a user out of the account, allowing the threat actors to make purchases using stored credit card information without the legitimate account holder receiving order confirmation emails.
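Credential stuffing works only because people reuse the same email/password pair across sites. A minimal sketch of the defender-side check (all names and the breach corpus here are hypothetical, purely for illustration): a site can compare an incoming login against known breach dumps and force a reset on a match instead of accepting the credentials.

```python
import hashlib

def sha1_hex(password: str) -> str:
    """Hash a password the way many public breach dumps are published (unsalted SHA-1)."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# Hypothetical corpus of (email, password-hash) pairs leaked from *other* sites.
BREACHED_PAIRS = {
    ("alice@example.com", sha1_hex("hunter2")),
    ("bob@example.com", sha1_hex("letmein")),
}

def login_is_stuffable(email: str, password: str) -> bool:
    """True if this exact email/password combination appears in known breach
    data -- the reuse that credential stuffing exploits. A site that checks
    this at login time can force a password reset rather than let the
    recycled credentials through."""
    return (email, sha1_hex(password)) in BREACHED_PAIRS
```

Real services do this against continuously updated breach corpora rather than a hardcoded set, but the matching logic is essentially this lookup.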

“It appears likely that the same username/password combinations had been used as login information for such third-party services as well as certain individual Roku accounts,” reads the data breach notice. “As a result, unauthorized actors were able to obtain login information from third-party sources and then use it to access certain individual Roku accounts.” “After gaining access, they then changed the Roku login information for the affected individual Roku accounts, and, in a limited number of cases, attempted to purchase streaming subscriptions.” Roku says that it secured the impacted accounts and forced a password reset upon detecting the incident. Additionally, the platform’s security team investigated for any charges due to unauthorized purchases performed by the hackers and took steps to cancel the relevant subscriptions and refund the account holders.

A researcher told BleepingComputer last week that the threat actors have been using a Roku “config” (a configuration file for automated credential-stuffing tools) to perform credential stuffing attacks for months, bypassing brute-force protections and captchas by using specific URLs and rotating through lists of proxy servers. Successfully hacked accounts are then sold on stolen-account marketplaces for as little as 50 cents, as seen below, where 439 accounts are being sold. The seller of these accounts provides information on how to change information on the account to make fraudulent purchases. Those who purchase the stolen accounts hijack them with their own information and use stored credit cards to purchase cameras, remotes, soundbars, light strips, and streaming boxes. After making their purchases, it is common for them to share screenshots of redacted order confirmation emails on Telegram channels associated with the stolen account marketplaces.
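Rotating through proxy servers defeats rate limits keyed on the source IP, which is why per-IP throttling alone fails against this kind of attack. One common countermeasure, sketched here as a toy illustration (class and parameter names are my own, not anything Roku has described), is to track failed logins per *account* in a sliding window, so the counter survives no matter how many proxies the attacker cycles through:

```python
import time
from collections import defaultdict, deque
from typing import Optional

class AccountThrottle:
    """Sliding-window failed-login throttle keyed on the username rather than
    the source IP, so rotating proxies does not reset the counter."""

    def __init__(self, limit: int = 10, window_s: int = 3600):
        self.limit = limit          # max failures tolerated inside the window
        self.window_s = window_s    # window length in seconds
        self._failures = defaultdict(deque)  # username -> failure timestamps

    def record_failure(self, username: str, now: Optional[float] = None) -> None:
        self._failures[username].append(time.time() if now is None else now)

    def allow_attempt(self, username: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        q = self._failures[username]
        while q and now - q[0] > self.window_s:
            q.popleft()  # drop failures that have aged out of the window
        return len(q) < self.limit
```

Attackers in turn spread their attempts across many accounts to stay under per-account limits, which is partly why these campaigns run for months at low volume.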

Read more of this story at Slashdot.

License Plate-Scanning Company Violates Privacy of Millions of California Drivers, Argues Class Action

“If you drive a car in California, you may be in for a payday thanks to a lawsuit alleging privacy violations by a Texas company,” report SFGate:

The 2021 lawsuit, given class-action status in September, alleges that Digital Recognition Network is breaking a California law meant to regulate the use of automatic license plate readers. DRN, a Fort Worth-based company, uses plate-scanning cameras to create location data for people’s vehicles, then sells that data to marketers, car repossessors and insurers.

What’s particularly notable about the case is the size of the class. The court has established that if you’re a California resident whose license plate data was collected by DRN at least 15 times since June 2017, you’re a class member. The plaintiff’s legal team estimates that the tally includes about 23 million people, alleging that DRN cameras were mounted to cars on public roads. The case website lets Californians check whether their plates were scanned.

Barring a settlement or delay, the trial to decide whether DRN must pay a penalty to those class members will begin on May 17 in San Diego County Superior Court…

The company’s cameras scan 220 million plates a month, its website says, and customers can use plate data to “create comprehensive vehicle stories.”

A lawyer for the firm representing class members told SFGATE Friday that his team will try to show DRN’s business is a “mass surveillance program.”

Read more of this story at Slashdot.

Ask Slashdot: How Can I Stop Security Firms From Harvesting My Data?

Slashdot reader Unpopular Opinions requests suggestions from the Slashdot community:
Lately a wave of companies has decided to play the “nice guy” card, providing us with a trove of information about our own sites, DNS servers, email servers, pretty much anything about any online service you host.

Which is not anything new… Companies have been doing this for decades, except as paid services you requested. Now the trend is that basically anyone can do it against my systems, and they are always more than happy to sell anyone, me included, the data they collected without authorization or consent. It’s data they never had the rights to collect and/or compile to begin with, including data collected through access attempts via known default accounts (Administrator, root, admin, guest) and/or leaked credentials from hacked databases when a few elements seemingly match…

“Just block those crawlers”? That’s what some of those companies advise, but not only does the site operator have to automate it themselves, not all companies offer lists of their source IP addresses or identify them. Some use crawler domain names that differ from their commercial product, or use cloud providers such as Google Cloud, AWS and Azure, so one can’t just block access to their company’s networks without massive implications. They also change their own information with no warning and, many times, no updates to their own lists. Then there is the indirect cost: computing cost, network cost, development cost, review-cycle cost. It is a cat-and-mouse game that has become very boring.
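The cheapest first-pass filter, for scanners that do announce themselves, is matching on the User-Agent header. A minimal sketch (the fragment list below is illustrative; a real deployment would build it from the scanners actually seen in its own logs, and it does nothing against crawlers that spoof browser UAs, which is the submitter's complaint):

```python
# Illustrative User-Agent fragments for internet-wide scanners.
# A production blocklist would be curated from your own access logs.
SCANNER_UA_FRAGMENTS = (
    "censysinspect",        # Censys scanner
    "expanse",              # Palo Alto Expanse
    "internet-measurement",
)

def looks_like_scanner(user_agent: str) -> bool:
    """Cheap first-pass filter on the User-Agent header. Scanners that spoof
    browser UAs or crawl from rotating cloud IPs slip through, which is
    exactly the cat-and-mouse problem described above."""
    ua = user_agent.lower()
    return any(fragment in ua for fragment in SCANNER_UA_FRAGMENTS)
```

A web server or WAF would call this (or an equivalent rule) before serving the request and return 403 on a match.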

With the rise of concerns and ethical questions about AI harvesting and learning from copyrighted work, how are those security companies any different from AI companies, and how could one legally put a stop to this?

Block those crawlers? Change your Terms of Service? What’s the best fix… Share your own thoughts and suggestions in the comments.
How can you stop security firms from harvesting your data?

Read more of this story at Slashdot.

Mobile Device Ambient Light Sensors Can Be Used To Spy On Users

“The ambient light sensors present in most mobile devices can be accessed by software without any special permissions, unlike permissions required for accessing the microphone or the cameras,” writes longtime Slashdot reader BishopBerkeley. “When properly interrogated, the data from the light sensor can reveal much about the user.” IEEE Spectrum reports: While that may not seem to provide much detailed information, researchers have already shown these sensors can detect light intensity changes that can be used to infer what kind of TV programs someone is watching, what websites they are browsing or even keypad entries on a touchscreen. Now, [Yang Liu, a PhD student at MIT] and colleagues have shown in a paper in Science Advances that by cross-referencing data from the ambient light sensor on a tablet with specially tailored videos displayed on the tablet’s screen, it’s possible to generate images of a user’s hands as they interact with the tablet. While the images are low-resolution and currently take impractically long to capture, he says this kind of approach could allow a determined attacker to infer how someone is using the touchscreen on their device. […]
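The inference step behind such attacks boils down to correlating the sensor's light-intensity trace against candidate signals. This is a toy sketch of that correlation idea only, not the MIT team's imaging method (which reconstructs images from tailored probe videos): given brightness profiles for several candidate videos, pick the one that best matches what the unprivileged sensor recorded.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    norm_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    norm_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (norm_a * norm_b)

def identify_content(sensor_trace, candidate_traces):
    """Return the name of the candidate whose on-screen brightness profile
    best correlates with the ambient-light sensor readings."""
    return max(candidate_traces,
               key=lambda name: pearson(sensor_trace, candidate_traces[name]))
```

The point is that no camera permission is involved: the ambient light sensor alone leaks a low-bandwidth signal that is still distinctive enough to fingerprint what is on screen.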

“The acquisition time in minutes is too cumbersome to launch simple and general privacy attacks on a mass scale,” says Lukasz Olejnik, an independent security researcher and consultant who has previously highlighted the security risks posed by ambient light sensors. “However, I would not rule out the significance of targeted collections for tailored operations against chosen targets.” But he also points out that, following his earlier research, the World Wide Web Consortium issued a new standard that limited access to the light sensor API, which has already been adopted by browser vendors.

Liu notes, however, that there are still no blanket restrictions for Android apps. In addition, the researchers discovered that some devices directly log data from the light sensor in a system file that is easily accessible, bypassing the need to go through an API. The team also found that lowering the resolution of the images could bring the acquisition times within practical limits while still maintaining enough detail for basic recognition tasks. Nonetheless, Liu agrees that the approach is too complicated for widespread attacks. And one saving grace is that it is unlikely to ever work on a smartphone as the displays are simply too small. But Liu says their results demonstrate how seemingly harmless combinations of components in mobile devices can lead to surprising security risks.

Read more of this story at Slashdot.

Netflix Password Sharing Crackdown To Expand To US In Q2 2023

Netflix is planning a “broad rollout” of the password sharing crackdown that it began implementing in 2022, the company said today in its Q1 2023 earnings report (PDF). MacRumors reports: The “paid sharing” plan that Netflix has been testing in a limited number of countries will expand to additional countries in the second quarter, including the United States. Netflix said that it was “pleased with the results” of the password sharing restrictions that it implemented in Canada, New Zealand, Spain, and Portugal earlier this year. Netflix initially planned to start eliminating password sharing in the United States in the first quarter of the year, but the company said that it had learned from its tests and “found opportunities to improve the experience for members.” There is a “cancel reaction” expected in each market where paid sharing is implemented, but increased revenue comes later as borrowers activate their own Netflix accounts and existing members add “extra member” accounts.

In Canada, paid sharing resulted in a larger Netflix membership base and an acceleration in revenue growth, which has given Netflix the confidence to expand it to the United States. When Netflix brings its paid sharing rules to the United States, multi-household account use will no longer be permitted. Netflix subscribers who share an account with those who do not live with them will need to pay for an additional member. In Canada, Netflix charges $7.99 CAD for an extra member, which is around $6. […] Netflix claims that more than 100 million households are sharing accounts, which is impacting its ability to “invest in and improve Netflix” for paying members.

Read more of this story at Slashdot.

Amazon Sued For Not Telling New York Store Customers About Facial Recognition

Amazon did not alert its New York City customers that they were being monitored by facial recognition technology, a lawsuit filed Thursday alleges. CNBC reports: In a class-action suit, lawyers for Alfredo Perez said that the company failed to tell visitors to Amazon Go convenience stores that the technology was in use. Thanks to a 2021 law, New York is the only major American city to require businesses to post signs if they’re tracking customers’ biometric information, such as facial scans or fingerprints. […] The lawsuit says that Amazon only recently put up signs informing New York customers of its use of facial recognition technology, more than a year after the disclosure law went into effect. “To make this ‘Just Walk Out’ technology possible, the Amazon Go stores constantly collect and use customers’ biometric identifier information, including by scanning the palms of some customers to identify them and by applying computer vision, deep learning algorithms, and sensor fusion that measure the shape and size of each customer’s body to identify customers, track where they move in the stores, and determine what they have purchased,” says the lawsuit.

“It means that even a global tech giant can’t ignore local privacy laws,” Albert Cahn, project director, said in a text message. “As we wait for long overdue federal privacy laws, it shows there is so much local governments can do to protect their residents.”

Read more of this story at Slashdot.

The Washington Post Says There’s ‘No Real Reason’ to Use a VPN

Some people try to hide parts of their email address from online scrapers by spelling out “at” and “dot,” notes a Washington Post technology newsletter. But unfortunately, “This spam-fighting trick doesn’t work. At all.” They warn that it’s not just a “piece of anti-spam fiction,” but “an example of the digital self-protection myths that drain your time and energy and make you less safe.”

“Today, let’s kill off four privacy and security bogus beliefs, including that you need a VPN to stay safe online. (No, you probably don’t.)”
Myth No. 3: You need a VPN to stay safe online.
…for most people in the United States and other democracies, “There is no real reason why you should use a VPN,” said Frédéric Rivain, chief technology officer of Dashlane, a password management service that also offers a VPN…. If you’re researching sensitive subjects like depression and don’t want family members to know or corporations to keep records of your activities, Rivain said you might be better off using a privacy-focused web browser such as Brave or the search engine DuckDuckGo. If you use a VPN, that company has records of what you’re doing. And advertisers will still figure out how to pitch ads based on your online activities.

P.S. If you’re concerned about crooks stealing your info when you use WiFi networks in coffee shops or airports and want to use a VPN to disguise what you’re doing, you probably don’t need to. Using public WiFi is safe now in most circumstances, my colleague Tatum Hunter has reported.
“Many VPNs are also dodgy and may do far more harm than good,” their myth-busting continues, referring readers to an earlier analysis by the Washington Post (with some safe recommendations).

On a more sympathetic note, they acknowledge that “It’s exhausting to be a human on the internet. Companies and public officials could be doing far more to protect you.”

But as it is, “the internet is a nonstop scam machine and a little paranoia is healthy.”

Read more of this story at Slashdot.

Tile Adds Undetectable Anti-Theft Mode To Tracking Devices, With $1 Million Fine If Used For Stalking

AirTag competitor Tile today announced a new Anti-Theft Mode for Tile tracking devices, which is designed to make Tile accessories undetectable by the anti-stalking Scan and Secure feature. MacRumors reports: Scan and Secure is a security measure that Tile implemented in order to allow iPhone and Android users to scan for and detect nearby Tile devices to keep them from being used for stalking purposes. Unfortunately, Scan and Secure undermines the anti-theft capabilities of the Tile because a stolen device’s Tile can be located and removed, something also possible with similar security features added for AirTags. Tile’s Anti-Theft Mode disables Scan and Secure so a Tile tracking device cannot be located by someone who does not own the tracker. To prevent stalking with Anti-Theft Mode, Tile says that customers must register using multi-factor identification and agree to stringent usage terms, which include a $1 million fine if the device ends up being used to track a person without their consent.

The Anti-Theft Mode option is meant to make it easier to locate stolen items by preventing thieves from knowing an item is being tracked. Tile points out that in addition to Anti-Theft Mode, its trackers do not notify nearby smartphone users when an unknown Bluetooth tracker is traveling with them, making them more useful for tracking stolen items than AirTags. Apple has added alerts for nearby AirTags to prevent AirTags from being used for tracking people. Enabling Anti-Theft Mode will require users to link a government-issued ID card to their Tile account, submitting to an “advanced ID verification process” that uses a biometric scan to detect fake IDs. […] Anti-Theft Mode is rolling out to Tile users starting today, and will be available to all users in the coming weeks.

Read more of this story at Slashdot.

Dashlane Publishes Its Source Code To GitHub In Transparency Push

Password management company Dashlane has made its mobile app code available on GitHub for public perusal, a first step, it says, in a broader push to make its platform more transparent. TechCrunch reports: The Dashlane Android app code is available now alongside the iOS incarnation, though it also appears to include the codebase for its Apple Watch and Mac apps even though Dashlane hasn’t specifically announced that. The company said that it eventually plans to make the code for its web extension available on GitHub too. Initially, Dashlane said that it was planning to make its codebase “fully open source,” but in response to a handful of questions posed by TechCrunch, it appears that won’t in fact be the case.

At first, the code will be open for auditing purposes only, but in the future it may start accepting contributions too. However, there is no suggestion that it will go all-in and allow the public to fork or otherwise re-use the code in their own applications. Dashlane has released the code under a Creative Commons Attribution-NonCommercial 4.0 license, which technically means that users are allowed to copy, share and build upon the codebase so long as it’s for non-commercial purposes. However, the company said that it has stripped out some key elements from its release, effectively hamstringing what third-party developers are able to do with the code. […]

“The main benefit of making this code public is that anyone can audit the code and understand how we build the Dashlane mobile application,” the company wrote. “Customers and the curious can also explore the algorithms and logic behind password management software in general. In addition, business customers, or those who may be interested, can better meet compliance requirements by being able to review our code.” On top of that, the company says that a benefit of releasing its code is to perhaps draw in technical talent, who can inspect the code prior to an interview and perhaps share some ideas on how things could be improved. Moreover, so-called “white-hat hackers” will now be better equipped to earn bug bounties. “Transparency and trust are part of our company values, and we strive to reflect those values in everything we do,” Dashlane continued. “We hope that being transparent about our code base will increase the trust customers have in our product.”

Read more of this story at Slashdot.