Windows 11 Is Now Automatically Enabling OneDrive Folder Backup Without Asking Permission
Depending on how much is stored there, you might finish a clean Windows installation only to find your desktop and other folders filled to the brim with shortcuts to files you never put there. Automatic folder backup in OneDrive is a very useful feature when used properly and deliberately enabled by the user. However, Microsoft apparently decided that sending a few notification prompts to enable folder backup was not enough, so it turned the feature on without asking anybody or even letting users know, resulting in a flood of Reddit posts from confused users asking what the green checkmarks next to the files and shortcuts on their desktops mean.
Read more of this story at Slashdot.
Julian Assange Reaches Plea Deal With US, Allowing Him To Go Free
Assange had faced 18 counts from a 2019 indictment for his alleged role in the breach, carrying a maximum sentence of 175 years in prison, though he was unlikely to receive that sentence in full. US authorities pursued Assange for publishing confidential military records supplied by former Army intelligence analyst Chelsea Manning in 2010 and 2011. US officials alleged that Assange goaded Manning into obtaining thousands of pages of unfiltered US diplomatic cables, Iraq war-related significant activity reports, and information related to Guantanamo Bay detainees, material that potentially endangered confidential sources.
Meta Is Tagging Real Photos As ‘Made With AI,’ Say Photographers
Former White House photographer Pete Souza said in an Instagram post that one of his photos was tagged with the new label. Souza told TechCrunch in an email that Adobe changed how its cropping tool works, and that you now have to “flatten the image” before saving it as a JPEG. He suspects that this step triggered Meta’s algorithm to attach the label. “What’s annoying is that the post forced me to include the ‘Made with AI’ even though I unchecked it,” Souza told TechCrunch.
Meta would not answer TechCrunch’s questions on the record about Souza’s experience, or about other photographers who said their posts were incorrectly tagged. However, after the story was published, Meta said it is evaluating its approach so that its labels reflect the amount of AI used in an image. “Our intent has always been to help people know when they see content that has been made with AI. We are taking into account recent feedback and continue to evaluate our approach so that our labels reflect the amount of AI used in an image,” a Meta spokesperson told TechCrunch. “For now, Meta provides no separate labels to indicate if a photographer used a tool to clean up their photo, or used AI to create it,” notes TechCrunch. “For users, it might be hard to understand how much AI was involved in a photo.”
“Meta’s label specifies that ‘Generative AI may have been used to create or edit content in this post’ — but only if you tap on the label. Despite this approach, there are plenty of photos on Meta’s platforms that are clearly AI-generated, and Meta’s algorithm hasn’t labeled them.”
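Meta hasn’t said exactly which signals trigger the label, but one plausible input is embedded metadata: editing tools can write IPTC “digital source type” values into an image’s XMP packet when AI features touch a file. The sketch below shows what a detector keyed on those markers might look like. The vocabulary terms are real IPTC terms, but the function and the raw byte scan (rather than a proper XMP parser) are simplifying assumptions for illustration, not Meta’s actual system.

```python
# IPTC "digital source type" vocabulary terms used to mark AI involvement.
# Scanning raw file bytes instead of parsing the XMP packet properly is a
# simplification for illustration.
AI_SOURCE_TYPES = (
    b"compositeWithTrainedAlgorithmicMedia",  # human-made image edited with AI tools
    b"trainedAlgorithmicMedia",               # image generated by an AI model
)

def looks_ai_tagged(image_bytes: bytes) -> bool:
    """Return True if any known AI source-type marker appears in the file's bytes."""
    return any(marker in image_bytes for marker in AI_SOURCE_TYPES)
```

A heuristic like this would explain Souza’s experience: if flattening an image causes the editor to write such metadata, the file gets flagged regardless of how much AI was actually involved.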
EFF: New License Plate Reader Vulnerabilities Prove The Tech Itself Is a Public Safety Threat
When law enforcement uses automated license plate readers (ALPRs) to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats. The Cybersecurity and Infrastructure Security Agency (CISA), a component of the U.S. Department of Homeland Security, released an advisory last week that should be a wake-up call to the thousands of local government agencies around the country that use ALPRs to surveil the travel patterns of their residents by scanning their license plates and “fingerprinting” their vehicles. The bulletin outlines seven vulnerabilities in Motorola Solutions’ Vigilant ALPRs, including missing encryption and insufficiently protected credentials…
Unlike location data a person shares with, say, GPS-based navigation app Waze, ALPRs collect and store this information without consent, and there is very little a person can do to have this information purged from these systems… Because drivers don’t have control over ALPR data, the onus for protecting the data lies with the police and sheriffs who operate the surveillance and the vendors that provide the technology. It’s a general tenet of cybersecurity that you should not collect and retain more personal data than you are capable of protecting. Perhaps ironically, a Motorola Solutions cybersecurity specialist wrote in an article in Police Chief magazine this month that public safety agencies “are often challenged when it comes to recruiting and retaining experienced cybersecurity personnel,” even though “the potential for harm from external factors is substantial.” That partially explains why more than 125 law enforcement agencies reported data breaches or cyberattacks between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks “targeting U.S. public safety organizations increased by 142 percent” in 2023.
Yet, the temptation to “collect it all” continues to overshadow the responsibility to “protect it all.” What makes the latest CISA disclosure even more outrageous is that it is at least the third time in the last decade that major security vulnerabilities have been found in ALPRs… If there’s one positive thing we can say about the latest Vigilant vulnerability disclosures, it’s that for once a government agency identified and reported the vulnerabilities before they could do damage… The Michigan Cyber Command Center found a total of seven vulnerabilities in Vigilant devices, two of which were medium severity and five of which were high severity…
But a data breach isn’t the only way that ALPR data can be leaked or abused. In 2022, an officer in the Kechi (Kansas) Police Department accessed ALPR data shared with his department by the Wichita Police Department to stalk his wife.
The article concludes that public safety agencies should “collect only the data they need for actual criminal investigations.
“They must never store more data than they can adequately protect within their limited resources, or they must keep the public safe from data breaches by not collecting the data at all.”
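The retention principle the EFF argues for can be sketched as a simple purge rule: keep a plate read only while it falls inside a short retention window or is linked to an active investigation. This is a hypothetical illustration of data minimization, not any agency’s actual system; the `PlateRead` fields and the 30-day window are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class PlateRead:
    plate: str
    seen_at: datetime
    case_id: Optional[str] = None  # set only when the read is tied to an investigation

# Illustrative window; real retention policies vary by jurisdiction.
RETENTION_WINDOW = timedelta(days=30)

def purge_stale_reads(reads: List[PlateRead], now: datetime) -> List[PlateRead]:
    """Keep a read only if it is recent or linked to an active case."""
    return [
        r for r in reads
        if r.case_id is not None or now - r.seen_at <= RETENTION_WINDOW
    ]
```

Run on a schedule, a rule like this bounds how much sensitive location history a breach of the system could ever expose.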