Meta Is Tagging Real Photos As ‘Made With AI,’ Say Photographers

Since May, Meta has been labeling photos created with AI tools on its social networks to help users better identify the content they’re consuming. However, as TechCrunch’s Ivan Mehta reports, this approach has faced criticism because many photos not created using AI tools have been incorrectly labeled, prompting Meta to reevaluate its labeling strategy to better reflect the actual use of AI in images. From the report: There are plenty of examples of Meta automatically attaching the label to photos that were not created through AI. For example, this photo of Kolkata Knight Riders winning the Indian Premier League Cricket tournament. Notably, the label is only visible on the mobile apps and not on the web. Plenty of other photographers have raised concerns over their images having been wrongly tagged with the “Made with AI” label. Their point is that merely editing a photo with a tool should not subject it to the label.

Former White House photographer Pete Souza said in an Instagram post that one of his photos was tagged with the new label. Souza told TechCrunch in an email that Adobe changed how its cropping tool works and you have to “flatten the image” before saving it as a JPEG image. He suspects that this action has triggered Meta’s algorithm to attach this label. “What’s annoying is that the post forced me to include the ‘Made with AI’ even though I unchecked it,” Souza told TechCrunch.

Meta would not answer TechCrunch’s questions on the record about Souza’s experience or about other photographers who said their posts were incorrectly tagged. However, after the story was published, Meta said it is evaluating its approach so that its labels reflect the amount of AI used in an image. “Our intent has always been to help people know when they see content that has been made with AI. We are taking into account recent feedback and continue to evaluate our approach so that our labels reflect the amount of AI used in an image,” a Meta spokesperson told TechCrunch. “For now, Meta provides no separate labels to indicate if a photographer used a tool to clean up their photo, or used AI to create it,” notes TechCrunch. “For users, it might be hard to understand how much AI was involved in a photo.”

“Meta’s label specifies that ‘Generative AI may have been used to create or edit content in this post’ — but only if you tap on the label. Despite this approach, there are plenty of photos on Meta’s platforms that are clearly AI-generated, and Meta’s algorithm hasn’t labeled them.”

Read more of this story at Slashdot.

Why Going Cashless Has Turned Sweden Into a High-Crime Nation

An anonymous reader quotes a report from Fortune: Ellen Bagley was delighted when she made her first sale on a popular second-hand clothing app, but just a few minutes later, the thrill turned to shock as the 20-year-old from Linkoping in Sweden discovered she’d been robbed. Everything seemed normal when Bagley received a direct message on the platform, which asked her to verify personal details to complete the deal. She clicked the link, which fired up BankID — the ubiquitous digital authorization system used by nearly all Swedish adults. After receiving a couple of error messages, she started thinking something was wrong, but it was already too late. Over 10,000 Swedish kronor ($1,000) had been siphoned from her account and the thieves disappeared into the digital shadows. “The fraudsters are so skilled at making things look legitimate,” said Bagley, who was born after BankID was created. “It’s not easy” to identify scams. Although financial crime has garnered fewer headlines than a surge in gang-related gun violence, it’s become a growing risk for the country. Beyond its borders, Sweden is an important test case on fighting cashless crime because it’s gone further on ditching paper money than almost any other country in Europe.

Online fraud and digital crime in Sweden have surged, with criminals taking 1.2 billion kronor in 2023 through scams like the one Bagley fell for, double the 2021 figure. Law-enforcement agencies estimate that Sweden’s criminal economy could be as large as 2.5% of the country’s gross domestic product. To counter the digital crime spree, Swedish authorities have put pressure on banks to tighten security measures and make things harder for tech-savvy criminals, but it’s a delicate balancing act. Going too far could slow down the economy, while doing too little erodes trust and damages legitimate businesses in the process. Using complex webs of fake companies and forging documents to gain access to Sweden’s welfare system, sophisticated fraudsters have made Sweden a “Silicon Valley for criminal entrepreneurship,” said Daniel Larson, a senior economic crime prosecutor. While the shock of armed violence has grabbed public attention — the nation’s gun-homicide rate tripled between 2012 and 2022 — economic crime underlies gang activity and needs to be tackled as aggressively, he added. “That has been a strategic mistake,” Larson said. “This profit-generating crime is what’s fueling organized crime and, in some cases, leads to these conflicts.”

Sweden’s switch to electronic cash started after a surge of armed robberies in the 1990s, and by 2022, only 8% of Swedes said they had used cash for their latest purchase, according to a central bank survey. Along with neighboring Norway, Sweden has Europe’s lowest number of ATMs per capita, according to the IMF. The prevalence of BankID plays a role in Sweden’s vulnerability. The system works like an online signature. Once used, it’s considered a done deal and the transaction gets executed immediately. It was designed by Sweden’s banks to make electronic payments even quicker and easier than handing over a stack of bills. Since its original rollout in 2001, it’s become part of everyday Swedish life. On average, the service — which requires a six-digit code, a fingerprint or a face scan for authentication — is used more than twice a day by every adult Swede and is involved in everything from filing tax returns to paying for bus tickets. Originally intended as a product by banks for their customers, its use exploded in 2005 after Sweden’s tax agency adopted the technology as an identification method for tax returns, giving it the government’s official seal of approval. The launch of BankID on mobile phones in 2010 increased usage even further, along with a public perception that associated cash with criminality. The country’s central bank has acknowledged that some of those connotations may have gone too far. “We have to be very clear that there are still honest people using cash,” Riksbank Governor Erik Thedeen told Bloomberg.

Read more of this story at Slashdot.

EFF: New License Plate Reader Vulnerabilities Prove The Tech Itself is a Public Safety Threat

Automated license plate readers “pose risks to public safety,” argues the EFF, “that may outweigh the crimes they are attempting to address in the first place.”

When law enforcement uses automated license plate readers (ALPRs) to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats. The Cybersecurity and Infrastructure Security Agency (CISA), a component of the U.S. Department of Homeland Security, released an advisory last week that should be a wake-up call to the thousands of local government agencies around the country that use ALPRs to surveil the travel patterns of their residents by scanning their license plates and “fingerprinting” their vehicles. The bulletin outlines seven vulnerabilities in Motorola Solutions’ Vigilant ALPRs, including missing encryption and insufficiently protected credentials…
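To make the “insufficiently protected credentials” vulnerability class concrete: it generally refers to storing passwords in plaintext or in a reversible form, so that a single breach exposes them directly. A minimal sketch of the standard mitigation — this is an illustrative example, not Vigilant’s actual code — stores only a salted, slow key-derivation hash of each credential, using Python’s standard library:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # a commonly recommended PBKDF2-SHA256 work factor

def hash_credential(password: str) -> tuple[bytes, bytes]:
    """Derive a salted PBKDF2-HMAC-SHA256 digest; store (salt, digest), never the password."""
    salt = os.urandom(16)  # unique per credential, so identical passwords hash differently
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_credential(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

With this scheme, a stolen database yields only salts and digests; an attacker must brute-force each password at the full iteration cost, which is exactly the protection the advisory found missing.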

Unlike location data a person shares with, say, GPS-based navigation app Waze, ALPRs collect and store this information without consent and there is very little a person can do to have this information purged from these systems… Because drivers don’t have control over ALPR data, the onus for protecting the data lies with the police and sheriffs who operate the surveillance and the vendors that provide the technology. It’s a general tenet of cybersecurity that you should not collect and retain more personal data than you are capable of protecting. Perhaps ironically, a Motorola Solutions cybersecurity specialist wrote an article in Police Chief magazine this month acknowledging that public safety agencies “are often challenged when it comes to recruiting and retaining experienced cybersecurity personnel,” even though “the potential for harm from external factors is substantial.” That partially explains why more than 125 law enforcement agencies reported a data breach or cyberattacks between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks “targeting U.S. public safety organizations increased by 142 percent” in 2023.

Yet, the temptation to “collect it all” continues to overshadow the responsibility to “protect it all.” What makes the latest CISA disclosure even more outrageous is that it is at least the third time in the last decade that major security vulnerabilities have been found in ALPRs… If there’s one positive thing we can say about the latest Vigilant vulnerability disclosures, it’s that for once a government agency identified and reported the vulnerabilities before they could do damage… The Michigan Cyber Command Center found a total of seven vulnerabilities in Vigilant devices, two of medium severity and five of high severity…
But a data breach isn’t the only way that ALPR data can be leaked or abused. In 2022, an officer in the Kechi (Kansas) Police Department accessed ALPR data shared with his department by the Wichita Police Department to stalk his wife.
The article concludes that public safety agencies should “collect only the data they need for actual criminal investigations.

“They must never store more data than they adequately protect within their limited resources, or they must keep the public safe from data breaches by not collecting the data at all.”

Read more of this story at Slashdot.