Microsoft President: ‘You Can’t Believe Every Video You See or Audio You Hear’

“We’re currently witnessing a rapid expansion in the abuse of these new AI tools by bad actors,” writes Microsoft president Brad Smith, “including through deepfakes based on AI-generated video, audio, and images.

“This trend poses new threats for elections, financial fraud, harassment through nonconsensual pornography, and the next generation of cyber bullying.” Microsoft found its own tools being used in a recently publicized episode, and Smith writes that “We need to act with urgency to combat all these problems.”

Microsoft’s blog post says they’re “committed as a company to a robust and comprehensive approach,” citing six different areas of focus:

A strong safety architecture. This includes “ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system… based on strong and broad-based data analysis.”
Durable media provenance and watermarking. (“Last year at our Build 2023 conference, we announced media provenance capabilities that use cryptographic methods to mark and sign AI-generated content with metadata about its source and history.”) A sketch of this sign-and-verify pattern follows the list.
Safeguarding our services from abusive content and conduct. (“We are committed to identifying and removing deceptive and abusive content” hosted on services including LinkedIn and Microsoft’s Gaming network.)
Robust collaboration across industry and with governments and civil society. This includes “others in the tech sector” and “proactive efforts” with both civil society groups and “appropriate collaboration with governments.”
Modernized legislation to protect people from the abuse of technology. “We look forward to contributing ideas and supporting new initiatives by governments around the world.”
Public awareness and education. “We need to help people learn how to spot the differences between legitimate and fake content, including with watermarking. This will require new public education tools and programs, including in close collaboration with civil society and leaders across society.”
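The blog post doesn't go into implementation details (Microsoft's provenance work builds on the C2PA standard), but the core sign-and-verify pattern is straightforward to sketch. The following is a rough illustration, not Microsoft's actual implementation: the manifest fields and key handling are hypothetical, and a production system would embed the signed manifest in the media file itself and tie the signing key to a certificate chain. It uses the third-party `cryptography` package.

```python
# Minimal sketch of signing provenance metadata for a piece of media.
# NOT Microsoft's implementation; fields and key handling are hypothetical.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A provenance manifest: metadata about the content's source and history.
manifest = {
    "generator": "example-image-model",  # hypothetical tool name
    "created": "2024-02-13T12:00:00Z",
    "ai_generated": True,
    "edit_history": [],
}
payload = json.dumps(manifest, sort_keys=True).encode()

# The issuing tool signs the manifest; in practice the key would be
# certified by a trust authority rather than generated ad hoc.
signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(payload)

# Any consumer holding the issuer's public key can check that the
# metadata is authentic and unmodified.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, payload)
    print("provenance metadata verified")
except InvalidSignature:
    print("metadata altered or signature invalid")
```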

Thanks to long-time Slashdot reader theodp for sharing the article.

Read more of this story at Slashdot.

Will ‘Precision Agriculture’ Be Harmful to Farmers?

Modern U.S. farming is being transformed by precision agriculture, writes Paul Roberts, the founder of securepairs.org and Editor in Chief at Security Ledger.

There are autonomous tractors and “smart spraying” systems that use AI-powered cameras to identify weeds, just for starters. “Among the critical components of precision agriculture: Internet- and GPS-connected agricultural equipment, highly accurate remote sensors, ‘big data’ analytics and cloud computing…”

As with any technological revolution, however, there are both “winners” and “losers” in the emerging age of precision agriculture… Precision agriculture, once broadly adopted, promises to further reduce the need for human labor to run farms. (Autonomous equipment means you no longer even need drivers!) However, the risks it poses go well beyond a reduction in the agricultural work force. First, as the USDA notes on its website: the scale and high capital costs of precision agriculture technology tend to favor large, corporate producers over smaller farms. Then there are the systemic risks to U.S. agriculture of an increasingly connected and consolidated agriculture sector, with a few major OEMs having the ability to remotely control and manage vital equipment on millions of U.S. farms… (Listen to my podcast interview with the hacker Sick Codes, who reverse-engineered a John Deere display to run the video game Doom, for insights into the company’s internal struggles with cybersecurity.)

Finally, there are the reams of valuable and proprietary environmental and operational data that farmers collect, store and leverage to squeeze the maximum productivity out of their land. For centuries, such information resided in farmers’ heads, or on written or (more recently) digital records that they owned and controlled exclusively, typically passing that knowledge and data down to succeeding generations of farm owners. Precision agriculture technology greatly expands the scope, and granularity, of that data. But in doing so, it also wrests it from the farmer’s control and shares it with equipment manufacturers and service providers — often without the explicit understanding of the farmers themselves, and almost always without monetary compensation to the farmer for the data itself. In fact, the Federal Government is so concerned about farm data that it included a section (1619) on “information gathering” in the latest farm bill.
Over time, this massive transfer of knowledge from individual farmers or collectives to multinational corporations risks beggaring farmers by robbing them of one of their most vital assets, their data, and turning them into little more than passive caretakers of automated equipment that is managed and controlled by, and accountable to, distant corporate masters.

Weighing in is Kevin Kenney, a vocal advocate for the “right to repair” agricultural equipment (and also an alternative fuel systems engineer at Grassroots Energy LLC). In the interview, he warns about the dangers of tying repairs to factory-installed firmware, and argues that it’s the long-time farmer’s “trade secrets” that are really being harvested today. The ultimate beneficiary could end up being the current “cabal” of tractor manufacturers.

“While we can all agree that it’s coming…the question is who will own these robots?”

First, we need to acknowledge that there are existing laws on the books which, for whatever reason, are not being enforced. The FTC should immediately start an investigation into John Deere and the rest of the ‘Tractor Cabal’ to see to what extent farmers’ farm data security and privacy are being compromised. This directly affects national food security because if thousands, or tens of thousands, of tractors are hacked and disabled or their data is lost, crops left to rot in the fields would lead to bare shelves at the grocery store… I think our universities have also been delinquent in grasping and warning farmers about the data theft being perpetrated on farmers’ operations throughout the United States and other countries by makers of precision agricultural equipment.

Thanks to long-time Slashdot reader chicksdaddy for sharing the article.

Read more of this story at Slashdot.

New Bill Would Let Defendants Inspect Algorithms Used Against Them In Court

Lauren Feiner reports via The Verge: Reps. Mark Takano (D-CA) and Dwight Evans (D-PA) on Thursday reintroduced the Justice in Forensic Algorithms Act, which would allow defendants to access the source code of software used to analyze evidence in their criminal proceedings. It would also require the National Institute of Standards and Technology (NIST) to create testing standards for forensic algorithms, which software used by federal enforcers would need to meet.

The bill would act as a check on unintended outcomes that could be created by using technology to help solve crimes. Academic research has highlighted the ways human bias can be built into software and how facial recognition systems often struggle to differentiate Black faces, in particular. The use of algorithms to make consequential decisions in many different sectors, including both crime-solving and health care, has raised alarms for consumers and advocates as a result of such research.

Takano acknowledged that gaining or hiring the deep expertise needed to analyze the source code might not be possible for every defendant. But requiring NIST to create standards for the tools could at least give them a starting point for understanding whether a program meets the basic standards. Takano introduced previous iterations of the bill in 2019 and 2021, but they were not taken up by a committee.

Read more of this story at Slashdot.

Scientists Propose AI Apocalypse Kill Switches

A paper (PDF) from researchers at the University of Cambridge, supported by voices from numerous academic institutions along with several from OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions — including those designed to improve visibility and limit the sale of AI accelerators — are already playing out at a national level. Last year US president Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you’re not familiar, “dual-use” refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent “know-your-customer” policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive build-up of ballistic missiles. While valuable, they warn that executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we’ve previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they’ve left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.
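The paper proposes the registry idea without a concrete schema. As a rough illustration only (every field below is hypothetical, not from the paper), a lifecycle record might pair a chip's unique identifier with its chain of custody:

```python
# Illustrative sketch of a chip-registry record; the paper proposes the
# concept but no schema, so all fields and names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class TransferEvent:
    date: str     # ISO date of the sale or transfer
    seller: str
    buyer: str
    country: str  # destination jurisdiction


@dataclass
class ChipRecord:
    chip_id: str  # unique identifier incorporated into the chip itself
    model: str
    origin_country: str
    transfers: list = field(default_factory=list)  # list of TransferEvent

    def current_holder(self) -> str:
        """Owner after the most recent recorded transfer."""
        return self.transfers[-1].buyer if self.transfers else "manufacturer"


record = ChipRecord(chip_id="ACCEL-00001", model="H100-class", origin_country="US")
record.transfers.append(TransferEvent("2024-03-01", "OEM", "cloud-provider", "DE"))
print(record.current_holder())  # cloud-provider
```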

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent chips being used in malicious applications. […] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: “Specialized co-processors that sit on the chip could hold a cryptographically signed digital ‘certificate,’ and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance.” In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn’t without risk: implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.
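The quoted passage describes the mechanism only at a high level. Below is a minimal sketch of the expiring-license check, assuming Ed25519 signatures via the `cryptography` package; in reality this logic would live in a secure on-chip co-processor rather than software, and all names here are hypothetical.

```python
# Sketch of the expiring, cryptographically signed on-chip license the
# researchers describe. Purely illustrative; names are hypothetical.
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

regulator_key = Ed25519PrivateKey.generate()       # held by the regulator
REGULATOR_PUBLIC_KEY = regulator_key.public_key()  # provisioned into the chip


def issue_license(expires_at: float):
    """Regulator signs a policy blob carrying an expiry timestamp."""
    blob = json.dumps({"expires_at": expires_at, "policy": "full-speed"}).encode()
    return blob, regulator_key.sign(blob)


def chip_permits_operation(blob: bytes, signature: bytes) -> bool:
    """On-chip check: the license must be genuinely signed AND unexpired;
    otherwise the chip halts or throttles itself."""
    try:
        REGULATOR_PUBLIC_KEY.verify(signature, blob)
    except InvalidSignature:
        return False
    return json.loads(blob)["expires_at"] > time.time()


blob, sig = issue_license(expires_at=time.time() + 3600)  # renewed periodically
print(chip_permits_operation(blob, sig))        # True while license is fresh
print(chip_permits_operation(blob, b"forged"))  # False: bad signature
```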

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. “Nuclear weapons use similar mechanisms called permissive action links,” they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI, however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they’d first need to get authorization to do so. The researchers observe that, though potent, this tool could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn’t always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea is that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as “allocation.”
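As a back-of-the-envelope illustration of the multi-party sign-off idea (the threshold, signer names, and quorum size are all hypothetical, not from the paper):

```python
# Sketch of an M-of-N authorization gate for large training runs, loosely
# analogous to permissive action links. All values are hypothetical.
COMPUTE_THRESHOLD_FLOPS = 1e26  # hypothetical regulatory cutoff
REQUIRED_APPROVALS = 2          # quorum size
AUTHORIZED_SIGNERS = {"regulator", "cloud_provider", "independent_auditor"}


def may_start_training(planned_flops: float, approvals: set) -> bool:
    """Small jobs run freely; jobs over the threshold need a quorum of
    distinct, authorized signers before the scheduler accepts them."""
    if planned_flops <= COMPUTE_THRESHOLD_FLOPS:
        return True
    return len(approvals & AUTHORIZED_SIGNERS) >= REQUIRED_APPROVALS


print(may_start_training(1e24, set()))                                 # True
print(may_start_training(5e26, {"regulator"}))                         # False
print(may_start_training(5e26, {"regulator", "independent_auditor"}))  # True
```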

Read more of this story at Slashdot.

Indian Government Moves To Ban ProtonMail After Bomb Threat

Following a hoax bomb threat sent via ProtonMail to schools in Chennai, India, police in the state of Tamil Nadu put in a request to block the encrypted email service in the region since they have been unable to identify the sender. According to Hindustan Times, that request was granted today. From the report: The decision to block Proton Mail was taken at a meeting of the 69A blocking committee on Wednesday afternoon. Under Section 69A of the IT Act, the designated officer, on approval by the IT Secretary and at the recommendation of the 69A blocking committee, can issue orders to any intermediary or a government agency to block any content for national security, public order and allied reasons. HT could not ascertain if a blocking order will be issued to Apple and Google to block the Proton Mail app. The final order to block the website has not yet been sent to the Department of Telecommunications (DoT), but the Ministry of Electronics and Information Technology (MeitY) has flagged the issue with the DoT.

During the meeting, the nodal officer representing the Tamil Nadu government submitted that a bomb threat was sent to multiple schools using ProtonMail, HT has learnt. The police attempted to trace the IP address of the sender but to no avail. They also tried to seek help from Interpol but that did not materialise either, the nodal officer said. During the meeting, HT has learnt, MeitY representatives noted that getting information from Proton Mail, on other criminal matters, not necessarily linked to Section 69A related issues, is a recurrent problem.

Although Proton Mail is end-to-end encrypted, which means the content of the emails cannot be intercepted and can only be seen by the sender and recipient if both are using Proton Mail, its privacy policy states that due to the nature of the SMTP protocol, certain email metadata — including sender and recipient email addresses, the IP address incoming messages originated from, attachment name, message subject, and message sent and received times — is available to the company. “We condemn a potential block as a misguided measure that only serves to harm ordinary people. Blocking access to Proton is an ineffective and inappropriate response to the reported threats. It will not prevent cybercriminals from sending threats with another email service and will not be effective if the perpetrators are located outside of India,” said ProtonMail in a statement.

“We are currently working to resolve this situation and are investigating how we can best work together with the Indian authorities to do so. We understand the urgency of the situation and are completely clear that our services are not to be used for illegal purposes. We routinely remove users who are found to be doing so and are willing to cooperate wherever possible within international cooperation agreements.”
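The metadata caveat above is inherent to email itself: SMTP headers travel outside any end-to-end encryption of the body. A generic illustration using Python's standard library (the message and addresses are made up, and no Proton internals are assumed):

```python
# Why email metadata stays visible even when the body is end-to-end
# encrypted: headers travel in the clear, only the body is ciphertext.
# The message below is entirely made up.
from email import message_from_string

raw_message = """\
From: sender@example.com
To: recipient@example.org
Subject: quarterly report
Date: Wed, 14 Feb 2024 09:00:00 +0000

-----BEGIN PGP MESSAGE-----
...ciphertext that only the recipient's key can decrypt...
-----END PGP MESSAGE-----
"""

msg = message_from_string(raw_message)

# The provider, or anyone relaying the message, can read these headers:
for header in ("From", "To", "Subject", "Date"):
    print(f"{header}: {msg[header]}")

# Only the body is opaque without the recipient's private key:
print("body is ciphertext:", "BEGIN PGP MESSAGE" in msg.get_payload())
```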

Read more of this story at Slashdot.

Apple Confirms iOS 17.4 Removes Home Screen Web Apps In the EU

Apple has now offered an explanation for why iOS 17.4 removes support for Home Screen web apps in the European Union. Spoiler: it’s because of the EU’s Digital Markets Act, which Apple must comply with by early March. 9to5Mac reports: Last week, iPhone users in the European Union noticed that they were no longer able to install and run web apps on their iPhone’s Home Screen in iOS 17.4. Apple has added a number of features over the years to improve support for progressive web apps on iPhone. For example, iOS 16.4 allowed PWAs to deliver push notifications with icon badges. One change in iOS 17.4 is that the iPhone now supports alternative browser engines in the EU, allowing companies, for the first time, to build browsers that don’t use Apple’s WebKit engine. Apple says that this change, required by the Digital Markets Act, is why it has been forced to remove Home Screen web apps support in the European Union.

Apple explains that it would have to build an “entirely new integration architecture that does not currently exist in iOS” to address the “complex security and privacy concerns associated with web apps using alternative browser engines.” This work “was not practical to undertake given the other demands of the DMA and the very low user adoption of Home Screen web apps,” Apple explains. “And so, to comply with the DMA’s requirements, we had to remove the Home Screen web apps feature in the EU.” “EU users will be able to continue accessing websites directly from their Home Screen through a bookmark with minimal impact to their functionality,” Apple continues.

It’s understandable that Apple wouldn’t offer support for Home Screen web apps for third-party browsers. But why did it also remove support for Home Screen web apps in Safari? Unfortunately, that’s another side effect of the Digital Markets Act. The DMA requires parity among browsers, meaning that Apple can’t favor Safari and WebKit over third-party browser engines. Therefore, because it can’t offer Home Screen web apps support for third-party browsers, it also can’t offer support via Safari. […] iOS 17.4 is currently available to developers and public beta testers, and is slated for release in early March. The full explanation was published on Apple’s developer website today.

Read more of this story at Slashdot.

Leaked Emails Show Hugo Awards Self-Censoring To Appease China

samleecole shares a report from 404 Media: A trove of leaked emails shows how administrators of one of the most prestigious awards in science fiction censored themselves because the awards ceremony was being held in China. Earlier this month, the Hugo Awards came under fire with accusations of censorship when several authors were excluded from the awards, including Neil Gaiman, R. F. Kuang, Xiran Jay Zhao, and Paul Weimer. These authors’ works had earned enough votes to make them finalists, but were deemed “ineligible” for reasons not disclosed by Hugo administrators. The Hugo Awards are one of the largest and most important science fiction awards. […]

The emails, which show the process of compiling spreadsheets of the top 10 works in each category and checking them for “sensitive political nature” to see if they were “an issue in China,” were obtained by fan writer Chris M. Barkley and author Jason Sanford, and published on fandom news site File 770 and Sanford’s Patreon, where they uploaded the full PDF of the emails. They were provided by Hugo Awards administrator Diane Lacey. Lacey confirmed in an email to 404 Media that she was the source of the emails. “In addition to the regular technical review, as we are happening in China and the *laws* we operate under are different…we need to highlight anything of a sensitive political nature in the work,” Dave McCarty, head of the 2023 awards jury, directed administrators in an email. “It’s not necessary to read everything, but if the work focuses on China, taiwan, tibet, or other topics that may be an issue *in* China…that needs to be highlighted so that we can determine if it is safe to put it on the ballot or if the law will require us to make an administrative decision about it.”

The email replies to this directive show administrators combing through authors’ social media presences and public travel histories, including from before they were nominated for the 2023 awards, and their writing and bodies of work beyond just what they were nominated for. Among dozens of other posts and writings, they note Weimer’s negative comments about the Chinese government in a Patreon post and misspell Zhao’s name and work (calling their novel Iron Widow “The Iron Giant”). About author Naseem Jamnia, an administrator allegedly wrote, “Author openly describes themselves as queer, nonbinary, trans, (And again, good for them), and frequently writes about gender, particularly non-binary. The cited work also relies on these themes. I include them because I don’t know how that will play in China. (I suspect less than well.)”

“As far as our investigation is concerned there was no reason to exclude the works of Kuang, Gaiman, Weimer or Xiran Jay Zhao, save for being viewed as being undesirable in the view of the Hugo Award admins which had the effect of being the proxies [of the] Chinese government,” Sanford and Barkley wrote. In conjunction with the email trove, Sanford and Barkley also released an apology letter from Lacey, in which she explains some of her role in the awards vetting process and also blames McCarty for his role in the debacle. McCarty, along with board chair Kevin Standlee, resigned earlier this month.

Read more of this story at Slashdot.

Sony’s PS5 Enters ‘Latter Stage of Its Life Cycle’

After missing its sales target in the last quarter, Sony says it plans to emphasize the PlayStation 5’s profitability over unit sales as the console approaches its fourth birthday. “Looking ahead, PS5 will enter the latter stage of its life cycle,” said Sony senior vice president Naomi Matsuoka. “As such, we will put more emphasis on the balance between profitability and sales. For this reason, we expect the annual sales pace of PS5 hardware will start falling from the next fiscal year.” Sony also said it has no plans to release “any new major existing franchise titles” in its next fiscal year. The Verge reports: Sony now expects to sell 4 million fewer PS5 consoles in its 2023 fiscal year ending March 31st compared to previous projections, Bloomberg reports. The revision came as part of today’s third-quarter earnings release which saw Sony lower the PS5 sales forecast from the 25 million consoles it expected to sell down to 21 million.

While PS5 sales were up in Sony’s third quarter, increasing to 8.2 million units from 6.3 million in the same quarter the previous year, Bloomberg notes that this was roughly a million units lower than it had previously projected. That’s despite the release of the big first-party title Spider-Man 2, strong sales of third-party titles, and the launch of a new slimmer PS5 in November.

In its third quarter, Sony’s gaming revenue was up 16 percent versus the same period the previous year, sitting at 1.4 trillion yen (around $9.3 billion), but operating income was down 26 percent to 86.1 billion yen (around $572 million) due to promotions during the quarter, which ended on December 31st.

Read more of this story at Slashdot.