New Bill Would Let Defendants Inspect Algorithms Used Against Them In Court

Lauren Feiner reports via The Verge: On Thursday, Reps. Mark Takano (D-CA) and Dwight Evans (D-PA) reintroduced the Justice in Forensic Algorithms Act, which would allow defendants to access the source code of software used to analyze evidence in their criminal proceedings. It would also require the National Institute of Standards and Technology (NIST) to create testing standards for forensic algorithms, which software used by federal law enforcement would need to meet.

The bill would act as a check on unintended outcomes that could be created by using technology to help solve crimes. Academic research has highlighted the ways human bias can be built into software, and how facial recognition systems in particular often struggle to distinguish Black faces. In light of such research, the use of algorithms to make consequential decisions in many different sectors, from crime-solving to health care, has raised alarms among consumers and advocates.

Takano acknowledged that gaining or hiring the deep expertise needed to analyze the source code might not be possible for every defendant. But requiring NIST to create standards for the tools could at least give defendants a starting point for understanding whether a program meets basic standards. Takano introduced previous iterations of the bill in 2019 and 2021, but they were not taken up by a committee.

Read more of this story at Slashdot.

Scientists Propose AI Apocalypse Kill Switches

A paper (PDF) from researchers at the University of Cambridge, with contributors from numerous academic institutions as well as OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions — including those designed to improve visibility and limit the sale of AI accelerators — are already playing out at a national level. Last year US president Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you’re not familiar, “dual-use” refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent “know-your-customer” policies to prevent persons or countries of concern from getting around export restrictions.

This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive buildup of ballistic missiles. While valuable, these reporting requirements, the researchers warn, risk invading customer privacy and could even lead to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we’ve previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they’ve left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.
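The registry idea above can be sketched in a few lines: a unique identifier is assigned at manufacture, and every change of custody is appended to the chip's lifecycle record, so a part with no registered provenance stands out. This is a hypothetical illustration, not a design from the paper; all names (`ChipRegistry`, `register`, `transfer`) are invented.

```python
from dataclasses import dataclass, field

@dataclass
class ChipRecord:
    chip_id: str
    country_of_origin: str
    custody_chain: list = field(default_factory=list)  # (holder, country) pairs

class ChipRegistry:
    """Toy global registry tracking accelerators across their lifecycle."""

    def __init__(self):
        self._records = {}

    def register(self, chip_id, origin, first_holder):
        self._records[chip_id] = ChipRecord(chip_id, origin, [(first_holder, origin)])

    def transfer(self, chip_id, new_holder, new_country):
        # An attempted transfer of an unregistered ID is the smuggling
        # signal the authors describe: the part has no traceable provenance.
        if chip_id not in self._records:
            raise KeyError(f"unregistered chip: {chip_id}")
        self._records[chip_id].custody_chain.append((new_holder, new_country))

    def provenance(self, chip_id):
        return self._records[chip_id].custody_chain

registry = ChipRegistry()
registry.register("GPU-0001", "US", "FabCo")
registry.transfer("GPU-0001", "CloudCo", "DE")
print(registry.provenance("GPU-0001"))  # [('FabCo', 'US'), ('CloudCo', 'DE')]
```

A real registry would of course need tamper-resistant identifiers burned into the silicon; the data structure only captures the bookkeeping side of the proposal.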

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent the chips’ use in malicious applications. […] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: “Specialized co-processors that sit on the chip could hold a cryptographically signed digital ‘certificate,’ and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance.” In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn’t without risk. The implication is that, if implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.
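The licensing scheme quoted above boils down to two checks at boot: is the certificate's signature legitimate, and is it still within its validity window? A minimal sketch follows, under loud assumptions: a real design would use asymmetric signatures verified inside a secure element, whereas this demo uses HMAC with a shared key, and all field names are invented.

```python
import hmac, hashlib, json

REGULATOR_KEY = b"demo-key"  # stand-in for the regulator's signing key

def issue_license(chip_id, expires_at):
    """Regulator side: sign a (chip_id, expiry) claim."""
    payload = json.dumps({"chip_id": chip_id, "expires_at": expires_at}).encode()
    sig = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def chip_boot(chip_id, cert, now):
    """Chip side: verify signature, identity, and expiry at power-on."""
    expected = hmac.new(REGULATOR_KEY, cert["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["sig"]):
        return "disabled"      # illegitimate license: chip refuses to work
    claims = json.loads(cert["payload"])
    if claims["chip_id"] != chip_id or now > claims["expires_at"]:
        return "degraded"      # expired or mismatched: reduce performance
    return "full"

lic = issue_license("GPU-0001", expires_at=2000)
print(chip_boot("GPU-0001", lic, now=1000))  # full
print(chip_boot("GPU-0001", lic, now=3000))  # degraded
```

The "periodic renewal" in the paper corresponds to the regulator re-issuing the certificate with a later `expires_at` via firmware update; failing to renew is what quietly dials the chip down.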

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. “Nuclear weapons use similar mechanisms called permissive action links,” they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI, however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they’d first need to get authorization to do so. Though a potent tool, the researchers observe that this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn’t always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea is that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as “allocation.”
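The multi-party sign-off amounts to a quorum rule: training runs below some compute threshold proceed freely, while larger ones need approval from several independent parties before launch. A hedged sketch, where the threshold, quorum size, and party names are all invented for illustration:

```python
COMPUTE_THRESHOLD_FLOP = 1e26   # illustrative cutoff for "risky" training runs
REQUIRED_APPROVALS = 2          # quorum: distinct parties that must sign off

def may_launch(job_flop, approvals):
    """Return True if a training job may start.

    approvals: set of distinct authorizing parties, analogous to the
    multiple keys of a permissive action link.
    """
    if job_flop <= COMPUTE_THRESHOLD_FLOP:
        return True  # small runs need no quorum
    return len(set(approvals)) >= REQUIRED_APPROVALS

print(may_launch(1e24, approvals=set()))                            # True
print(may_launch(5e26, approvals={"cloud_provider"}))               # False
print(may_launch(5e26, approvals={"cloud_provider", "regulator"}))  # True
```

The backfire risk the researchers note lives in the first parameter: set the threshold too low, or make the quorum too hard to assemble, and benign work gets blocked alongside the risky kind.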

Read more of this story at Slashdot.

Indian Government Moves To Ban ProtonMail After Bomb Threat

Following a hoax bomb threat sent via ProtonMail to schools in Chennai, India, police in the state of Tamil Nadu put in a request to block the encrypted email service in the region since they have been unable to identify the sender. According to Hindustan Times, that request was granted today. From the report: The decision to block Proton Mail was taken at a meeting of the 69A blocking committee on Wednesday afternoon. Under Section 69A of the IT Act, the designated officer, on approval by the IT Secretary and at the recommendation of the 69A blocking committee, can issue orders to any intermediary or a government agency to block any content for national security, public order and allied reasons. HT could not ascertain if a blocking order will be issued to Apple and Google to block the Proton Mail app. The final order to block the website has not yet been sent to the Department of Telecommunications but the MeitY has flagged the issue with the DoT.

During the meeting, the nodal officer representing the Tamil Nadu government submitted that a bomb threat was sent to multiple schools using ProtonMail, HT has learnt. The police attempted to trace the IP address of the sender but to no avail. They also tried to seek help from Interpol but that did not materialise either, the nodal officer said. During the meeting, HT has learnt, MeitY representatives noted that getting information from Proton Mail on other criminal matters, not necessarily linked to Section 69A related issues, is a recurrent problem.

Although Proton Mail is end-to-end encrypted, meaning the content of emails cannot be intercepted and can be seen only by the sender and recipient (provided both use Proton Mail), its privacy policy states that, due to the nature of the SMTP protocol, certain email metadata is available to the company: sender and recipient email addresses, the IP address incoming messages originated from, attachment names, message subjects, and the times messages were sent and received. “We condemn a potential block as a misguided measure that only serves to harm ordinary people. Blocking access to Proton is an ineffective and inappropriate response to the reported threats. It will not prevent cybercriminals from sending threats with another email service and will not be effective if the perpetrators are located outside of India,” said ProtonMail in a statement.

“We are currently working to resolve this situation and are investigating how we can best work together with the Indian authorities to do so. We understand the urgency of the situation and are completely clear that our services are not to be used for illegal purposes. We routinely remove users who are found to be doing so and are willing to cooperate wherever possible within international cooperation agreements.”
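The metadata point above is worth making concrete: even when a message body is end-to-end encrypted, SMTP envelope and header fields travel in the clear, so the provider necessarily sees them. A small illustration, where the "ciphertext" is just a placeholder and nothing here reflects Proton's actual implementation:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "meeting notes"        # the subject line is metadata too
msg.set_content("<ciphertext blob>")    # only the body is opaque to the provider

# Header fields the provider can log regardless of body encryption:
visible = {k: msg[k] for k in ("From", "To", "Subject")}
print(visible)
```

This is why the Chennai police could plausibly ask Proton for a sender's IP address even though the threat's content was unreadable to the company.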

Read more of this story at Slashdot.

Apple Confirms iOS 17.4 Removes Home Screen Web Apps In the EU

Apple has now offered an explanation for why iOS 17.4 removes support for Home Screen web apps in the European Union. Spoiler: it’s because of the Digital Markets Act that went into effect last August. 9to5Mac reports: Last week, iPhone users in the European Union noticed that they were no longer able to install and run web apps on their iPhone’s Home Screen in iOS 17.4. Apple has added a number of features over the years to improve support for progressive web apps on iPhone. For example, iOS 16.4 allowed PWAs to deliver push notifications with icon badges. One change in iOS 17.4 is that the iPhone now supports alternative browser engines in the EU. This allows companies to build browsers that don’t use Apple’s WebKit engine for the first time. Apple says that this change, required by the Digital Markets Act, is why it has been forced to remove Home Screen web apps support in the European Union.

Apple explains that it would have to build an “entirely new integration architecture that does not currently exist in iOS” to address the “complex security and privacy concerns associated with web apps using alternative browser engines.” This work “was not practical to undertake given the other demands of the DMA and the very low user adoption of Home Screen web apps,” Apple explains. “And so, to comply with the DMA’s requirements, we had to remove the Home Screen web apps feature in the EU.” “EU users will be able to continue accessing websites directly from their Home Screen through a bookmark with minimal impact to their functionality,” Apple continues.

It’s understandable that Apple wouldn’t offer support for Home Screen web apps for third-party browsers. But why did it also remove support for Home Screen web apps in Safari? Unfortunately, that’s another side effect of the Digital Markets Act. The DMA requires parity among browsers, meaning that Apple can’t favor Safari and WebKit over third-party browser engines. Therefore, because it can’t offer Home Screen web app support for third-party browsers, it also can’t offer that support via Safari. […] iOS 17.4 is currently available to developers and public beta testers, and is slated for release in early March. The full explanation was published on Apple’s developer website today.

Read more of this story at Slashdot.

Leaked Emails Show Hugo Awards Self-Censoring To Appease China

samleecole shares a report from 404 Media: A trove of leaked emails shows how administrators of one of the most prestigious awards in science fiction censored themselves because the awards ceremony was being held in China. Earlier this month, the Hugo Awards came under fire with accusations of censorship when several authors were excluded from the awards, including Neil Gaiman, R. F. Kuang, Xiran Jay Zhao, and Paul Weimer. These authors’ works had earned enough votes to make them finalists, but were deemed “ineligible” for reasons not disclosed by Hugo administrators. The Hugo Awards are one of the largest and most important science fiction awards. […]

The emails, which show the process of compiling spreadsheets of the top 10 works in each category and checking them for “sensitive political nature” to see if they were “an issue in China,” were obtained by fan writer Chris M. Barkley and author Jason Sanford, and published on fandom news site File 770 and Sanford’s Patreon, where they uploaded the full PDF of the emails. They were provided to them by Hugo Awards administrator Diane Lacey. Lacey confirmed in an email to 404 Media that she was the source of the emails. “In addition to the regular technical review, as we are happening in China and the *laws* we operate under are different…we need to highlight anything of a sensitive political nature in the work,” Dave McCarty, head of the 2023 awards jury, directed administrators in an email. “It’s not necessary to read everything, but if the work focuses on China, taiwan, tibet, or other topics that may be an issue *in* China…that needs to be highlighted so that we can determine if it is safe to put it on the ballot of if the law will require us to make an administrative decision about it.”

The email replies to this directive show administrators combing through authors’ social media presences and public travel histories, including from before they were nominated for the 2023 awards, and their writing and bodies of work beyond just what they were nominated for. Among dozens of other posts and writings, they note Weimer’s negative comments about the Chinese government in a Patreon post and misspell Zhao’s name and work (calling their novel Iron Widow “The Iron Giant”). About author Naseem Jamnia, an administrator allegedly wrote, “Author openly describes themselves as queer, nonbinary, trans, (And again, good for them), and frequently writes about gender, particularly non-binary. The cited work also relies on these themes. I include them because I don’t know how that will play in China. (I suspect less than well.)”

“As far as our investigation is concerned there was no reason to exclude the works of Kuang, Gaiman, Weimer or Xiran Jay Zhao, save for being viewed as being undesirable in the view of the Hugo Award admins which had the effect of being the proxies Chinese government,” Sanford and Barkley wrote. In conjunction with the email trove, Sanford and Barkley also released an apology letter from Lacey, in which she explains some of her role in the awards vetting process and also blames McCarty for his role in the debacle. McCarty, along with board chair Kevin Standlee, resigned earlier this month.

Read more of this story at Slashdot.

Sony’s PS5 Enters ‘Latter Stage of Its Life Cycle’

After missing its sales target in the last quarter, Sony says it plans to emphasize the PlayStation 5’s profitability over unit sales as the console approaches its fourth birthday. “Looking ahead, PS5 will enter the latter stage of its life cycle,” said Sony senior vice president Naomi Matsuoka. “As such, we will put more emphasis on the balance between profitability and sales. For this reason, we expect the annual sales pace of PS5 hardware will start falling from the next fiscal year.” Sony also said it has no plans to release “any new major existing franchise titles” in its next fiscal year. The Verge reports: Sony now expects to sell 4 million fewer PS5 consoles in its 2023 fiscal year ending March 31st compared to previous projections, Bloomberg reports. The revision came as part of today’s third-quarter earnings release which saw Sony lower the PS5 sales forecast from the 25 million consoles it expected to sell down to 21 million.

While PS5 sales were up in Sony’s third quarter, increasing to 8.2 million units from 6.3 million in the same quarter the previous year, Bloomberg notes that this was roughly a million units lower than it had previously projected. That’s despite the release of the big first-party title Spider-Man 2, strong sales of third-party titles, and the launch of a new slimmer PS5 in November.

In its third quarter, Sony’s gaming revenue was up 16 percent versus the same period the previous year, sitting at 1.4 trillion yen (around $9.3 billion), but operating income was down 26 percent to 86.1 billion yen (around $572 million) due to promotions in the third quarter ending on December 31st.

Read more of this story at Slashdot.

NYC Sues Social Media Companies Over Youth Mental Health Crisis

New York City Mayor Eric Adams announced a lawsuit against four of the nation’s largest social media companies, accusing them of fueling a “national youth mental health crisis.” From a report: The lawsuit was filed to hold TikTok, Instagram, Facebook, Snapchat, and YouTube accountable for their damaging influence on the mental health of children, Adams said. The lawsuit, filed in California Superior Court, alleged the companies intentionally designed their platforms to manipulate and addict children and teens to social media applications. The lawsuit pointed to the use of algorithms to generate feeds that keep users on the platforms longer and encourage compulsive use.

“Over the past decade, we have seen just how addictive and overwhelming the online world can be, exposing our children to a non-stop stream of harmful content and fueling our national youth mental health crisis,” Adams said. “Our city is built on innovation and technology, but many social media platforms end up endangering our children’s mental health, promoting addiction, and encouraging unsafe behavior.” The lawsuit accused the social media companies of manipulating users by making them feel compelled to respond to one positive action with another positive action.

“These platforms take advantage of reciprocity by, for example, automatically telling the sender when their message was seen or sending notifications when a message was delivered, encouraging teens to return to the platform again and again and perpetuating online engagement and immediate responses,” the lawsuit said. The city is joining hundreds of school districts across the nation in filing litigation to force the tech companies to change their behavior and recover the costs of addressing the public health threat.

Read more of this story at Slashdot.

Largest Text-To-Speech AI Model Yet Shows ‘Emergent Abilities’

Devin Coldewey reports via TechCrunch: Researchers at Amazon have trained the largest text-to-speech model yet, which they claim exhibits “emergent” qualities improving its ability to speak even complex sentences naturally. The breakthrough could be what the technology needs to escape the uncanny valley. These models were always going to grow and improve, but the researchers specifically hoped to see the kind of leap in ability that we observed once language models got past a certain size. For reasons unknown to us, once LLMs grow past a certain point, they start being way more robust and versatile, able to perform tasks they weren’t trained to do. That is not to say they are gaining sentience or anything, just that past a certain point their performance on certain conversational AI tasks hockey sticks. The team at Amazon AGI — no secret what they’re aiming at — thought the same might happen as text-to-speech models grew as well, and their research suggests this is in fact the case.

The new model is called Big Adaptive Streamable TTS with Emergent abilities, which they have contorted into the abbreviation BASE TTS. The largest version of the model uses 100,000 hours of public domain speech, 90% of which is in English, the remainder in German, Dutch and Spanish. At 980 million parameters, BASE-large appears to be the biggest model in this category. They also trained 400M- and 150M-parameter models based on 10,000 and 1,000 hours of audio respectively, for comparison — the idea being, if one of these models shows emergent behaviors but another doesn’t, you have a range for where those behaviors begin to emerge. As it turns out, the medium-sized model showed the jump in capability the team was looking for, not necessarily in ordinary speech quality (it is reviewed better but only by a couple points) but in the set of emergent abilities they observed and measured. Here are examples of tricky text mentioned in the paper:

– Compound nouns: The Beckhams decided to rent a charming stone-built quaint countryside holiday cottage.
– Emotions: “Oh my gosh! Are we really going to the Maldives? That’s unbelievable!” Jennie squealed, bouncing on her toes with uncontained glee.
– Foreign words: “Mr. Henry, renowned for his mise en place, orchestrated a seven-course meal, each dish a piece de resistance.”
– Paralinguistics (i.e. readable non-words): “Shh, Lucy, shhh, we mustn’t wake your baby brother,” Tom whispered, as they tiptoed past the nursery.
– Punctuations: She received an odd text from her brother: ‘Emergency @ home; call ASAP! Mom & Dad are worried… #familymatters.’
– Questions: But the Brexit question remains: After all the trials and tribulations, will the ministers find the answers in time?
– Syntactic complexities: The movie that De Moya who was recently awarded the lifetime achievement award starred in 2022 was a box-office hit, despite the mixed reviews.

You can read more examples of these difficult texts being spoken naturally here.

Read more of this story at Slashdot.