Interpol Launches ‘First-Ever Metaverse’ Designed For Global Law Enforcement

The International Criminal Police Organization (Interpol) has announced the launch of its fully operational metaverse, initially designed for activities such as immersive training courses for forensic investigations. Decrypt reports: Unveiled at the 90th Interpol General Assembly in New Delhi, the INTERPOL Metaverse is described as the “first-ever Metaverse specifically designed for law enforcement worldwide.” Among other things, the platform will also help law enforcement across the globe to interact with each other via avatars. “For many, the Metaverse seems to herald an abstract future, but the issues it raises are those that have always motivated INTERPOL — supporting our member countries to fight crime and making the world, virtual or not, safer for those who inhabit it,” Jurgen Stock, Interpol’s secretary general, said in a statement.

One of the challenges identified by the organization is that something considered a crime in the physical world may not necessarily be a crime in the virtual world. “By identifying these risks from the outset, we can work with stakeholders to shape the necessary governance frameworks and cut off future criminal markets before they are fully formed,” said Madan Oberoi, Interpol’s executive director of Technology and Innovation. “Only by having these conversations now can we build an effective response.”

In a live demonstration at the event, Interpol experts took to a Metaverse classroom to deliver a training course on travel document verification and passenger screening using the capabilities of the newly launched platform. Students were then teleported to an airport where they were able to apply their newly acquired skills at a virtual border point. Additionally, Interpol has created an expert group that will be tasked with ensuring new virtual worlds are “secure by design.” The report notes that Interpol has also joined “Defining and Building the Metaverse,” a World Economic Forum initiative around metaverse governance.

Read more of this story at Slashdot.

France Becomes Latest Country To Leave Controversial Energy Charter Treaty

France has become the latest country to pull out of the controversial energy charter treaty (ECT), which protects fossil fuel investors from policy changes that might threaten their profits. The Guardian reports: Speaking after an EU summit in Brussels on Friday, the French president, Emmanuel Macron, said: “France has decided to withdraw from the energy charter treaty.” Quitting the ECT was “coherent” with the Paris climate deal, he added. Macron’s statement follows a recent vote by the Polish parliament to leave the 52-nation treaty and announcements by Spain and the Netherlands that they too wanted out of the scheme.

The European Commission has proposed a “modernization” of the agreement, which would end the writ of the treaty’s secret investor-state courts between EU members. That plan is expected to be discussed at a meeting in Mongolia next month. A French government official said Paris would not try to block the modernization blueprint within the EU or at the meeting in Mongolia. “But whatever happens, France is leaving,” the official said. While France was “willing to coordinate a withdrawal with others, we don’t see that there is a critical mass ready to engage with that in the EU bloc as a whole,” the official added.

The French withdrawal will take about a year to be completed, and in that time, discussion in Paris will likely move on to ways of neutralizing or reducing the duration of a “sunset clause” in the ECT that allows retrospective lawsuits. Sources close to the ongoing legal negotiations believe progress on that issue is possible.

Read more of this story at Slashdot.

US To Launch ‘Labeling’ Rating Program For Internet-Connected Devices In 2023

The Biden administration said it will launch a cybersecurity labeling program for consumer Internet of Things devices starting in 2023 in an effort to protect Americans from “significant national security risks.” TechCrunch reports: Inspired by Energy Star, a labeling program operated by the Environmental Protection Agency and the Department of Energy to promote energy efficiency, the White House is planning to roll out a similar IoT labeling program for the “highest-risk” devices starting next year, a senior Biden administration official said on Wednesday following a National Security Council meeting with consumer product associations and device manufacturers. Attendees at the meeting included White House cyber official Anne Neuberger, FCC chairwoman Jessica Rosenworcel, National Cyber Director Chris Inglis and Sen. Angus King, alongside leaders from Google, Amazon, Samsung, Sony and others.

The initiative, described by White House officials as “Energy Star for cyber,” will help Americans to recognize whether devices meet a set of basic cybersecurity standards devised by the National Institute of Standards and Technology (NIST) and the Federal Trade Commission (FTC). Though specifics of the program have not yet been confirmed, the administration said it will “keep things simple.” The labels, which will be “globally recognized” and debut on devices such as routers and home cameras, will take the form of a “barcode” that users can scan using their smartphone rather than a static paper label, the administration official said. The scanned barcode will link to information based on standards, such as software updating policies, data encryption and vulnerability remediation.

Read more of this story at Slashdot.

RIAA Flags ‘Artificial Intelligence’ Music Mixer As Emerging Copyright Threat

The RIAA has submitted its most recent overview of notorious markets to the U.S. Trade Representative. As usual, the music industry group lists various torrent sites, cyberlockers and stream-ripping services as familiar suspects. In addition, several ‘AI-based’ music mixers and extractors are added as an emerging threat. TorrentFreak reports: “There are online services that, purportedly using artificial intelligence (AI), extract, or rather, copy, the vocals, instrumentals, or some portion of the instrumentals from a sound recording, and/or generate, master or remix a recording to be very similar to or almost as good as reference tracks by selected, well known sound recording artists,” RIAA writes.

Songmastr is one of the platforms that’s mentioned. The service promises to “master” any song based on the style of well-known music artists such as Beyonce, Taylor Swift, Coltrane, Bob Dylan, James Brown and many others. The site’s underlying technology is powered by the open-source Matchering 2.0 code, which is freely available on GitHub. And indeed, its purported AI capabilities feature prominently in the site’s tagline. “This service uses artificial intelligence and is based on the open source library Matchering. The algorithm masters your track with the same RMS, FR, peak amplitude and stereo width as the reference song you choose,” Songmastr explains.
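Of the four measurements Songmastr lists, the RMS (loudness) match is the simplest to illustrate. A minimal sketch in plain Python of what matching a target track’s RMS level to a reference might look like is below; the function names are illustrative, and Matchering’s real processing chain (which also aligns frequency response, peak amplitude and stereo width) is considerably more involved:

```python
import math

def rms(samples):
    """Root-mean-square level of a block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def match_rms(target, reference):
    """Scale the target samples so their RMS level equals the reference's.

    This is only the loudness-matching step; it says nothing about
    frequency response, peak limiting, or stereo width.
    """
    gain = rms(reference) / rms(target)
    return [s * gain for s in target]

# A quiet square-wave-like target scaled up to a louder reference's level:
quiet = [0.1, -0.1, 0.1, -0.1]
loud = [0.5, -0.5, 0.5, -0.5]
matched = match_rms(quiet, loud)
print(round(rms(matched), 3))  # now matches the reference RMS of 0.5
```

Note that nothing in this step involves machine learning, which is consistent with TorrentFreak’s observation below that the “AI” framing of these tools is unclear.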

Where Artificial Intelligence comes into play isn’t quite clear to us. The same can be said for the Acapella-Extractor and Remove-Vocals websites, which the RIAA lists in the same category. The names of these services are pretty much self-explanatory; they can separate the vocals from the rest of a track. The RIAA logically doesn’t want third parties to strip music or vocals from copyrighted tracks, particularly when these derivative works are further shared with others. While Songmastr’s service is a bit more advanced, the RIAA sees it as clearly infringing. After all, the original copyrighted tracks are used by the site to create derivative works, without the necessary permission. […] The RIAA is clearly worried about these services. Interestingly, however, the operator of Songmastr and Acapella-Extractor informs us that the music group hasn’t reached out with any complaints. But perhaps they’re still in the pipeline. The RIAA also lists various torrent sites, download sites, streamrippers, and bulletproof ISPs in its overview, all of which can be found in the full report (PDF) or listed at the bottom of TorrentFreak’s article.

Read more of this story at Slashdot.

Anti-Vaccine Groups Avoid Facebook Bans By Using Emojis

Pizza slices, cupcakes, and carrots are just a few emojis that anti-vaccine activists use to speak in code and continue spreading COVID-19 misinformation on Facebook. Ars Technica reports: Bloomberg reported that Facebook moderators have failed to remove posts shared in anti-vaccine groups and on pages that would ordinarily be considered violating content, if not for the code-speak. One group that Bloomberg reviewed, called “Died Suddenly,” is a meeting ground for anti-vaccine activists supposedly mourning a loved one who died after they got vaccines — which they refer to as having “eaten the cake.” Facebook owner Meta told Bloomberg that “it’s removed more than 27 million pieces of content for violating its COVID-19 misinformation policy, an ongoing process,” but declined to tell Ars whether posts relying on emojis and code-speak were considered in violation of the policy.

According to Facebook community standards, the company says it will “remove misinformation during public health emergencies,” like the pandemic, “when public health authorities conclude that the information is false and likely to directly contribute to the risk of imminent physical harm.” Pages or groups risk being removed if they violate Facebook’s rules or if they “instruct or encourage users to employ code words when discussing vaccines or COVID-19 to evade our detection.” However, the policy remains vague regarding the everyday use of emojis and code words. The only policy that Facebook seems to have on the books directly discussing improper use of emojis as coded language deals with community standards regarding sexual solicitation. It seems that while anti-vaccine users’ emoji-speak can expect to remain unmoderated, anyone using “contextually specific and commonly sexual emojis or emoji strings” does actually risk having posts removed if moderators determine they are using emojis to ask for or offer sex.

In total, Bloomberg reviewed six anti-vaccine groups created in the past year where Facebook users employ emojis like peaches and apples to suggest people they know have been harmed by vaccines. Meta’s seeming failure to moderate the anti-vaccine emoji-speak suggests that blocking code-speak is likely not currently a priority. Last year, when BBC discovered that anti-vaccine groups were using carrots to mask COVID-19 vaccine misinformation, Meta immediately took down the groups identified. However, BBC reported that soon after, the same groups popped back up, and more recently, Bloomberg reported that some of the groups that it tracked seemed to change names frequently, possibly to avoid detection.

Read more of this story at Slashdot.