Facebook Warns It Could Block News in Canada Over Proposed Legislation

The Verge says Facebook “might ban news sharing in Canada if the country passes legislation forcing the company to pay news outlets for their content.” It cites a Friday post from Facebook’s parent company Meta and a recent report in the Wall Street Journal.

If this type of law sounds familiar, it’s because Australia introduced a similar one last year, called the News Media Bargaining Code, which also requires Facebook and Google to pay for news included on their platforms. Although Australia eventually passed the law, it wasn’t without significant pushback from Facebook and Google. Facebook switched off news sharing in Australia in response, and Google threatened to pull its search engine from the country.

While Google later walked back its plans after striking deals with media organizations, Facebook reversed its news ban only after Australia amended its legislation. Facebook’s temporary ban not only affected news outlets but also ripped down posts from government agencies, like local fire and health departments. Earlier this year, a group of Facebook whistleblowers claimed the move was a negotiation tactic, alleging Facebook used an overly broad definition of what’s considered a news publisher to cause chaos in the country. The company maintains the disorder was “inadvertent.”

Now Facebook’s prepared to put a block on news in Canada if the country doesn’t change its legislation….
“If this draft legislation becomes law, creating globally unprecedented forms of financial liability for news links or content, we may be forced to consider whether we continue to allow the sharing of news content on Facebook in Canada as defined under the Online News Act,” Meta states.

How GitHub Copilot Could Steer Microsoft Into a Copyright Storm

An anonymous reader quotes a report from the Register: GitHub Copilot — a programming auto-suggestion tool trained from public source code on the internet — has been caught generating what appears to be copyrighted code, prompting an attorney to look into a possible copyright infringement claim. On Monday, Matthew Butterick, a lawyer, designer, and developer, announced he is working with Joseph Saveri Law Firm to investigate the possibility of filing a copyright claim against GitHub. There are two potential lines of attack here: is GitHub improperly training Copilot on open source code, and is the tool improperly emitting other people’s copyrighted work — pulled from the training data — to suggest code snippets to users?

Butterick has been critical of Copilot since its launch. In June he published a blog post arguing that “any code generated by Copilot may contain lurking license or IP violations,” and thus should be avoided. That same month, Denver Gingerich and Bradley Kuhn of the Software Freedom Conservancy (SFC) said their organization would stop using GitHub, largely as a result of Microsoft and GitHub releasing Copilot without addressing concerns about how the machine-learning model dealt with different open source licensing requirements.

Copilot’s capacity to copy code verbatim, or nearly so, surfaced last week when Tim Davis, a professor of computer science and engineering at Texas A&M University, found that Copilot, when prompted, would reproduce his copyrighted sparse matrix transposition code. Asked to comment, Davis said he would prefer to wait until he has heard back from GitHub and its parent Microsoft about his concerns. In an email to The Register, Butterick indicated there’s been a strong response to news of his investigation. “Clearly, many developers have been worried about what Copilot means for open source,” he wrote. “We’re hearing lots of stories. Our experience with Copilot has been similar to what others have found — that it’s not difficult to induce Copilot to emit verbatim code from identifiable open source repositories. As we expand our investigation, we expect to see more examples.

“But keep in mind that verbatim copying is just one of many issues presented by Copilot. For instance, a software author’s copyright in their code can be violated without verbatim copying. Also, most open-source code is covered by a license, which imposes additional legal requirements. Has Copilot met these requirements? We’re looking at all these issues.”

GitHub’s documentation for Copilot warns that the output may contain “undesirable patterns” and puts the onus of intellectual property infringement on the user of Copilot, notes the report.

Bradley Kuhn of the Software Freedom Conservancy is less willing to set aside how Copilot deals with software licenses. “What Microsoft’s GitHub has done in this process is absolutely unconscionable,” he said. “Without discussion, consent, or engagement with the FOSS community, they have declared that they know better than the courts and our laws about what is or is not permissible under a FOSS license. They have completely ignored the attribution clauses of all FOSS licenses, and, more importantly, the more freedom-protecting requirements of copyleft licenses.”

Brett Becker, assistant professor at University College Dublin in Ireland, told The Register in an email, “AI-assisted programming tools are not going to go away and will continue to evolve. Where these tools fit into the current landscape of programming practices, law, and community norms is only just beginning to be explored and will also continue to evolve.” He added: “An interesting question is: what will emerge as the main drivers of this evolution? Will these tools fundamentally alter future practices, law, and community norms — or will our practices, law and community norms prove resilient and drive the evolution of these tools?”

JavaScript Still Tops Python and Java in RedMonk’s Latest Rankings, While Go and TypeScript Rise

RedMonk has released its latest quarterly rankings of popular programming languages, arguing that “The idea is not to offer a statistically valid representation of current usage, but rather to correlate language discussion and usage in an effort to extract insights into potential future adoption trends.”

Their methodology? “We extract language rankings from GitHub and Stack Overflow, and combine them for a ranking that attempts to reflect both code (GitHub) and discussion (Stack Overflow) traction.” Below are this quarter’s results:

1. JavaScript
2. Python
3. Java
4. PHP
5. C#
6. CSS
7. C++
7. TypeScript
9. Ruby
10. C
11. Swift
12. R
12. Objective-C
14. Shell
15. Scala
15. Go
17. PowerShell
17. Kotlin
19. Rust
19. Dart
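
RedMonk does not publish the exact weighting it uses to blend the two axes, so the sketch below is purely hypothetical rather than RedMonk’s actual method: it simply averages a language’s position on the GitHub and Stack Overflow tables and lets equal scores share a slot, reproducing the tie-then-skip numbering visible in the list above.

```python
# Hypothetical illustration only: RedMonk's real weighting is not public.
# Each input maps a language to its rank on one axis (1 = most popular).

def combine_rankings(github_rank: dict[str, int],
                     stackoverflow_rank: dict[str, int]) -> list[tuple[int, str]]:
    """Merge a 'code' ranking and a 'discussion' ranking into one tied ranking."""
    languages = github_rank.keys() & stackoverflow_rank.keys()
    average = {lang: (github_rank[lang] + stackoverflow_rank[lang]) / 2
               for lang in languages}
    ordered = sorted(languages, key=lambda lang: (average[lang], lang))

    combined: list[tuple[int, str]] = []
    position, previous_score = 0, None
    for index, lang in enumerate(ordered, start=1):
        if average[lang] != previous_score:   # a new score opens a new position;
            position = index                  # tied languages share the earlier slot
            previous_score = average[lang]    # and the next distinct score skips ahead
        combined.append((position, lang))
    return combined


if __name__ == "__main__":
    # Toy inputs; the real source data is GitHub and Stack Overflow activity.
    github = {"JavaScript": 1, "Python": 2, "TypeScript": 4, "C++": 5}
    stackoverflow = {"JavaScript": 2, "Python": 1, "TypeScript": 6, "C++": 5}
    for position, language in combine_rankings(github, stackoverflow):
        print(position, language)  # 1 JavaScript / 1 Python / 3 C++ / 3 TypeScript
```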

Their analysis of the latest rankings notes “movement is increasingly rare…. the top 20 has been stable for multiple runs. As has been speculated about in this space previously, it seems increasingly clear that the hypothesis of a temporary equilibrium of programming language usage is supported by the evidence…. [W]e may have hit a point of relative — if temporary — contentment with the wide variety of languages available for developers’ usage.”

And yet this quarter TypeScript has risen from #8 to #7, now tied with C++, benefiting from attributes like its interoperability with an existing popular language along with an increased availability of security-related features. “There is little suggestion at present that the language is headed anywhere but up. The only real question is on what timeframe.”

Unlike TypeScript, Go’s trajectory has been anything but clear. While it grew steadily and reasonably swiftly as languages go, it appeared to have stalled, never placing higher than 14th and having dropped to 16th for the last three runs. This quarter, however, Go rose one spot in the rankings, back up to 15th. In and of itself, this is a move of limited significance, as the further one goes down the rankings, the less significant the differences between positions are. But it has been over a year since we’ve seen movement from Go, which raises the question of whether there is any room for further ascent or whether it will remain hovering in the slot one would expect from a technically well regarded but not particularly versatile (from a use case standpoint) language.

Like Go, Kotlin had spent the last three runs in the same position. It and Rust had been moving in lockstep in recent quarters, but while Rust enters its fourth consecutive run in 19th place, Kotlin managed to achieve some separation this quarter, jumping one spot from 18th to 17th.

Interpol Launches ‘First-Ever Metaverse’ Designed For Global Law Enforcement

The International Criminal Police Organization (Interpol) has announced the launch of its fully operational metaverse, initially designed for activities such as immersive training courses for forensic investigations. Decrypt reports: Unveiled at the 90th Interpol General Assembly in New Delhi, the INTERPOL Metaverse is described as the “first-ever Metaverse specifically designed for law enforcement worldwide.” Among other things, the platform will also help law enforcement across the globe to interact with each other via avatars. “For many, the Metaverse seems to herald an abstract future, but the issues it raises are those that have always motivated INTERPOL — supporting our member countries to fight crime and making the world, virtual or not, safer for those who inhabit it,” Jurgen Stock, Interpol’s secretary general, said in a statement.

One of the challenges identified by the organization is that something considered a crime in the physical world may not necessarily be one in the virtual world. “By identifying these risks from the outset, we can work with stakeholders to shape the necessary governance frameworks and cut off future criminal markets before they are fully formed,” said Madan Oberoi, Interpol’s executive director of Technology and Innovation. “Only by having these conversations now can we build an effective response.”

In a live demonstration at the event, Interpol experts took to a Metaverse classroom to deliver a training course on travel document verification and passenger screening using the capabilities of the newly launched platform. Students were then teleported to an airport where they were able to apply their newly acquired skills at a virtual border point. Additionally, Interpol has created an expert group that will be tasked with ensuring new virtual worlds are “secure by design.” The report notes that Interpol has also joined “Defining and Building the Metaverse,” a World Economic Forum initiative around metaverse governance.

France Becomes Latest Country To Leave Controversial Energy Charter Treaty

France has become the latest country to pull out of the controversial energy charter treaty (ECT), which protects fossil fuel investors from policy changes that might threaten their profits. The Guardian reports: Speaking after an EU summit in Brussels on Friday, the French president, Emmanuel Macron, said: “France has decided to withdraw from the energy charter treaty.” Quitting the ECT was “coherent” with the Paris climate deal, he added. Macron’s statement follows a recent vote by the Polish parliament to leave the 52-nation treaty and announcements by Spain and the Netherlands that they too wanted out of the scheme.

The European Commission has proposed a “modernization” of the agreement, which would end the writ of the treaty’s secret investor-state courts between EU members. That plan is expected to be discussed at a meeting in Mongolia next month. A French government official said Paris would not try to block the modernization blueprint within the EU or at the meeting in Mongolia. “But whatever happens, France is leaving,” the official said. While France was “willing to coordinate a withdrawal with others, we don’t see that there is a critical mass ready to engage with that in the EU bloc as a whole,” the official added.

The French withdrawal will take about a year to be completed, and in that time, discussion in Paris will likely move on to ways of neutralizing or reducing the duration of a “sunset clause” in the ECT that allows retrospective lawsuits. Sources close to the ongoing legal negotiations believe progress on that issue is possible.

US To Launch ‘Labeling’ Rating Program For Internet-Connected Devices In 2023

The Biden administration said it will launch a cybersecurity labeling program for consumer Internet of Things devices starting in 2023 in an effort to protect Americans from “significant national security risks.” TechCrunch reports: Inspired by Energy Star, a labeling program operated by the Environmental Protection Agency and the Department of Energy to promote energy efficiency, the White House is planning to roll out a similar IoT labeling program to the “highest-risk” devices starting next year, a senior Biden administration official said on Wednesday following a National Security Council meeting with consumer product associations and device manufacturers. Attendees at the meeting included White House cyber official Anne Neuberger, FCC chairwoman Jessica Rosenworcel, National Cyber Director Chris Inglis and Sen. Angus King, alongside leaders from Google, Amazon, Samsung, Sony and others.

The initiative, described by White House officials as “Energy Star for cyber,” will help Americans to recognize whether devices meet a set of basic cybersecurity standards devised by the National Institute of Standards and Technology (NIST) and the Federal Trade Commission (FTC). Though specifics of the program have not yet been confirmed, the administration said it will “keep things simple.” The labels, which will be “globally recognized” and debut on devices such as routers and home cameras, will take the form of a “barcode” that users can scan using their smartphone rather than a static paper label, the administration official said. The scanned barcode will link to information based on standards, such as software updating policies, data encryption and vulnerability remediation.

RIAA Flags ‘Artificial Intelligence’ Music Mixer As Emerging Copyright Threat

The RIAA has submitted its most recent overview of notorious markets to the U.S. Trade Representative. As usual, the music industry group lists various torrent sites, cyberlockers and stream-ripping services as familiar suspects. In addition, several ‘AI-based’ music mixers and extractors have been added as an emerging threat. TorrentFreak reports: “There are online services that, purportedly using artificial intelligence (AI), extract, or rather, copy, the vocals, instrumentals, or some portion of the instrumentals from a sound recording, and/or generate, master or remix a recording to be very similar to or almost as good as reference tracks by selected, well known sound recording artists,” RIAA writes.

Songmastr is one of the platforms that’s mentioned. The service promises to “master” any song based on the style of well-known music artists such as Beyonce, Taylor Swift, Coltrane, Bob Dylan, James Brown and many others. The site’s underlying technology is powered by the open-source Matchering 2.0 code, which is freely available on GitHub. And indeed, its purported AI capabilities feature prominently in the site’s tagline. “This service uses artificial intelligence and is based on the open source library Matchering. The algorithm masters your track with the same RMS, FR, peak amplitude and stereo width as the reference song you choose,” Songmastr explains.
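
Matchering 2.0 itself is an ordinary open-source Python library, and the workflow Songmastr appears to wrap comes down to a single call that analyzes a reference track and transfers its loudness, frequency response, peak amplitude and stereo width to a target. The snippet below is a minimal sketch based on the library’s published README; the file names are placeholders, and the exact function and argument names should be treated as assumptions rather than confirmed documentation.

```python
# Minimal sketch of a Matchering 2.0 run; file names are placeholders and the
# argument names follow the project's README, so treat them as assumptions.
import matchering as mg

mg.process(
    # The track to be mastered.
    target="my_demo.wav",
    # The commercial track whose RMS, frequency response, peak amplitude
    # and stereo width the target is matched against.
    reference="reference_track.wav",
    # Output formats to write: 16-bit and 24-bit WAV masters.
    results=[
        mg.pcm16("my_demo_master_16bit.wav"),
        mg.pcm24("my_demo_master_24bit.wav"),
    ],
)
```

Nothing in that call is tied to any particular artist; the “Beyonce” or “Bob Dylan” styles Songmastr advertises are presumably just different reference files fed to the same routine.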

Where Artificial Intelligence comes into play isn’t quite clear to us. The same can be said for the Acapella-Extractor and Remove-Vocals websites, which the RIAA lists in the same category. The names of these services are pretty much self-explanatory; they can separate the vocals from the rest of a track. The RIAA logically doesn’t want third parties to strip music or vocals from copyrighted tracks, particularly when these derivative works are further shared with others. While Songmastr’s service is a bit more advanced, the RIAA sees it as clearly infringing. After all, the original copyrighted tracks are used by the site to create derivative works, without the necessary permission. […] The RIAA is clearly worried about these services. Interestingly, however, the operator of Songmastr and Acapella-Extractor informs us that the music group hasn’t reached out with any complaints. But perhaps they’re still in the pipeline. The RIAA also lists various torrent sites, download sites, streamrippers, and bulletproof ISPs in its overview, all of which can be found in the full report (PDF) or listed at the bottom of TorrentFreak’s article.

Anti-Vaccine Groups Avoid Facebook Bans By Using Emojis

Pizza slices, cupcakes, and carrots are just a few emojis that anti-vaccine activists use to speak in code and continue spreading COVID-19 misinformation on Facebook. Ars Technica reports: Bloomberg reported that Facebook moderators have failed to remove posts shared in anti-vaccine groups and on pages that would ordinarily be considered violating content, if not for the code-speak. One group that Bloomberg reviewed, called “Died Suddenly,” is a meeting ground for anti-vaccine activists supposedly mourning a loved one who died after they got vaccines — which they refer to as having “eaten the cake.” Facebook owner Meta told Bloomberg that “it’s removed more than 27 million pieces of content for violating its COVID-19 misinformation policy, an ongoing process,” but declined to tell Ars whether posts relying on emojis and code-speak were considered in violation of the policy.

According to Facebook community standards, the company says it will “remove misinformation during public health emergencies,” like the pandemic, “when public health authorities conclude that the information is false and likely to directly contribute to the risk of imminent physical harm.” Pages or groups risk being removed if they violate Facebook’s rules or if they “instruct or encourage users to employ code words when discussing vaccines or COVID-19 to evade our detection.” However, the policy remains vague regarding the everyday use of emojis and code words. The only policy that Facebook seems to have on the books directly discussing improper use of emojis as coded language deals with community standards regarding sexual solicitation. It seems that while anti-vaccine users can expect their emoji-speak to remain unmoderated, anyone using “contextually specific and commonly sexual emojis or emoji strings” does actually risk having posts removed if moderators determine they are using emojis to ask for or offer sex.

In total, Bloomberg reviewed six anti-vaccine groups created in the past year where Facebook users employ emojis like peaches and apples to suggest people they know have been harmed by vaccines. Meta’s seeming failure to moderate the anti-vaccine emoji-speak suggests that blocking code-speak is likely not currently a priority. Last year, when BBC discovered that anti-vaccine groups were using carrots to mask COVID-19 vaccine misinformation, Meta immediately took down the groups identified. However, BBC reported that soon after, the same groups popped back up, and more recently, Bloomberg reported that some of the groups that it tracked seemed to change names frequently, possibly to avoid detection.
