China ‘Modified’ the Weather To Create Clear Skies For Political Celebration, Study Finds

Chinese weather authorities successfully controlled the weather ahead of a major political celebration earlier this year, according to a Beijing university study. The Guardian reports: On 1 July the Chinese Communist party marked its centenary with major celebrations including tens of thousands of people at a ceremony in Tiananmen Square, and a research paper from Tsinghua University has said an extensive cloud-seeding operation in the hours prior ensured clear skies and low air pollution. […] On Monday the South China Morning Post reported on a recent research paper which found definitive signs that a cloud-seeding operation on the eve of the centenary had produced a marked drop in air pollution.

The centenary celebration faced what the paper reportedly termed unprecedented challenges, including an unexpected increase in air pollutants and an overcast sky during one of the wettest summers on record. Factories and other polluting activities had been halted in the days ahead of the event, but low airflow meant the pollution hadn’t dissipated, it said. The paper, published in the peer-reviewed Environmental Science journal and led by environmental science professor Wang Can, said a two-hour cloud-seeding operation was launched on the eve of the ceremony, and residents in nearby mountain regions reported seeing rockets shot into the sky on 30 June. The paper said the rockets were carrying silver iodide into the sky to stimulate rainfall.

The researchers said the resulting artificial rain reduced the level of PM2.5 air pollutants by more than two-thirds, and shifted the air quality index reading, based on World Health Organization standards, from “moderate” to “good.” The team said the artificial rain “was the only disruptive event in this period,” so it was unlikely the drop in pollution had a natural cause.

Read more of this story at Slashdot.

Meta Has a ‘Moral Obligation’ To Make Its Mental Health Research Transparent, Scientists Say

In an open letter to Mark Zuckerberg published Monday, a group of academics called for Meta to be more transparent about its research into how Facebook, Instagram, and WhatsApp affect the mental health of children and adolescents. The Verge reports: The letter calls for the company to allow independent reviews of its internal work, contribute data to external research projects, and set up an independent scientific oversight group. “You and your organizations have an ethical and moral obligation to align your internal research on children and adolescents with established standards for evidence in mental health science,” the letter, signed by researchers from universities around the world, reads.

The open letter comes after leaks from Facebook revealed some data from the company’s internal research, which found that Instagram was linked with anxiety and body image issues for some teenage girls. The research released, though, is limited and relied on subjective information collected through interviews. While this strategy can produce useful insights, it can’t prove that social media caused any of the mental health outcomes. The information available so far appears to show that the studies Facebook researchers conducted don’t meet the standards academic researchers use to conduct trials, the new open letter said. The information available also isn’t complete, the authors noted — Meta hasn’t made its research methods or data public, so it can’t be scrutinized by independent experts. The authors called for the company to allow independent review of past and future research, which would include releasing research materials and data.

The letter also asked Meta to contribute its data to ongoing independent research efforts on the mental health of adolescents. It’s a longstanding frustration that big tech companies don’t release data, which makes it challenging for external researchers to scrutinize and understand their products. “It will be impossible to identify and promote mental health in the 21st century if we cannot study how young people are interacting online,” the authors said. […] The open letter also called on Meta to establish an independent scientific trust to evaluate any risks to mental health from the use of platforms like Facebook and Instagram and to help implement “truly evidence-based solutions for online risks on a world-wide scale.” The trust could be similar to the existing Facebook Oversight Board, which helps the company with content moderation decisions.

Read more of this story at Slashdot.

Comcast Reduced ‘Working Latency’ By 90% with AQM. Is This the Future?

Long-time Slashdot reader mtaht writes:

Comcast fully deployed bufferbloat fixes across their entire network over the past year, demonstrating 90% improvements in working latency and jitter — as described in this article by Comcast’s Vice President of Technology Policy & Standards. (The article’s Cumulative Distribution Function chart is to die for…) But: did anybody notice? Did any other ISPs adopt AQM tech? How many of y’all out there are running smart queue management (sch_cake in Linux) nowadays?

But wait — it gets even more interesting…

The Comcast official anticipates even less latency with the newest Wi-Fi 6E standard. (And for home users, the article links to a page recommending “a router whose manufacturer understands the principles of bufferbloat, and has updated the firmware to use one of the Smart Queue Management algorithms such as cake, fq_codel, PIE.”)
But then the Comcast VP looks to the future, and where all of this is leading:
Currently under discussion at the IETF in the Transport Area Working Group is a proposal for Low Latency, Low Loss Scalable Throughput. This potential approach to achieve very low latency may result in working latencies of roughly one millisecond (though perhaps 1-5 milliseconds initially). As the IETF sorts out the best technical path forward through experimentation and consensus-building (including debate of alternatives), in a few years we may see the beginning of a shift to sub-5 millisecond working latency. This seems likely to not only improve the quality of experience of existing applications but also create a network foundation on which entirely new classes of applications will be built.

While we can certainly think of usable augmented and virtual reality (AR and VR), these are applications we know about today. But what happens when the time to access resources on the Internet is the same, or close to the time to access local compute or storage resources? What if the core assumption that developers make about networks — that there is an unpredictable and variable delay — goes away? This is a central assumption embedded into the design of more or less all existing applications. So, if that assumption changes, then we can potentially rethink the design of many applications and all sorts of new applications will become possible. That is a big deal and exciting to think about the possibilities!
In a few years, when most people have 1 Gbps, 10 Gbps, or eventually 100 Gbps connections in their home, it is perhaps easy to imagine that connection speed is not the only key factor in your performance. We’re perhaps entering an era where consistently low working latency will become the next big thing that differentiates various Internet access services and application services/platforms. Beyond that, factors like exceptionally high uptime, proactive/adaptive security, dynamic privacy protection, and other new things will likely also play a role. But keep an eye on working latency — there’s a lot of exciting things happening!
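Working latency is typically reported as a CDF or as high percentiles, because the slow tail is what users actually feel under load. A minimal sketch of the nearest-rank percentile computation behind such charts (the latency samples are made up for illustration and are not Comcast’s data):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample such that at least
    p% of all values are at or below it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical working-latency samples in ms: mostly fast, with a
# bufferbloated tail that dominates the user experience.
latencies_ms = [9, 9, 10, 11, 11, 12, 13, 14, 250, 300]
print(percentile(latencies_ms, 50))  # -> 11   (median looks fine)
print(percentile(latencies_ms, 99))  # -> 300  (the tail AQM targets)
```

The gap between the median and the 99th percentile is exactly why a CDF is more informative than an average here: AQM schemes like fq_codel and cake mostly shrink that tail.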

Read more of this story at Slashdot.

‘Advent of Code’ Has Begun – and Other Geeky Daily Programming Challenges

I Programmer writes:
December 1st is much anticipated among those who like programming puzzles. It is time to start collecting stars by solving small puzzles on the Advent of Code website, with the goal of amassing 50 stars by Christmas Day, December 25th. Raku has also opened its advent calendar, and there’s a brand new Bekk Christmas blog with informational content on multiple topics… At the time of writing, only 10.5 hours into Advent of Code’s Day 1, almost 50,000 users have already completed both puzzles and another 8,484 have completed the first. [Some programmers are even livestreaming their progress on Twitch, or sharing their thoughts (and some particularly creative solutions) in the Advent of Code subreddit.]

We can credit Perl with pioneering the idea of a programming advent calendar of daily articles with a festive theme, and the Raku Advent Calendar now continues the tradition. Now in its 13th year, but only the third under its new name, this year’s first advent post solves a problem faced by Santa: creating thumbnails of nearly 2 billion images…

Smashing magazine has pulled together its own exhaustive list of additional geek-themed advent calendars. Some of the other highlights:

The beloved site “24 Pull Requests” has relaunched for 2021, daring participants to make 24 pull requests before December 24th. (The site’s tagline is “giving back to open source for the holidays.”) Over the years 26,465 contributors (as well as 25,738 organizations) have already participated through the site.
The Advent of JavaScript and Advent of CSS sites promise 24 puzzles delivered by email (though you’ll have to pay if you also want them to email you the solutions!)

This year also saw daily challenges from the sixth annual Code Security advent calendar being announced on Twitter, while TryHackMe.com has its own set of cybersecurity puzzles (and even a few prizes).

Read more of this story at Slashdot.

Those Cute Cats Online? They Help Spread Misinformation

“Videos and GIFs of cute animals — usually cats — have gone viral online for almost as long as the internet has been around…” writes the New York Times.

“Now, it is becoming increasingly clear how widely the old-school internet trick is being used by people and organizations peddling false information online, misinformation researchers say.”

The posts with the animals do not directly spread false information. But they can draw a huge audience that can be redirected to a publication or site spreading false information about election fraud, unproven coronavirus cures and other baseless conspiracy theories entirely unrelated to the videos. Sometimes, following a feed of cute animals on Facebook unknowingly signs users up as subscribers to misleading posts from the same publisher. Melissa Ryan, chief executive of Card Strategies, a consulting firm that researches disinformation, said this kind of “engagement bait” helped misinformation actors generate clicks on their pages, which can make them more prominent in users’ feeds in the future. That prominence can drive a broader audience to content with inaccurate or misleading information, she said.

“The strategy works because the platforms continue to reward engagement over everything else,” Ms. Ryan said, “even when that engagement comes from” publications that also publish false or misleading content.

Read more of this story at Slashdot.

Former Ubiquiti Dev Charged For Trying To Extort His Employer

Long-time Slashdot reader tinskip shares a report from BleepingComputer: Nickolas Sharp, a former employee of networking device maker Ubiquiti, was arrested and charged today with data theft and attempting to extort his employer while posing as a whistleblower and an anonymous hacker. “As alleged, Nickolas Sharp exploited his access as a trusted insider to steal gigabytes of confidential data from his employer, then, posing as an anonymous hacker, sent the company a nearly $2 million ransom demand,” U.S. Attorney Damian Williams said today. “As further alleged, after the FBI searched his home in connection with the theft, Sharp, now posing as an anonymous company whistleblower, planted damaging news stories falsely claiming the theft had been by a hacker enabled by a vulnerability in the company’s computer systems.”

According to the indictment (PDF), Sharp stole gigabytes of confidential data from Ubiquiti’s AWS (on December 10, 2020) and GitHub (on December 21 and 22, 2020) infrastructure using his cloud administrator credentials, cloning hundreds of GitHub repositories over SSH. Throughout this process, the defendant tried hiding his home IP address using Surfshark’s VPN services. However, his actual location was exposed after a temporary Internet outage. To hide his malicious activity, Sharp also altered log retention policies and other files that would have exposed his identity during the subsequent incident investigation. “Among other things, SHARP applied one-day lifecycle retention policies to certain logs on AWS which would have the effect of deleting certain evidence of the intruder’s activity within one day,” the court documents read.
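For illustration, a one-day log retention rule of the kind the indictment describes corresponds to an S3 lifecycle expiration policy. The sketch below is an assumed reconstruction, not taken from the court documents — the rule ID and the `logs/` prefix are hypothetical:

```python
import json

# A one-day expiration rule: objects under the prefix are deleted
# roughly one day after creation, which is why applying such a policy
# to logs destroys evidence of recent activity.
one_day_retention = {
    "Rules": [
        {
            "ID": "expire-logs-after-one-day",  # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},      # hypothetical log prefix
            "Expiration": {"Days": 1},
        }
    ]
}

print(json.dumps(one_day_retention, indent=2))
# With boto3, such a configuration would be applied via
# s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=one_day_retention)
```

This also illustrates why intact, centrally retained logs (or lifecycle-policy change auditing such as CloudTrail) matter for incident investigations.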

After Ubiquiti disclosed a security incident in January following Sharp’s data theft, and while the company was working to assess the scope of the breach and remediate its effects, he also tried extorting it (posing as an anonymous hacker). His ransom note demanded almost $2 million in exchange for returning the stolen files and identifying a remaining vulnerability. The company refused to pay the ransom and, instead, found and removed a second backdoor from its systems, changed all employee credentials, and issued the January 11 security breach notification. After his extortion attempts failed, Sharp shared information with the media while pretending to be a whistleblower, accusing the company of downplaying the incident. This caused Ubiquiti’s stock price to fall by roughly 17%, from $349 on March 30 to $290 on April 1, amounting to a loss of over $4 billion in market capitalization.
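As a quick sanity check, the share prices quoted in the report imply a drop of about 17% — this is just arithmetic on the two figures given, not additional data:

```python
price_mar30 = 349.0  # share price on March 30, per the report
price_apr1 = 290.0   # share price on April 1, per the report

drop_fraction = (price_mar30 - price_apr1) / price_mar30
print(f"Stock drop: {drop_fraction:.1%}")  # -> Stock drop: 16.9%
```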

Read more of this story at Slashdot.