China Punishes 27 People Over ‘Tragically Ugly’ Illustrations In Maths Textbook

Chinese authorities have punished 27 people over the publication of a maths textbook that went viral over its “tragically ugly” illustrations. The Guardian reports: A months-long investigation by a ministry of education working group found the books were “not beautiful,” and some illustrations were “quite ugly” and did not “properly reflect the sunny image of China’s children.” The mathematics books were published by the People’s Education Press almost 10 years ago, and were reportedly used in elementary schools across the country. But they went viral in May after a teacher published photos of the illustrations inside, including people with distorted faces and bulging pants, boys pictured grabbing girls’ skirts and at least one child with an apparent leg tattoo.

Social media users were largely amused by the illustrations, but many also criticized them as bringing disrepute and “cultural annihilation” to China, speculating they were the deliberate work of western infiltrators in the education sector. Related hashtags were viewed billions of times, embarrassing the Communist party and education authorities who announced a review of all textbooks “to ensure that the textbooks adhere to the correct political direction and value orientation.”

In a lengthy statement released on Monday, the education authorities said 27 individuals were found to have “neglected their duties and responsibilities” and were punished, including the president of the publishing house, who was given formal demerits, which can affect a party member’s standing and future employment. The editor-in-chief and the head of the maths department editing office were also given demerits and dismissed from their roles. The statement said the illustrators and designers were “dealt with accordingly” but did not give details. They and their studios would no longer be engaged to work on textbook design or related work, it said. The highly critical statement found a litany of issues with the books, including the size, quantity and quality of illustrations, some of which had “scientific and normative problems.”

Read more of this story at Slashdot.

Oracle’s ‘Surveillance Machine’ Targeted In US Privacy Class Action

A new privacy class action claim (PDF) in the U.S. alleges Oracle’s “worldwide surveillance machine” has amassed detailed dossiers on some five billion people, “accusing the company and its adtech and advertising subsidiaries of violating the privacy of the majority of the people on Earth,” reports TechCrunch. From the report: The suit has three class representatives: Dr Johnny Ryan, senior fellow of the Irish Council for Civil Liberties (ICCL); Michael Katz-Lacabe, director of research at The Center for Human Rights and Privacy; and Dr Jennifer Golbeck, a professor of computer science at the University of Maryland — who say they are “acting on behalf of worldwide Internet users who have been subject to Oracle’s privacy violations.” The litigants are represented by the San Francisco-headquartered law firm, Lieff Cabraser, which they note has run significant privacy cases against Big Tech. The key point here is that there is no comprehensive federal privacy law in the U.S., so the litigation faces a hostile environment for making a privacy case; hence the complaint references multiple federal, constitutional, tort and state laws, alleging violations of the Federal Electronic Communications Privacy Act, the Constitution of the State of California, the California Invasion of Privacy Act, as well as competition law, and the common law.

It remains to be seen whether this “patchwork” approach to a tricky legal environment will prevail — for an expert snap analysis of the complaint and some key challenges, this whole thread is highly recommended. But the substance of the complaint hinges on allegations that Oracle collects vast amounts of data from unwitting Internet users, i.e. without their consent, and uses this surveillance intelligence to profile individuals, further enriching profiles via its data marketplace and threatening people’s privacy on a vast scale — including, per the allegations, by the use of proxies for sensitive data to circumvent privacy controls.

Read more of this story at Slashdot.

Meta AI and Wikimedia Foundation Build an ML-Powered, Citation-Checking Bot

Digital Trends reports:

Working with the Wikimedia Foundation, Meta AI (that’s the AI research and development lab for the social media giant) has developed what it claims is the first machine learning model able to automatically scan hundreds of thousands of citations at once to check if they support the corresponding claims….

“I think we were driven by curiosity at the end of the day,” Fabio Petroni, research tech lead manager for the FAIR (Fundamental AI Research) team of Meta AI, told Digital Trends. “We wanted to see what was the limit of this technology. We were absolutely not sure if [this AI] could do anything meaningful in this context. No one had ever tried to do something similar [before].”

Trained using a dataset consisting of 4 million Wikipedia citations, Meta’s new tool is able to effectively analyze the information linked to a citation and then cross-reference it with the supporting evidence…. Just as impressive as the ability to spot fraudulent citations, however, is the tool’s potential for suggesting better references. Deployed as a production model, this tool could helpfully suggest references that would best illustrate a certain point. While Petroni balks at it being likened to a factual spellcheck, flagging errors and suggesting improvements, that’s an easy way to think about what it might do.
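The article doesn’t describe the model’s internals, but the core idea, comparing a claim against the passage its citation points to and flagging mismatches, can be sketched as a toy similarity check. In this illustrative sketch, a bag-of-words cosine stands in for the dense neural embeddings a real system like Meta’s would use; the function names and threshold are assumptions, not the actual tool:

```python
import math
import re
from collections import Counter

def bow_vector(text):
    """Bag-of-words counts; a crude stand-in for learned text embeddings."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def citation_supports(claim, source_passage, threshold=0.5):
    """Flag a citation as suspect when the cited passage is too dissimilar
    from the claim it is supposed to back up."""
    return cosine(bow_vector(claim), bow_vector(source_passage)) >= threshold

claim = "The Eiffel Tower was completed in 1889 in Paris."
print(citation_supports(claim, "Construction of the Eiffel Tower in Paris finished in 1889."))
print(citation_supports(claim, "Blackbeard was an English pirate active in the early 18th century."))
```

A production model would also have to handle paraphrase, negation, and partial support, which is exactly where word overlap fails and learned representations earn their keep.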

Read more of this story at Slashdot.

Employers are Tracking Employees’ ‘Productivity’ – Sometimes Badly

Here’s an interesting statistic spotted by Fortune. “Eight out of the 10 largest private employers in the U.S. are tracking productivity metrics for their employees, according to an examination from The New York Times.”
“Some of this software measures active time, watches for keyboard pauses, and even silently counts keystrokes.”
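To illustrate how crude such metrics can be, here is a minimal sketch, assumed logic rather than any vendor’s actual product, of the “active time” calculation these trackers typically rely on: sum the gaps between input events and write off any long pause as idle, whether the worker was thinking, in a meeting, or working on paper:

```python
def active_seconds(event_times, idle_gap=300):
    """Estimate 'active' time from input-event timestamps (in seconds).

    Consecutive events closer together than idle_gap count as active work;
    any longer pause contributes nothing, which is why offline work
    (reading, meetings, whiteboarding) simply vanishes from the score.
    """
    active = 0
    for prev, cur in zip(event_times, event_times[1:]):
        gap = cur - prev
        if gap < idle_gap:
            active += gap
    return active

# A pause of 340 seconds (events at t=60 and t=400) is treated as idle,
# so only 90 of the 430 elapsed seconds register as "productive":
print(active_seconds([0, 30, 60, 400, 430]))  # 90
```

The choice of `idle_gap` is arbitrary, and that arbitrariness is precisely the complaint workers raise later in the article: the clock measures keystrokes, not work.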

J.P. Morgan, Barclays Bank, and UnitedHealth Group all track employees, The Times reported, seeing everything from how long it takes to write an email to keyboard activity. There are repercussions if workers aren’t meeting expectations: a prodding note, a skipped bonus, or a work-from-home day taken away, to name a few. For employers surrendering in the fight to return to the office, such surveillance is a way to maintain a sense of control. As Paul Wartenberg, who installs monitor systems, told The Times, “If we’re going to give up on bringing people back to the office, we’re not going to give up on managing productivity….”

But tracking these remote workers’ every move doesn’t seem to be telling employers much. “We’re in this era of measurement but we don’t know what we should be measuring,” Ryan Fuller, former vice president for workplace intelligence at Microsoft, told the Times.

From the New York Times’ article. (Alternate URLs here, here, and here.)
In lower-paying jobs, the monitoring is already ubiquitous: not just at Amazon, where the second-by-second measurements became notorious, but also for Kroger cashiers, UPS drivers and millions of others…. Now digital productivity monitoring is also spreading among white-collar jobs and roles that require graduate degrees. Many employees, whether working remotely or in person, are subject to trackers, scores, “idle” buttons, or just quiet, constantly accumulating records. Pauses can lead to penalties, from lost pay to lost jobs.

Some radiologists see scoreboards showing their “inactivity” time and how their productivity stacks up against their colleagues’…. Public servants are tracked, too: In June, New York’s Metropolitan Transportation Authority told engineers and other employees they could work remotely one day a week if they agreed to full-time productivity monitoring. Architects, academic administrators, doctors, nursing home workers and lawyers described growing electronic surveillance over every minute of their workday.

They echoed complaints that employees in many lower-paid positions have voiced for years: that their jobs are relentless, that they don’t have control — and in some cases, that they don’t even have enough time to use the bathroom. In interviews and in hundreds of written submissions to The Times, white-collar workers described being tracked as “demoralizing,” “humiliating” and “toxic.” Micromanagement is becoming standard, they said. But the most urgent complaint, spanning industries and incomes, is that the working world’s new clocks are just wrong: inept at capturing offline activity, unreliable at assessing hard-to-quantify tasks and prone to undermining the work itself….

But many employers, along with makers of the tracking technology, say that even if the details need refining, the practice has become valuable — and perhaps inevitable. Tracking, they say, allows them to manage with newfound clarity, fairness and insight. Derelict workers can be rooted out. Industrious ones can be rewarded. “It’s a way to really just focus on the results,” rather than impressions, said Marisa Goldenberg, [who] said she used the tools in moderation…
[I]n-person workplaces have embraced the tools as well. Tommy Weir, whose company, Enaible, provides group productivity scores to Fortune 500 companies, aims to eventually use individual scores to calibrate pay.

Read more of this story at Slashdot.

Elon Musk Interviewed by Tesla Owners, Hears from a Former Professor

In June a YouTube channel called “Tesla Owners Silicon Valley” ran an hour-long interview with Elon Musk. (Musk begins by sharing an example of the “comedically long” list of things that can disrupt a supply chain, remembering an incident where a drug-gang shootout led to the mistaken impounding of a nearby truck that was delivering parts for a Tesla Model S factory — ultimately shutting down Model S production for three days.)

There are some candid discussions about the technology of electric cars – but also some surprisingly personal insights. Musk also reveals he’s been thinking about electric cars since high school, as “the way cars should be, if you could just solve range… People will look back on the internal combustion car era as a strange time. Quaint.” And then he remembers the moment in 1995 when he put his graduate studies at Stanford “on hold” to pursue a business career, reassuring Stanford professor William Nix that “I will probably fail” and predicting an eventual return to Stanford. Nix had responded that he did not think Musk would fail.

It turns out that 27 years later, now-emeritus professor William Nix heard the interview, and typed up a fond letter to Elon Musk at SpaceX’s headquarters in Texas. Nix complimented Musk on the interview, noting Musk’s remarks on the challenges in using silicon for the anodes of electric batteries. “About 10 years ago we at Stanford did research on the very issues you described. Indeed, it almost seemed like you had read all the papers.”

Musk’s hour-long interview with the group was followed by two more hour-long interviews, and since then the group has been sharing short excerpts that give candid glimpses of Musk’s thinking. (“The overwhelming focus is solving full self-driving,” Musk says in one clip. “That’s essential. That’s really the difference between Tesla being worth a lot of money and being worth basically zero.”)

Read more of this story at Slashdot.

Do Inaccurate Search Results Disrupt Democracies?

Users of Google “must recalibrate their thinking on what Google is and how information is returned to them,” warns an Assistant Professor at the School of Information and Library Science at UNC-Chapel Hill.

In a new book titled The Propagandists’ Playbook, they’re warning that simple link-filled search results have been transformed by “Google’s latest desire to answer our questions for us, rather than requiring us to click on the returns.” The trouble starts when Google returns inaccurate answers “that often disrupt democratic participation, confirm unsubstantiated claims, and are easily manipulatable by people looking to spread falsehoods.”

By adding all of these features, Google — as well as competitors such as DuckDuckGo and Bing, which also summarize content — has effectively changed the experience from an explorative search environment to a platform designed around verification, replacing a process that enables learning and investigation with one that is more like a fact-checking service…. The problem is, many rely on search engines to seek out information about more convoluted topics. And, as my research reveals, this shift can lead to incorrect returns… Worse yet, when errors like this happen, there is no mechanism whereby users who notice discrepancies can flag it for informational review….

The trouble is, many users still rely on Google to fact-check information, and doing so might strengthen their belief in false claims. This is not only because Google sometimes delivers misleading or incorrect information, but also because people I spoke with for my research believed that Google’s top search returns were “more important,” “more relevant,” and “more accurate,” and they trusted Google more than the news — they considered it to be a more objective source….

This leads to what I refer to in my book, The Propagandists’ Playbook, as the “IKEA effect of misinformation.” Business scholars have found that when consumers build their own merchandise, they value the product more than an already assembled item of similar quality — they feel more competent and therefore happier with their purchase. Conspiracy theorists and propagandists are drawing on the same strategy, providing a tangible, do-it-yourself quality to the information they provide. Independently conducting a search on a given topic makes audiences feel like they are engaging in an act of self-discovery when they are actually participating in a scavenger-hunt engineered by those spreading the lies….

Rather than assume that returns validate truth, we must apply the same scrutiny we’ve learned to have toward information on social media.

Another problem the article points out: “Googling the exact same phrase that you see on Twitter will likely return the same information you saw on Twitter.

“Just because it’s from a search engine doesn’t make it more reliable.”

Read more of this story at Slashdot.

After Mockery, Mark Zuckerberg Promises Better Metaverse Graphics, Posts New Avatar

What do you do when people hate your $10 billion selfie? “Mark Zuckerberg, in response to a torrent of critical memes mocking the graphics of Meta’s newest project, has heard his critics — and changed his selfie,” reports CNN:

Zuckerberg debuted Horizon Worlds, a virtual reality social app, in France and Spain earlier this week, sharing a somewhat flat, goofy digital avatar in front of an animated Eiffel Tower and la Sagrada Família.

The internet immediately jumped in, mocking what many users viewed as (hopefully) preliminary graphics for a venture on which Meta has spent at least $10 billion in the last year.

New York Times tech columnist Kevin Roose called the graphics “worse than a 2008 Wii game” on Twitter. Slate used the term “buttcheeks.” Twitter was less kind, with “eye-gougingly ugly” and “an international laughing stock” popping up. Many compared it to early ’90s graphics and pointed out how lifeless and childish the Zuckerberg selfie looked. It quickly won the designation “dead eyes.”
Well, Zuckerberg has apparently seen the memes, because on Friday he announced there are major updates coming — along with new avatar graphics.

In a CNBC report on how Zuckerberg “is getting dragged on the internet for how ugly the graphics of this game are,” they’d actually quoted a Forbes headline that asked, “Does Mark Zuckerberg not understand how bad his metaverse is?”

Read more of this story at Slashdot.