Will AI Really Perform Transformative, Disruptive Miracles?

“An encounter with the superhuman is at hand,” argues Canadian novelist, essayist, and cultural commentator Stephen Marche in an article in The Atlantic titled “Of God and Machines”. He argues that GPT-3’s 175 billion parameters give it interpretive power “far beyond human understanding, far beyond what our little animal brains can comprehend. Machine learning has capacities that are real, but which transcend human understanding: the definition of magic.”

But even though AI’s inscrutability “is an industrial by-product of the process,” Marche argues, we may still not see what’s coming: that AI is “every bit as important and transformative as the other great tech disruptions, but more obscure, tucked largely out of view.”

Science fiction, and our own imagination, add to the confusion. We just can’t help thinking of AI in terms of the technologies depicted in Ex Machina, Her, or Blade Runner — people-machines that remain pure fantasy. Then there’s the distortion of Silicon Valley hype, the general fake-it-’til-you-make-it atmosphere that gave the world WeWork and Theranos: People who want to sound cutting-edge end up calling any automated process “artificial intelligence.” And at the bottom of all of this bewilderment sits the mystery inherent to the technology itself, its direct thrust at the unfathomable. The most advanced NLP programs operate at a level that not even the engineers constructing them fully understand.

But the confusion surrounding the miracles of AI doesn’t mean that the miracles aren’t happening. It just means that they won’t look how anybody has imagined them. Arthur C. Clarke famously said that “any sufficiently advanced technology is indistinguishable from magic.” Magic is coming, and it’s coming for all of us….

And if AI harnesses the power promised by quantum computing, everything I’m describing here would be the first dulcet breezes of a hurricane. Ersatz humans are going to be one of the least interesting aspects of the new technology. This is not an inhuman intelligence but an inhuman capacity for digital intelligence. An artificial general intelligence will probably look more like a whole series of exponentially improving tools than a single thing. It will be a whole series of increasingly powerful and semi-invisible assistants, a whole series of increasingly powerful and semi-invisible surveillance states, a whole series of increasingly powerful and semi-invisible weapons systems. The world would change; we shouldn’t expect it to change in any kind of way that you would recognize.

Our AI future will be weird and sublime and perhaps we won’t even notice it happening to us. The paragraph above was composed by GPT-3. I wrote up to “And if AI harnesses the power promised by quantum computing”; machines did the rest.

Stephen Hawking once said that “the development of full artificial intelligence could spell the end of the human race.” Experts in AI, even the men and women building it, commonly describe the technology as an existential threat. But we are shockingly bad at predicting the long-term effects of technology. (Remember when everybody believed that the internet was going to improve the quality of information in the world?) So perhaps, in the case of artificial intelligence, fear is as misplaced as that earlier optimism was.

AI is not the beginning of the world, nor the end. It’s a continuation. The imagination tends to be utopian or dystopian, but the future is human — an extension of what we already are…. Artificial intelligence is returning us, through the most advanced technology, to somewhere primitive, original: an encounter with the permanent incompleteness of consciousness…. They will do things we never thought possible, and sooner than we think. They will give answers that we ourselves could never have provided.

But they will also reveal that our understanding, no matter how great, is always and forever negligible. Our role is not to answer but to question, and to let our questioning run headlong, reckless, into the inarticulate.

Read more of this story at Slashdot.

GPU Mining No Longer Profitable After Ethereum Merge

Just one day after the Ethereum Merge, in which the cryptocurrency successfully switched from Proof of Work (PoW) to Proof of Stake (PoS), the profitability of GPU mining has completely collapsed. Tom’s Hardware reports: That means the best graphics cards should finally be back where they belonged, in your gaming PC, just as God intended. That’s a quick drop, considering yesterday there were still a few cryptocurrencies that were technically profitable. Looking at WhatToMine, and using the standard $0.10 per kWh, the best-case results are with the GeForce RTX 3090 and Radeon RX 6800 and 6800 XT. Those are technically showing slightly positive results, to the tune of around $0.06 per day after power costs. However, that doesn’t factor in the power cost for the rest of the PC, or the wear and tear on your graphics card.

Even at a slightly positive net result, it would still take over 20 years to break even on the cost of an RX 6800. We say that tongue-in-cheek, because if there’s one thing we know for certain, it’s that no one can predict what the cryptocurrency market will look like even one year out, never mind 20 years in the future. It’s a volatile market, and there are definitely lots of groups and individuals hoping to figure out a way to Make GPU Mining Profitable Again (MGMPA hats inbound…).

Of the 21 current-generation graphics cards from the AMD RX 6000-series and the Nvidia RTX 30-series, only five are theoretically profitable right now, and those are all just barely in the black. This is using data from NiceHash and WhatToMine, so perhaps there are ways to tune other GPUs into net-positive territory, but the bottom line is that no one should be using GPUs for mining right now, and certainly not buying more GPUs for mining purposes. [You can see a full list of current-generation graphics card profitability here.]
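
To make the break-even arithmetic above concrete, here is a minimal back-of-the-envelope sketch in Python. The $0.10 per kWh electricity rate and the roughly $0.06-per-day net figure come from the article; the card price and power-draw numbers are hypothetical placeholders.

    # Back-of-the-envelope GPU mining break-even calculator.
    # The electricity rate and ~$0.06/day net figure match the article;
    # the card price and power draw are hypothetical placeholders.
    CARD_PRICE_USD = 580.00           # assumed street price for an RX 6800
    GROSS_REVENUE_USD_PER_DAY = 0.30  # hypothetical gross mining revenue
    GPU_POWER_DRAW_KW = 0.100         # hypothetical draw while mining (kW)
    ELECTRICITY_USD_PER_KWH = 0.10    # the standard rate used above

    power_cost_per_day = GPU_POWER_DRAW_KW * 24 * ELECTRICITY_USD_PER_KWH
    net_usd_per_day = GROSS_REVENUE_USD_PER_DAY - power_cost_per_day
    years_to_break_even = CARD_PRICE_USD / net_usd_per_day / 365

    print(f"Net profit: ${net_usd_per_day:.2f}/day")       # $0.06/day
    print(f"Break-even: {years_to_break_even:.1f} years")  # ~26.5 years

At numbers like these, even a “profitable” card would take decades to pay for itself, which is the article’s point.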

Read more of this story at Slashdot.

Scientists Try To Teach Robot To Laugh At the Right Time

Laughter comes in many forms, from a polite chuckle to a contagious howl of mirth. Scientists are now developing an AI system that aims to recreate these nuances of humor by laughing in the right way at the right time. The Guardian reports: The team behind the laughing robot, which is called Erica, say that the system could improve natural conversations between people and AI systems. “We think that one of the important functions of conversational AI is empathy,” said Dr Koji Inoue, of Kyoto University, the lead author of the research, published in Frontiers in Robotics and AI. “So we decided that one way a robot can empathize with users is to share their laughter.”

Inoue and his colleagues have set out to teach their AI system the art of conversational laughter. They gathered training data from more than 80 speed-dating dialogues between male university students and the robot, who was initially teleoperated by four female amateur actors. The dialogue data was annotated for solo laughs, social laughs (where humor isn’t involved, such as in polite or embarrassed laughter) and laughter of mirth. This data was then used to train a machine learning system to decide whether to laugh, and to choose the appropriate type. It might feel socially awkward to mimic a small chuckle, but empathetic to join in with a hearty laugh. Based on the audio files, the algorithm learned the basic characteristics of social laughs, which tend to be more subdued, and mirthful laughs, with the aim of mirroring these in appropriate situations.
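
As a rough illustration of the two-stage decision described above (first whether to laugh at all, then which kind of laugh to produce), here is a toy sketch in Python. The features, labels, and training examples are invented stand-ins; the actual system was trained on annotated audio from the speed-dating dialogues.

    # Toy two-stage shared-laughter model, loosely mirroring the pipeline
    # described above. All features and data here are invented stand-ins.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical per-utterance features:
    # [user laugh intensity, laugh duration (seconds), conversational warmth]
    X = [[0.9, 1.2, 0.8], [0.2, 0.3, 0.1], [0.7, 0.9, 0.6], [0.1, 0.2, 0.4]]
    y_laugh = [1, 0, 1, 0]  # stage 1: should the robot laugh at all?
    y_type = [1, 0, 1, 0]   # stage 2: 1 = mirthful laugh, 0 = social laugh

    stage1 = LogisticRegression().fit(X, y_laugh)
    stage2 = LogisticRegression().fit(X, y_type)

    def respond(features):
        if stage1.predict([features])[0] == 0:
            return "stay silent"
        # Social laughs are subdued; mirthful laughs join in heartily.
        return "mirthful laugh" if stage2.predict([features])[0] else "social laugh"

    print(respond([0.8, 1.0, 0.7]))  # likely: mirthful laugh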

“Our biggest challenge in this work was identifying the actual cases of shared laughter, which isn’t easy because as you know, most laughter is actually not shared at all,” said Inoue. “We had to carefully categorize exactly which laughs we could use for our analysis and not just assume that any laugh can be responded to.” […] The team said laughter could help create robots with their own distinct character. “We think that they can show this through their conversational behaviours, such as laughing, eye gaze, gestures and speaking style,” said Inoue, although he added that it could take more than 20 years before it would be possible to have a “casual chat with a robot like we would with a friend.”

“One of the things I’d keep in mind is that a robot or algorithm will never be able to understand you,” points out Prof Sandra Wachter of the Oxford Internet Institute at the University of Oxford. “It doesn’t know you, it doesn’t understand you and doesn’t understand the meaning of laughter.”

“They’re not sentient, but they might get very good at making you believe they understand what’s going on.”

Read more of this story at Slashdot.

Crispr Gene-Editing Drugs Show Promise In Preliminary Study

Intellia Therapeutics reported encouraging early-stage study results for its Crispr gene-editing treatments, the latest sign that the pathbreaking technology could result in commercially available drugs in the coming years. The Wall Street Journal reports: Intellia said Friday that one of its treatments, code-named NTLA-2002, significantly reduced levels of a protein that causes periodic attacks of swelling in six patients with a rare genetic disease called hereditary angioedema, or HAE. In a separate study building on previously released trial data, Intellia’s treatment NTLA-2001 reduced a disease-causing protein by more than 90% in 12 people with transthyretin-mediated amyloidosis cardiomyopathy, or ATTR-CM, a genetic disease that can lead to heart failure.

Despite the positive results, questions remain about whether therapies based on Crispr will work safely and effectively, analysts said. Intellia’s latest studies involved a small number of patients and were disclosed in news releases; they haven’t been published in a peer-reviewed journal. The NTLA-2002 study results were presented at the Bradykinin Symposium in Berlin, a medical meeting focused on angioedema. The data came from small, so-called Phase 1 studies conducted in New Zealand and the U.K. that didn’t include control groups. Results from such early studies can be unreliable predictors of a drug’s safety and effectiveness once the compound is tested in larger numbers of patients. The findings nevertheless add to preliminary but promising evidence of the potential for drugs based on the gene-editing technology. Last year, Intellia said that NTLA-2001 reduced the disease-causing protein in patients with ATTR.

Read more of this story at Slashdot.

Customs Officials Have Copied Americans’ Phone Data at Massive Scale

SpzToid writes: U.S. government officials are adding data from as many as 10,000 electronic devices each year to a massive database they’ve compiled from cellphones, iPads and computers seized from travelers at the country’s airports, seaports and border crossings, leaders of Customs and Border Protection told congressional staff in a briefing this summer. The rapid expansion of the database and the ability of 2,700 CBP officers to access it without a warrant — two details not previously known about the database — have raised alarms in Congress about what use the government has made of the information, much of which is captured from people not suspected of any crime. CBP officials told congressional staff the data is maintained for 15 years.

Details of the database were revealed Thursday in a letter to CBP Commissioner Chris Magnus from Sen. Ron Wyden (D-Ore.), who criticized the agency for “allowing indiscriminate rifling through Americans’ private records” and called for stronger privacy protections. The revelations add new detail to what’s known about the expanding ways that federal investigators use technology that many Americans may not understand or consent to. Agents from the FBI and Immigration and Customs Enforcement, another Department of Homeland Security agency, have run facial recognition searches on millions of Americans’ driver’s license photos. They have tapped private databases of people’s financial and utility records to learn where they live. And they have gleaned location data from license-plate reader databases that can be used to track where people drive.

Read more of this story at Slashdot.

Increase in LED Lighting ‘Risks Harming Human and Animal Health’

Blue light from artificial sources is on the rise, which may have negative consequences for human health and the wider environment, according to a study. From a report: Academics at the University of Exeter have identified a shift in the kind of lighting technologies European countries are using at night to brighten streets and buildings. Using images produced by the International Space Station (ISS), they have found that the orange-coloured emissions from older sodium lights are rapidly being replaced by white-coloured emissions produced by LEDs. While LED lighting is more energy-efficient and costs less to run, the researchers say the increased blue light radiation associated with it is causing “substantial biological impacts” across the continent. The study also claims that previous research into the effects of light pollution has underestimated the impacts of blue light radiation.

Chief among the health consequences of blue light is its ability to suppress the production of melatonin, the hormone that regulates sleep patterns in humans and other organisms. Numerous scientific studies have warned that increased exposure to artificial blue light can worsen people’s sleeping habits, which in turn can lead to a variety of chronic health conditions over time. The increase in blue light radiation in Europe has also reduced the visibility of stars in the night sky, which the study says “may have impacts on people’s sense of nature.” Blue light can also alter the behavioural patterns of animals including bats and moths, as it can change their movements towards or away from light sources.

Read more of this story at Slashdot.

Adobe Thinks It Can Solve Netflix’s Password ‘Piracy’ Problem

Adobe thinks it has the answer to Netflix’s “password sharing” problem, which involves up to 46 million people according to a 2020 study. TorrentFreak reports: Adobe believes that since every user is different, any actions taken against an account should form part of a data-driven strategy designed to “measure, manage and monetize” password sharing. The company’s vision is for platforms like Netflix to deploy machine learning models that extract behavioral patterns associated with an account, to determine how the account is being used. These insights can determine which measures should be taken against an account, and how success or otherwise can be determined by monitoring the account in the following weeks or months. Ignoring the obviously creepy factors for a moment, Adobe’s approach does seem more sophisticated, even if the accompanying slide gives off a file-sharing-style “graduated response” vibe. That leads to the question of how much customer information Adobe would need to ensure that the right accounts are targeted, with the right actions, at the right time.

Adobe’s Account IQ is powered by Adobe Sensei, which in turn acts as the intelligence layer for Adobe Experience Platform. In theory, Adobe will know more about a streaming account than those using it, so the company should be able to predict the most effective course of action to reduce password sharing and/or monetize it, without annoying the account holder. But of course, if you’re monitoring customer accounts in such close detail, grabbing all available information is the obvious next step. Adobe envisions collecting data on how many devices are in use, how many individuals are active, and geographical locations — including distinct locations and span. This will then lead to a “sharing probability” conclusion, along with a usage pattern classification that should identify travelers, commuters, close family and friends, even the existence of a second home.
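
As a crude sketch of how the signals named above (devices, active individuals, distinct locations, and geographic span) could be folded into a single “sharing probability,” here is an invented scoring heuristic in Python. The weights and thresholds are purely illustrative; Adobe has not published its actual model.

    # Illustrative account-sharing risk score built from the signals the
    # article lists. Weights and thresholds are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class AccountActivity:
        devices: int             # devices seen on the account
        individuals: int         # estimated distinct viewers
        distinct_locations: int  # distinct geographic locations
        location_span_km: float  # max distance between login locations

    def sharing_probability(a: AccountActivity) -> float:
        """Return a 0-1 score; higher suggests password sharing."""
        score = 0.2 * min(a.devices / 10, 1.0)
        score += 0.3 * min(a.individuals / 6, 1.0)
        score += 0.2 * min(a.distinct_locations / 5, 1.0)
        score += 0.3 * min(a.location_span_km / 500, 1.0)
        return score

    # A single household vs. a widely shared account:
    print(sharing_probability(AccountActivity(3, 2, 2, 15)))    # ~0.25, low
    print(sharing_probability(AccountActivity(9, 6, 5, 1200)))  # ~0.98, high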

Given that excessive sharing is likely to concern platforms like Netflix, Adobe’s plan envisions a period of mass account monitoring followed by an on-screen “Excessive Sharing” warning in its dashboard. From there, legal streaming services can identify the accounts most responsible and begin preparing their “graduated response” towards changing behaviors. After monetizing those who can be monetized, those who refuse to pay can be identified and dumped. Or as Adobe puts it: “Return free-loaders to available market.” Finally, Adobe also suggests that its system can be used to identify customers who display good behavior. These users can be rewarded by eliminating authentication requirements, concurrent stream limits, and device registrations.

Read more of this story at Slashdot.

China Accuses the NSA of Hacking a Top University To Steal Data

hackingbear shares a report from Gizmodo: China claims that America’s National Security Agency used sophisticated cyber tools to hack into an elite research university on Chinese soil. The attack allegedly targeted the Northwestern Polytechnical University in Xi’an (not to be confused with a California school of the same name), which is highly ranked in the global university index for its science and engineering programs. The U.S. Justice Department has referred to the school as a “Chinese military university that is heavily involved in military research and works closely with the People’s Liberation Army,” painting it as a reasonable target for digital infiltration from an American perspective.

China’s National Computer Virus Emergency Response Center (CVERC) recently published a report attributing the hack to the Tailored Access Operations group (TAO), an elite team of NSA hackers that first became publicly known via the Snowden leaks back in 2013 and that helps the U.S. government break into networks all over the world for the purposes of intelligence gathering and data collection. [CVERC identified 41 TAO tools involved in the case.] One such tool, dubbed ‘Suctionchar,’ is said to have helped infiltrate the school’s network by stealing account credentials from remote management and file transfer applications to hijack logins on targeted servers. The report also mentions the exploitation of Bvp47, a Linux backdoor that has been used in previous hacking missions by the Equation Group, another elite NSA hacking team. According to CVERC, traces of Suctionchar have been found in many other Chinese networks besides Northwestern’s, and the agency has accused the NSA of launching more than 10,000 cyberattacks on China over the past several years.

On Sunday, the allegations against the NSA were escalated to a diplomatic complaint. Yang Tao, the director-general of American affairs at China’s Ministry of Foreign Affairs, published a statement affirming the CVERC report and claiming that the NSA had “seriously violated the technical secrets of relevant Chinese institutions and seriously endangered the security of China’s critical infrastructure, institutions and personal information, and must be stopped immediately.”

Read more of this story at Slashdot.

EA Announces Kernel-Level Anti-Cheat System For PC Games

Electronic Arts (EA) is launching a new kernel-level anti-cheat system, developed in-house, to protect its games from tampering and cheaters. It’ll debut first in FIFA 23, though not all of EA’s games will implement the system. The Verge reports: Kernel-level anti-cheat systems have drawn criticism from privacy and security advocates, as the drivers these systems use are complex and run at such a high level that, if there are security issues, developers have to be very quick to address them. EA says kernel-level protection is “absolutely vital” for competitive games like FIFA 23, as existing cheats operate in the kernel space, so games running in regular user mode can’t detect that tampering or cheating is occurring. “Unfortunately, the last few years have seen a large increase in cheats and cheat techniques operating in kernel-mode, so the only reliable way to detect and block these is to have our anti-cheat operate there as well,” explains [Elise Murphy, senior director of game security and anti-cheat at EA].

EA’s anti-cheat system (EAAC) will run at the kernel level, and only while a game with EAAC protection is running. EA says its anti-cheat processes shut down once a game does, and that the anti-cheat is limited in what data it collects on a system. “EAAC does not gather any information about your browsing history, applications that are not connected to EA games, or anything that is not directly related to anti-cheat protection,” says Murphy.

Read more of this story at Slashdot.