First Electric Autonomous Cargo Ship Launched In Norway

Zero emissions and, soon, zero crew: the world’s first fully electric autonomous cargo vessel was unveiled in Norway, a small but promising step toward reducing the maritime industry’s climate footprint. TechXplore reports: By shipping up to 120 containers of fertilizer from a plant in the southeastern town of Porsgrunn to the Brevik port a dozen kilometres (about eight miles) away, the much-delayed Yara Birkeland, shown off to the media on Friday, will eliminate the need for around 40,000 truck journeys a year that are now fueled by polluting diesel. The 80-meter, 3,200-deadweight tonne ship will soon begin two years of working trials during which it will be fine-tuned to learn to maneuver on its own.

The wheelhouse could disappear altogether in “three, four or five years”, said Yara chief executive Svein Tore Holsether, once the vessel makes its 7.5-nautical-mile trips on its own with the aid of sensors. “Quite a lot of the incidents happening on vessels are due to human error, because of fatigue for instance,” project manager Jostein Braaten said from the possibly doomed bridge. “Autonomous operating can enable a safe journey,” he said.

On board the Yara Birkeland, the traditional machine room has been replaced by eight battery compartments, giving the vessel a capacity of 6.8 MWh — sourced from renewable hydroelectricity. “That’s the equivalent of 100 Teslas,” said Braaten. The maritime sector, which is responsible for almost three percent of all man-made emissions, aims to cut its emissions by 40 percent by 2030 and 50 percent by 2050. Despite those targets, the sector’s emissions have risen in recent years.
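As a quick back-of-the-envelope check of that comparison (the per-car pack size below is an assumed round figure, not something given in the article):

```python
# Rough sanity check of the "equivalent of 100 Teslas" comparison.
# The 68 kWh pack is an assumption, roughly a mid-size Tesla battery,
# not a figure from the article.
ship_capacity_kwh = 6.8 * 1000             # the Yara Birkeland's 6.8 MWh in kWh
tesla_pack_kwh = 68                        # assumed per-car battery capacity
print(ship_capacity_kwh / tesla_pack_kwh)  # -> 100.0
```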

Read more of this story at Slashdot.

Alphabet Puts Prototype Robots To Work Cleaning Up Google’s Offices

The company announced today that its Everyday Robots Project — a team within its experimental X labs dedicated to creating “a general-purpose learning robot” — has moved some of its prototype machines out of the lab and into Google’s Bay Area campuses to carry out some light custodial tasks. The Verge reports: “We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices,” said Everyday Robots’ chief robot officer Hans Peter Brondmo in a blog post. “The same robot that sorts trash can now be equipped with a squeegee to wipe tables and use the same gripper that grasps cups can learn to open doors.”

The robots in question are essentially arms on wheels, with a multipurpose gripper on the end of a flexible arm attached to a central tower. There’s a “head” on top of the tower with cameras and sensors for machine vision and what looks like a spinning lidar unit on the side, presumably for navigation. As Brondmo indicates, these bots were first seen sorting out recycling when Alphabet debuted the Everyday Robot team in 2019. The big promise that’s being made by the company (as well as by many other startups and rivals) is that machine learning will finally enable robots to operate in “unstructured” environments like homes and offices.

Read more of this story at Slashdot.

Thousands of Firefox Users Accidentally Commit Login Cookies On GitHub

Thousands of Firefox cookie databases containing sensitive data are available on request from GitHub repositories, data potentially usable for hijacking authenticated sessions. The Register reports: These cookies.sqlite databases normally reside in the Firefox profiles folder. They’re used to store cookies between browsing sessions. And they’re findable by searching GitHub with specific query parameters, what’s known as a search “dork.” Aidan Marlin, a security engineer at London-based rail travel service Trainline, alerted The Register to the public availability of these files after reporting his findings through HackerOne and being told by a GitHub representative that “credentials exposed by our users are not in scope for our Bug Bounty program.”

Marlin then asked whether he could make his findings public and was told he’s free to do so. “I’m frustrated that GitHub isn’t taking its users’ security and privacy seriously,” Marlin told The Register in an email. “The least it could do is prevent results coming up for this GitHub dork. If the individuals who uploaded these cookie databases were made aware of what they’d done, they’d s*** their pants.”

Marlin acknowledges that affected GitHub users deserve some blame for failing to prevent their cookies.sqlite databases from being included when they committed code and pushed it to their public repositories. “But there are nearly 4.5k hits for this dork, so I think GitHub has a duty of care as well,” he said, adding that he’s alerted the UK Information Commissioner’s Office because personal information is at stake. Marlin speculates that the oversight is a consequence of committing code from one’s Linux home directory. “I imagine in most of the cases, the individuals aren’t aware that they’ve uploaded their cookie databases,” he explained. “A common reason users do this is for a common environment across multiple machines.”
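For anyone wondering whether their own repository is affected, here is a minimal sketch (not from The Register's report) that lists any cookies.sqlite files currently tracked by git; it assumes git is installed and that the script runs from inside the repository:

```python
# Sketch: flag Firefox cookie databases that git is already tracking.
# Assumes `git` is on PATH and the script runs inside the repository.
import subprocess

def tracked_cookie_databases():
    """Return tracked paths whose filename is cookies.sqlite."""
    output = subprocess.run(
        ["git", "ls-files"],               # every file tracked by git
        capture_output=True, text=True, check=True,
    ).stdout
    return [p for p in output.splitlines() if p.endswith("cookies.sqlite")]

if __name__ == "__main__":
    hits = tracked_cookie_databases()
    if hits:
        print("WARNING: cookie databases are tracked by git:")
        for path in hits:
            print("  " + path)
        print("Remove them with 'git rm --cached <path>' and add")
        print("'cookies.sqlite' to your .gitignore before the next push.")
    else:
        print("No cookies.sqlite files are tracked.")
```

Note that `git rm --cached` only untracks the file going forward; a database that has already been pushed remains in the repository's history, and the sessions it contains should be treated as compromised.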

Read more of this story at Slashdot.

Programmer Restores YouTube Dislike Counts With Browser Extension

An anonymous reader quotes a report from The Next Web: YouTube’s decision to hide dislike counts on videos has sparked anger and derision. One inventive programmer has attempted to restore the feature in a browser extension. The plugin currently uses the Google API to generate the dislike count. However, this functionality will be removed from December 13. “I’ll try to scrape as much data as possible until then,” the extension’s creator said on Reddit. “After that — total dislikes will be estimated using extension users as a sample.”
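For context, here is a minimal sketch of the kind of statistics lookup involved, assuming the extension queries the public YouTube Data API v3 videos endpoint (illustrative only, not the extension's actual code; the API key and video ID are placeholders):

```python
# Illustrative only: fetch a video's statistics, including dislikeCount,
# from the YouTube Data API v3. API_KEY and VIDEO_ID are placeholders.
import requests

API_KEY = "YOUR_API_KEY"
VIDEO_ID = "SOME_VIDEO_ID"

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/videos",
    params={"part": "statistics", "id": VIDEO_ID, "key": API_KEY},
    timeout=10,
)
resp.raise_for_status()
stats = resp.json()["items"][0]["statistics"]

# dislikeCount is the field slated for removal on December 13.
print("likes:   ", stats.get("likeCount"))
print("dislikes:", stats.get("dislikeCount", "no longer returned"))
```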

The alpha version isn’t perfect. It currently only works on videos for which the YouTube API returns a valid dislike count. The calculations could also be skewed by the userbase, which is unlikely to represent the average YouTube viewer. The developer said they’re exploring ways to mitigate this, such as comparing the downvotes collected from extension users to a cache of real downvotes. The results should also improve as uptake grows; a rough sketch of this kind of sample-based extrapolation appears below. The plugin could provide a useful service, but its greatest value may be as a potent symbol of protest. You can try it out here — but proceed at your own risk. If you want to check out the code, it’s been published on GitHub. Further reading: YouTube Co-Founder Predicts ‘Decline’ of the Platform Following Removal of Dislikes
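As for estimating dislikes from extension users once the API field disappears, one possible estimator (an illustration only, not necessarily the developer's method) is to scale the sampled like-to-dislike ratio up to a video's public like count:

```python
# Illustration of sample-based extrapolation: assumes extension users'
# like/dislike behaviour mirrors the video's wider audience, which is
# exactly the assumption the userbase skew could break.
def estimate_dislikes(public_like_count: int,
                      sample_likes: int,
                      sample_dislikes: int) -> int:
    """Scale the sampled dislike/like ratio up to the public like count."""
    if sample_likes == 0:
        return 0  # too little data from extension users to extrapolate
    return round(public_like_count * sample_dislikes / sample_likes)

# Example: 20,000 public likes; among extension users 500 liked and
# 75 disliked the video, giving an estimate of about 3,000 dislikes.
print(estimate_dislikes(20_000, 500, 75))  # -> 3000
```

If extension users downvote more readily than average viewers, an estimator like this runs high, which is the skew the developer says they are trying to mitigate.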

Read more of this story at Slashdot.

Tea and Coffee May Be Linked To Lower Risk of Stroke and Dementia, Study Finds

Drinking coffee or tea may be linked with a lower risk of stroke and dementia, according to the largest study of its kind. The Guardian reports: Strokes cause 10% of deaths globally, while dementia is one of the world’s biggest health challenges — 130 million are expected to be living with it by 2050. In the research, 365,000 people aged between 50 and 74 were followed for more than a decade. At the start the participants, who were involved in the UK Biobank study, self-reported how much coffee and tea they drank. Over the research period, 5,079 of them developed dementia and 10,053 went on to have at least one stroke.

Researchers found that people who drank two to three cups of coffee or three to five cups of tea a day, or a combination of four to six cups of coffee and tea, had the lowest risk of stroke or dementia. Those who drank two to three cups of coffee and two to three cups of tea daily had a 32% lower risk of stroke. These people had a 28% lower risk of dementia compared with those who did not drink tea or coffee. The research, by Yuan Zhang and colleagues from Tianjin Medical University, China, suggests drinking coffee alone or in combination with tea is also linked with lower risk of post-stroke dementia. “[W]hat generally happened is that the risk of stroke or dementia was lower in people who drank reasonably small amounts of coffee or tea compared to those who drank none at all, but that after a certain level of consumption, the risk started to increase again until it became higher than the risk to people who drank none,” said Kevin McConway, an emeritus professor of applied statistics at the Open University, who was not involved in the study.

“Once the coffee consumption got up to seven or eight cups a day, the stroke risk was greater than for people who drank no coffee, and quite a lot higher than for those who drank two or three cups a day.”

The study has been published in the journal PLOS Medicine.

Read more of this story at Slashdot.

New Book Warns CS Mindset and VC Industry are Ignoring Competing Values

So apparently three Stanford professors are offering some tough love to young people in the tech community. Mehran Sahami first worked at Google when it was still a startup (recruited to the company by Sergey Brin). Currently a Stanford CS professor, Sahami explained in 2019 that “I want students who engage in the endeavor of building technology to think more broadly about what are the implications of the things that they’re developing — how do they impact other people? I think we’ll all be better off.”

Now Sahami has teamed up with two more Stanford professors to write a book calling for “a mature reckoning with the realization that the powerful technologies dominating our lives encode within them a set of values that we had no role in choosing and that we often do not even see…”

At a virtual event at Silicon Valley’s Computer History Museum, the three professors discussed their new book, System Error: Where Big Tech Went Wrong and How We Can Reboot — and thoughtfully and succinctly distilled their basic argument. “The System Error that we’re describing is a function of an optimization mindset that is embedded in computer science, and that’s embedded in technology,” says political scientist Jeremy Weinstein (one of the book’s co-authors). “This mindset basically ignores the competing values that need to be ‘refereed’ as new products are designed. It’s also embedded in the structure of the venture capital industry that’s driving the growth of Silicon Valley and the growth of these companies, that prioritizes scale before we even understand anything about the impacts of technology in society. And of course it reflects the path that’s been paved for these tech companies to market dominance by a government that’s largely been in retreat from exercising any oversight.”

Sahami thinks our technological landscape should have a protective infrastructure like the one regulating our roads and highways. “It’s not a free-for-all where the ultimate policy is ‘If you were worried about driving safely then don’t drive.’” Instead there are lanes and traffic lights and speed bumps — an entire safe-driving infrastructure which arrived through regulation. Or (as their political science professor/co-author Rob Reich tells the site), “Massive system problems should not be framed as choices that can be made by individual consumers.”

Sahami also thinks breaking up big tech monopolies would just leave smaller “less equipped” companies to deal with the same problems — but that positive changes in behavior might instead come from government scrutiny. Reich also wants to see professional ethics (like the kind that are well-established in biomedical fields). “In the book we point the way forward on a number of different fronts about how to accelerate that…”

And he argues that at colleges, just one computing-ethics class isn’t enough. “Ethics must be embedded through the entire curriculum.”

Read more of this story at Slashdot.