Vietnam Demands Big Tech Localize Data Storage and Offices

Vietnam’s Ministry of Information and Communications updated cybersecurity laws this week to mandate that Big Tech and telecoms companies store user data locally and control that data with local entities. The Register reports: The data affected goes beyond the basics of name, email, credit card information, phone number and IP address, and extends into social elements — including groups of which users are members, or the friends with whom they digitally interact. “Data of all internet users ranging from financial records and biometric data to information on people’s ethnicity and political views, or any data created by users while surfing the internet must be stored domestically,” read the decree (PDF) issued Wednesday, as translated by Reuters. The decree applies to a wide swath of businesses, including those providing telecom services, storing and sharing data in cyberspace, providing national or international domain names for users in Vietnam, e-commerce, online payments, payment intermediaries, transport connection services operating in cyberspace, social media, online video games, messaging services, and voice or video calls.

According to Article 26 of the government’s Decree 53, the new rules go into effect October 1, 2022 — around seven weeks from the date of its announcement. However, foreign companies have 12 months to comply, beginning when they receive instructions from the Minister of Public Security. The companies are then required to store the data in Vietnam for a minimum of 24 months; system logs will need to be stored for 12 months. After this grace period, authorities reserve the right to make sure affected companies are following the law through investigations and data collection requests, as well as content removal orders. Further reading: Vietnam To Make Apple Watch, MacBook For First Time Ever

Read more of this story at Slashdot.

Linux 6.0 Arrives With Performance Improvements and More Rust Coming

Linux creator Linus Torvalds has announced the first release candidate for the Linux kernel version 6.0, but he says the major number change doesn’t signify anything especially different about this release. ZDNet: While there is nothing fundamentally different about this release compared with 5.19, Torvalds noted that there were over 13,500 non-merge commits and over 800 merge commits, meaning “6.0 looks to be another fairly sizable release.” According to Torvalds, most of the updates are improvements to the GPU, networking and sound. Torvalds stuck to his word after releasing Linux kernel 5.19 last month, when he flagged he would likely call the next release 6.0 because he’s “starting to worry about getting confused by big numbers again.”

On Sunday’s release of Linux 6.0 release candidate 1 (rc1), he explained his reasoning behind choosing a new major version number and its purpose for developers. Again, it’s about avoiding confusion rather than signaling that the release has major new features. His threshold for changing the lead version number was .20, because it is difficult to remember incremental version numbers beyond that. “Despite the major number change, there’s nothing fundamentally different about this release – I’ve long eschewed the notion that major numbers are meaningful, and the only reason for a ‘hierarchical’ numbering system is to make the numbers easier to remember and distinguish,” said Torvalds. Torvalds lamented that some Rust-enabling code didn’t make it into the release. The Register adds: “I actually was hoping that we’d get some of the first rust infrastructure, and the multi-gen LRU VM, but neither of them happened this time around,” he mused, before observing, “There’s always more releases.” “This is one of those releases where you should not look at the diffstat too closely, because more than half of it is yet another AMD GPU register dump,” he added, noting that Intel’s Gaudi2 AI processors are also likely to produce plenty of similar kernel additions. “The CPU people also show up in the JSON files that describe the perf events, but they look absolutely tiny compared to the ‘asic_reg’ auto-generated GPU and AI hardware definitions,” he added.
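The numbering rule Torvalds describes can be sketched as a tiny function. This is purely illustrative — the function name and the `threshold` parameter are ours, chosen to capture the “.20” cutoff he mentions, not anything from the kernel tree:

```python
def next_kernel_version(major: int, minor: int, threshold: int = 20) -> str:
    """Return the next release number under the 'avoid big minor numbers' rule:
    once the minor number would reach the threshold, bump the major instead."""
    if minor + 1 >= threshold:
        return f"{major + 1}.0"   # e.g. 5.19 -> 6.0
    return f"{major}.{minor + 1}"

print(next_kernel_version(5, 19))  # 6.0
print(next_kernel_version(6, 0))   # 6.1
```

The same rule also reproduces the earlier jump from 3.19 to 4.0 that Torvalds made for the same reason.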

Read more of this story at Slashdot.

Google’s New Bug Bounties Include Their Custom Linux Kernel’s Experimental Security Mitigations

Google uses Linux “in almost everything,” according to the leader of Google’s “product security response” team — including Chromebooks, Android smartphones, and even Google Cloud.

“Because of this, we have heavily invested in Linux’s security — and today, we’re announcing how we’re building on those investments and increasing our rewards.”

In 2020, we launched an open-source Kubernetes-based Capture-the-Flag (CTF) project called kCTF. The kCTF Vulnerability Rewards Program lets researchers connect to our Google Kubernetes Engine (GKE) instances, and if they can hack it, they get a flag and are potentially rewarded.

All of GKE and its dependencies are in scope, but every flag caught so far has been a container breakout through a Linux kernel vulnerability.

We’ve learned that finding and exploiting heap memory corruption vulnerabilities in the Linux kernel could be made a lot harder. Unfortunately, security mitigations are often hard to quantify; however, we think we’ve found a way to do so concretely going forward….

First, we are indefinitely extending the increased reward amounts we announced earlier this year, meaning we’ll continue to pay $20,000 to $91,337 USD for vulnerabilities on our lab kCTF deployment to reward the important work being done to understand and improve kernel security. This is in addition to our existing patch rewards for proactive security improvements.

Second, we’re launching new instances with additional rewards to evaluate the latest Linux kernel stable image as well as new experimental mitigations in a custom kernel we’ve built. Rather than simply learning about the current state of the stable kernels, the new instances will be used to ask the community to help us evaluate the value of both our latest and more experimental security mitigations. Today, we are starting with a set of mitigations we believe will make most of the vulnerabilities (9/10 vulns and 10/13 exploits) we received this past year more difficult to exploit. For new exploits of vulnerabilities submitted which also compromise the latest Linux kernel, we will pay an additional $21,000 USD. For those which compromise our custom Linux kernel with our experimental mitigations, the reward will be another $21,000 USD (if they are clearly bypassing the mitigations we are testing). This brings the total rewards up to a maximum of $133,337 USD.
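The reward math above can be checked directly: the top of the base kCTF range plus the two $21,000 bonuses gives the stated maximum. A quick sketch, using only figures from Google’s announcement (the variable names are ours):

```python
BASE_MAX = 91_337              # top of the $20,000-$91,337 kCTF range
LATEST_KERNEL_BONUS = 21_000   # exploit also compromises the latest stable kernel
MITIGATION_BONUS = 21_000      # exploit bypasses the experimental mitigations

max_total = BASE_MAX + LATEST_KERNEL_BONUS + MITIGATION_BONUS
print(max_total)  # 133337
```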

We hope this will allow us to learn more about how hard (or easy) it is to bypass our experimental mitigations….

With the kCTF VRP program, we are building a pipeline to analyze, experiment, measure and build security mitigations to make the Linux kernel as safe as we can with the help of the security community. We hope that, over time, we will be able to make security mitigations that make exploitation of Linux kernel vulnerabilities as hard as possible.

“We don’t care about vulnerabilities; we care about exploits,” Vela told the Register. “We expect the vulnerabilities are there, they will get patched, and that’s nice and all. But the whole idea is what to do beyond just patching a couple of vulnerabilities.”
In total, Google paid out $8.7 million in rewards to almost 700 researchers across its various VRPs last year. “We are just one actor in the whole community that happens to have economic resources, financial resources, but we need the community to help us make the kernel better,” Vela said.

“If the community is engaged and helps us validate the mitigations that we have, then, we will continue growing on top of that. But the whole idea is that we need to see where the community wants us to go with this….”

[I]t’s not always about the cash payout, according to Vela, and different bug hunters have different motivations. Some want money, some want fame and some just want to solve an interesting problem, Vela said. “We are trying to find the right combination to captivate people.”

Read more of this story at Slashdot.

Impact Crater May Be Dinosaur Killer’s Baby Cousin

Researchers have discovered a second impact crater on the other side of the Atlantic that could have finished off what was left of the dinosaurs, after an asteroid known as Chicxulub slammed into what is now the Gulf of Mexico 66 million years ago. The BBC reports: Dubbed Nadir Crater, the new feature sits more than 300m below the seabed, some 400km off the coast of Guinea, west Africa. With a diameter of 8.5km, it’s likely the asteroid that created it was a little under half a kilometre across. The hidden depression was identified by Dr Uisdean Nicholson from Heriot-Watt University, Edinburgh, UK. […] “Our simulations suggest this crater was caused by the collision of a 400m-wide asteroid in 500-800m of water,” explained Dr Veronica Bray from the University of Arizona, US. “This would have generated a tsunami over one kilometre high, as well as an earthquake of Magnitude 6.5 or so. “The energy released would have been around 1,000 times greater than that from the January 2022 eruption and tsunami in Tonga.”

Dr Nicholson’s team has to be cautious about tying the two impacts together. Nadir has been given a very similar date to Chicxulub based on an analysis of fossils of known age that were drilled from a nearby borehole. But to make a definitive statement, rocks in the crater itself would need to be pulled up and examined. This would also confirm Nadir is indeed an asteroid impact structure and not some other, unrelated feature caused by, for example, ancient volcanism. […] Prof Sean Gulick, who co-led the recent project to drill into the Chicxulub Crater, said Nadir might have fallen to Earth on the same day. Or it might have struck the planet a million or two years either side of the Mexican cataclysm. Scientists will only know for sure when rocks from the west African crater are inspected in the lab. “A much smaller cousin, or sister, doesn’t necessarily add to what we know about the dinosaurs’ extinction, but it does add to our understanding of the astronomical event that was Chicxulub,” the University of Texas at Austin researcher told BBC News.

Read more of this story at Slashdot.

San Francisco Restaurant Claims To Be First To Run Entirely By Robots

Mezli isn’t the first automated restaurant to roll out in San Francisco, but, at least according to its three co-founders, it’s the first to remove humans entirely from the on-site operation equation. Eater SF reports: About two years and a few million dollars later, Mezli co-founders Alex Kolchinski, Alex Gruebele, and Max Perham are days away from firing up the touch screens at what they believe to be the world’s first fully robotic restaurant. To be clear, Mezli isn’t a restaurant in the traditional sense. As in, you won’t be able to pull up a seat and have a friendly server — human, robot, or otherwise — take your order and deliver your food. Instead, Mezli works more like if a vending machine and a restaurant had a robot baby, Kolchinski describes. It’s a way to get fresh food to a lot of people, really fast (the box can pump out about 75 meals an hour), and, importantly, at a lower price; the cheapest Mezli bowl starts at $6.99.

On its face, the concept actually sounds pretty simple. The co-founders built what’s essentially a big, refrigerated shipping container and stuffed it with machines capable of portioning out ingredients, putting those ingredients into bowls, heating the food up, and then moving it to a place where diners can get to it. But in a technical sense, the co-founders say it was quite difficult to work out. Most automated restaurants still require humans in some capacity; maybe people take orders while robots make the food or, vice versa, with automated ordering and humans prepping food behind the scenes. But Mezli can run on its own, serving hundreds of meals without any human staff.

The food does get prepped and pre-cooked off-site by good old-fashioned carbon-based beings. Mezli founding chef Eric Minnich, who previously worked at Traci Des Jardins’s the Commissary and at Michelin-starred Madera at Rosewood Sand Hill hotel, says he and a lean team of just two other people can handle all the chopping, mixing, cooking, and portioning at a commissary kitchen. Then, once a day, they load all the menu components into the big blue-and-white Mezli box. Inside the box, there’s an oven that either brings the ingredients up to temp or finishes up the last of the cooking. Cutting down on labor marks a key cost-saving measure in the Mezli business model; with just a fraction of the staff, as in less than a half dozen workers, Mezli can serve hundreds of meals. “The fully robot-run restaurant begins taking orders and sliding out Mediterranean grain bowls by the end of this week with plans to celebrate a grand opening on August 28 at Spark Social,” notes Eater.

Read more of this story at Slashdot.

Drought-Stricken States To Get Less From Colorado River

For the second year in a row, Arizona and Nevada will face cuts in the amount of water they can draw from the Colorado River as the West endures an extreme drought, federal officials announced Tuesday. The Associated Press reports: The cuts planned for next year will force states to make critical decisions about where to reduce consumption and whether to prioritize growing cities or agricultural areas. The cuts will also place state officials under renewed pressure to plan for a hotter, drier future and a growing population. Mexico will also face cuts. “We are taking steps to protect the 40 million people who depend on the Colorado River for their lives and livelihoods,” said Camille Touton, commissioner of the Bureau of Reclamation.

The river provides water across seven states and in Mexico and helps feed an agricultural industry valued at $15 billion a year. Cities and farms are anxiously awaiting official estimates of the river’s future water levels that will determine the extent and scope of cuts to their water supply. That’s not all. In addition to those already-agreed-to cuts, the Bureau of Reclamation said Tuesday that states had missed a deadline to propose at least 15% more cuts needed to keep water levels at the river’s storage reservoirs from dropping even more. For example, officials have predicted that water levels at Lake Mead, the nation’s largest reservoir, will plummet further. The lake is currently less than a quarter full. “The states collectively have not identified and adopted specific actions of sufficient magnitude that would stabilize the system,” Touton said.

Read more of this story at Slashdot.

Microsoft Employees Exposed Own Company’s Internal Logins

Multiple people who appear to be employees of Microsoft have exposed sensitive login credentials to the company’s own infrastructure on GitHub, potentially offering attackers a gateway into internal Microsoft systems, according to a cybersecurity research firm that found the exposed credentials. Motherboard reports: “We continue to see that accidental source code and credential leakages are part of the attack surface of a company, and it’s becoming more and more difficult to identify in a timely and accurate manner. This is a very challenging issue for most companies these days,” Mossab Hussein, chief security officer at cybersecurity firm spiderSilk which discovered the issue, told Motherboard in an online chat. Hussein provided Motherboard with seven examples in total of exposed Microsoft logins. All of these were credentials for Azure servers. Azure is Microsoft’s cloud computer service and is similar to Amazon Web Services. All of the exposed credentials were associated with an official Microsoft tenant ID. A tenant ID is a unique identifier linked to a particular set of Azure users. One of the GitHub users also listed Microsoft on their profile.

Three of the seven login credentials were still active when spiderSilk discovered them, with one seemingly uploaded just days ago at the time of writing. The other four sets of credentials were no longer active but still highlighted the risk of workers accidentally uploading keys for internal systems. Microsoft refused to elaborate on what systems the credentials were protecting when asked multiple times by Motherboard. But generally speaking, an attacker may have an opportunity to move onto other points of interest after gaining initial access to an internal system. One of the GitHub profiles with exposed and active credentials makes a reference to the Azure DevOps code repository. Highlighting the risk that such credentials may pose, in an apparently unrelated hack in March, attackers gained access to an Azure DevOps account and then published a large amount of Microsoft source code, including for Bing and Microsoft’s Cortana assistant. “We’ve investigated and have taken action to secure these credentials,” said a Microsoft spokesperson in a statement. “While they were inadvertently made public, we haven’t seen any evidence that sensitive data was accessed or the credentials were used improperly. We’re continuing to investigate and will continue to take necessary steps to further prevent inadvertent sharing of credentials.”
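The kind of leak described above — GUID-shaped Azure tenant IDs sitting next to credential material in committed files — is exactly what secret-scanning tools look for. A minimal sketch of the idea; the regex, keyword list, and sample line are illustrative, not spiderSilk’s actual method:

```python
import re

# Azure tenant IDs are GUIDs; flag GUID-shaped strings that appear on
# the same line as a credential-ish keyword.
GUID_RE = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
    r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"
)
KEYWORDS = ("tenant", "client_secret", "password", "connectionstring")

def suspicious_lines(text: str) -> list[str]:
    """Return lines containing a GUID alongside a credential keyword."""
    hits = []
    for line in text.splitlines():
        lower = line.lower()
        if GUID_RE.search(line) and any(k in lower for k in KEYWORDS):
            hits.append(line.strip())
    return hits

sample = 'AZURE_TENANT_ID = "12345678-90ab-cdef-1234-567890abcdef"'
print(suspicious_lines(sample))
```

Real scanners (GitHub’s own secret scanning, or tools like truffleHog) use far richer pattern sets and entropy checks, but the principle — pattern-match committed text before it ships — is the same.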

Read more of this story at Slashdot.

An Eye Implant Engineered From Proteins In Pigskin Restored Sight In 14 Blind People

According to a new study published in the journal Nature Biotechnology, researchers implanted corneas made from pig collagen to restore sight in 20 people who were blind or visually impaired. “Fourteen of the patients were blind before they received the implant, but two years after the procedure, they had regained some or all of their vision,” notes NBC News. “Three had perfect vision after the surgery.” From the report: The patients, in Iran and India, all suffered from keratoconus, a condition in which the protective outer layer of the eye progressively thins and bulges outward. “We were surprised with the degree of vision improvement,” said Neil Lagali, a professor of experimental ophthalmology at Linköping University in Sweden who co-authored the study. Not all patients experienced the same degree of improvement, however. The 12 Iranian patients wound up with an average visual acuity of 20/58 with glasses; functional vision is defined as 20/40 or better with lenses. Nonetheless, Dr. Marian Macsai, a clinical professor of ophthalmology at the University of Chicago who wasn’t involved in the study, said the technology could be a game changer for those with keratoconus, which affects roughly 50 to 200 out of every 100,000 people. It might also have applications for other forms of corneal disease.

To create the implant, Lagali and his team dissolved pig tissue to form a purified collagen solution. That was used to engineer a hydrogel that mimics the human cornea. Surgeons then made an incision in a patient’s cornea for the hydrogel. “We insert our material into this pocket to thicken the cornea and to reshape it so that it can restore the cornea’s function,” Lagali said. Traditionally, human tissue is required for cornea transplants. But it’s in short supply, because people must volunteer to donate it after they die. So, Lagali said, his team was looking for a low-cost, widely available substitute. “Collagen from pigskin is a byproduct from the food industry,” he said. “This makes it broadly available and easier to procure.” After two years, the patients’ bodies hadn’t rejected the implants, and they didn’t have any inflammation or scarring.

But any experimental medical procedure comes with risk. In this case, Soiberman said, a foreign molecule like collagen could induce an immune reaction. The researchers prescribed patients an eight-week course of immunosuppressive eyedrops to lower the risk, which is less than the amount given to people who receive cornea transplants from human tissue. In those cases, patients take immunosuppressive medicine for more than a year, Lagali said. “There’s always a risk for rejection of the human donor tissue because it contains foreign cells,” he said. “Our implant does not contain any cells … so there’s a minimal risk of rejection.” The procedure itself was also quicker than traditional cornea transplants. The researchers said each operation took about 30 minutes, whereas transplants of human tissue can take a couple of hours. […] It’s not yet clear whether the surgery would work for patients who have other forms of corneal disease aside from keratoconus.

Read more of this story at Slashdot.