Edits To a Cholesterol Gene Could Stop the Biggest Killer On Earth

A volunteer in New Zealand has become the first person to undergo DNA editing in order to lower their blood cholesterol, a step that may foreshadow wide use of the technology to prevent heart attacks. MIT Technology Review reports: The experiment, part of a clinical trial by the US biotechnology company Verve Therapeutics, involved injecting a version of the gene-editing tool CRISPR in order to modify a single letter of DNA in the patient’s liver cells. According to the company, that tiny edit should be enough to permanently lower a person’s levels of “bad” LDL cholesterol, the fatty molecule that causes arteries to clog and harden with time. The patient in New Zealand had an inherited risk for extra-high cholesterol and was already suffering from heart disease. However, the company believes the same technique could eventually be used on millions of people in order to prevent cardiovascular disease.

In New Zealand, where Verve’s clinical trial is taking place, doctors will give the gene treatment to 40 people who have an inherited form of high cholesterol known as familial hypercholesterolemia, or FH. People with FH can have cholesterol readings twice the average, even as children. Many learn they have a problem only when they get hit with a heart attack, often at a young age. The study also marks an early use of base editing, a novel adaptation of CRISPR that was first developed in 2016. Unlike traditional CRISPR, which cuts a gene, base editing substitutes a single letter of DNA for another.

The gene Verve is editing is called PCSK9. It has a big role in maintaining LDL levels and the company says its treatment will turn the gene off by introducing a one-letter misspelling. […] One reason Verve’s base-editing technique is moving fast is that the technology is substantially similar to mRNA vaccines for covid-19. Just like the vaccines, the treatment consists of genetic instructions wrapped in a nanoparticle, which ferries everything into a cell. While the vaccine instructs cells to make a component of the SARS-CoV-2 virus, the particles in Verve’s treatment carry RNA directions for a cell to assemble and aim a base-editing protein, which then modifies that cell’s copy of PCSK9, introducing the tiny mistake. In experiments on monkeys, Verve found that the treatment lowered bad cholesterol by 60%. The effect has lasted more than a year in the animals and could well be permanent. The report notes that the human experiment does carry some risk. “Nanoparticles are somewhat toxic, and there have been reports of side effects, like muscle pain, in people taking other drugs to lower PCSK9,” reports MIT Technology Review. “And whereas treatment with ordinary drugs can be discontinued if problems come up, there’s as yet no plan to undo gene editing once it’s performed.”

Xbox Series X Can Run Windows 98, Along With Classic PC Games of The Era

Alex Battaglia from the YouTube channel “Digital Foundry” was able to use the “RetroArch” software emulator to run Windows 98 on the Xbox Series X, along with several PC games of the era. “Technically, you’re supposed to be an Xbox developer to access this, and you will need to sign up to the paid Microsoft Partner program and turn on ‘Developer Mode’ for your system to activate it,” notes Pure Xbox. “In DF’s case, rather than directly playing emulated games through RetroArch, they used the program to install Windows 98 software.” From the report: Beyond the novelty of actually booting up Win98 on a modern console, the channel then decided to test out some games running through the older version of Windows. Playthroughs of Turok, Command & Conquer, Quake 2 and more were all pretty successful, although the act of loading them onto the software requires a bit of messing about (you have to create ISO files and transfer them over — sadly, Xbox’s disc drive can’t read the original discs). Of course, this wouldn’t be a Digital Foundry video without some performance comparisons, so the team did just that. The video compares hardware of the era with Xbox Series X’s emulation, and while the console often lags behind because it’s literally emulating an entire version of Windows, and then a game on top of that, it fares pretty well overall. You can watch Digital Foundry’s video here.
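For readers curious about the disc-imaging step, here is a minimal sketch, assuming a Linux machine where a data disc shows up as /dev/cdrom (a hypothetical path; mixed-mode discs with audio tracks would need a dedicated ripping tool instead):

```python
# Minimal sketch: raw-copy a data CD into an .iso image that can then be
# transferred to the console. Device path and output name are assumptions.
import shutil

SOURCE_DEVICE = "/dev/cdrom"   # optical drive device node (varies by system)
OUTPUT_IMAGE = "game.iso"      # illustrative file name

with open(SOURCE_DEVICE, "rb") as disc, open(OUTPUT_IMAGE, "wb") as image:
    shutil.copyfileobj(disc, image, length=1024 * 1024)  # copy in 1 MiB chunks
```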

Physicists Discover a ‘Family’ of Robust, Superconducting Graphene Structures

In 2018, MIT researchers found that if two graphene layers are stacked at a very specific “magic” angle, the twisted bilayer structure could exhibit robust superconductivity, a widely sought material state in which an electrical current can flow with zero energy loss. Now the team reports that […] four and five graphene layers can be twisted and stacked at new magic angles to elicit robust superconductivity at low temperatures. Phys.Org reports: This latest discovery, published this week in Nature Materials, establishes the various twisted and stacked configurations of graphene as the first known “family” of multilayer magic-angle superconductors. The team also identified similarities and differences between graphene family members. The findings could serve as a blueprint for designing practical, room-temperature superconductors. If the properties among family members could be replicated in other, naturally conductive materials, they could be harnessed, for instance, to deliver electricity without dissipation or build magnetically levitating trains that run without friction.
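The report doesn’t give the new angles themselves; as a point of reference (a published theoretical scaling for alternating-twist stacks, not something stated in the article), the largest magic angle is expected to grow with the layer count n roughly as

\[
\theta^{(n)}_{\text{magic}} \;\approx\; 2\cos\!\left(\frac{\pi}{n+1}\right)\theta^{(2)}_{\text{magic}},
\qquad \theta^{(2)}_{\text{magic}} \approx 1.1^\circ,
\]

which works out to roughly 1.6 degrees for three layers, with slightly larger angles for four and five.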

In the current study, the team looked to level up the number of graphene layers. They fabricated two new structures, made from four and five graphene layers, respectively. Each structure is stacked alternately, similar to the shifted cheese sandwich of twisted trilayer graphene. The team kept the structures in a refrigerator below 1 kelvin (about -273 degrees Celsius), ran electrical current through each structure, and measured the output under various conditions, similar to tests for their bilayer and trilayer systems. Overall, they found that both four- and five-layer twisted graphene also exhibit robust superconductivity and a flat band. The structures also shared other similarities with their three-layer counterpart, such as their response under a magnetic field of varying strength, angle, and orientation.

These experiments showed that twisted graphene structures could be considered a new family, or class of common superconducting materials. The experiments also suggested there may be a black sheep in the family: The original twisted bilayer structure, while sharing key properties, also showed subtle differences from its siblings. For instance, the group’s previous experiments showed the structure’s superconductivity broke down under lower magnetic fields and was more uneven as the field rotated, compared to its multilayer siblings. The team carried out simulations of each structure type, seeking an explanation for the differences between family members. They concluded that twisted bilayer graphene’s superconductivity dies out under certain magnetic conditions simply because all of its physical layers exist in a “nonmirrored” form within the structure. In other words, there are no two layers in the structure that are mirror opposites of each other, whereas graphene’s multilayer siblings exhibit some sort of mirror symmetry. These findings suggest that the mechanism driving electrons to flow in a robust superconductive state is the same across the twisted graphene family.

Is Amazon’s AWS Quietly Getting Better at Contributing to Open Source?

“If I want AWS to ignore me completely all I have to do is open a pull request against one of their repositories,” quipped cloud economist Corey Quinn in April, while also complaining that the real problem is “how they consistently and in my opinion incorrectly try to shape a narrative where they’re contributing to the open source ecosystem at a level that’s on par with their big tech company peers.”

But on Friday tech columnist Matt Asay argued that AWS is quietly getting better at open source. “Agreed,” tweeted tech journalist Steven J. Vaughan-Nichols in response, commending “Good open source people, good open-source work.” (And Vaughan-Nichols later retweeted an AWS principal software engineer’s announcement that “Over at Amazon Linux we are hiring, and also trying to lead and better serve customers by being more involved in upstream communities.”) Mark Atwood, principal engineer for open source at Amazon, also joined Asay’s thread, tweeting “I’m glad that people are noticing. Me and my team have been doing heavy work for years to get to this point. Generally we don’t want to sit at the head of the table, but we are seeing the value of sitting at the table.”

Asay himself was AWS’s head of developer marketing/Open Source strategy for two years, leaving in August of 2021. But on Friday, Asay’s article noted a recent tweet in which AWS engineer Divij Vaidya announced he’d suddenly become one of the top 10 contributors to Apache Kafka after three months as the founding engineer for AWS’s Apache Kafka open source team. (Vaidya added “We are hiring for a globally distributed fully remote team to work on open source Apache Kafka! Join us.”)
Asay writes:
Apache Kafka is just the latest example of this…. This is exactly what critics have been saying AWS doesn’t do. And, for years, they were mostly correct.

AWS was, and is, far more concerned with taking care of customers than being popular with open-source audiences. So, the company has focused on being “the best place for customers to build and run open-source software in the cloud.” Historically, that tended to not involve or require contributing to the open-source projects it kept building managed services around. Many felt that was a mistake — that a company so dependent on open source for its business was putting its supply chain at risk by not sustaining the projects upon which it depended…

PostgreSQL contributor (and sometime AWS open-source critic) Paul Ramsey has noticed. As he told me recently, it “[f]eels like a switch flipped at AWS a year or two ago. The strategic value of being a real stakeholder in the software they spin is now recognized as being worth the dollars spent to make it happen….” What seems to be happening at AWS, if quietly and usually behind the scenes, is a shift toward AWS service teams taking greater ownership in the open-source projects they operationalize for customers. This allows them to more effectively deliver results because they can help shape the roadmap for customers, and it ensures AWS customers get the full open-source experience, rather than a forked repo with patches that pile up as technical debt.

Vaidya and the Managed Service for Kafka team are one example, along with Madelyn Olson, an engineer with AWS’s ElastiCache team and one of five core maintainers for Redis. And then there are the AWS employees contributing to Kubernetes, etcd and more. No, AWS is still not the primary contributor to most of these. Not yet. Google, Microsoft and Red Hat tend to top many of the charts, to Quinn’s point above. This also isn’t somehow morally wrong, as Quinn also argued: “Amazon (and any company) is there to make money, not be your friend.”
But slowly and surely, AWS product teams are discovering that a key element of obsessing over customers is taking care of the open-source projects upon which those customers depend. In other words, part of the “undifferentiated heavy lifting” that AWS takes on for customers needs to be stewardship for the open-source projects those same customers demand.

UPDATE: Reached for a comment today, Asay clarified his position on Quinn’s original complaints about AWS’s low level of open source contributions. “What I was trying to say was that while Corey’s point had been more-or-less true, it wasn’t really true anymore.”

‘I’m CEO of a Robotics Company, and I Believe AI’s Failed on Many Fronts’

“Aside from drawing photo-realistic images and holding seemingly sentient conversations, AI has failed on many promises,” writes the cofounder and CEO of Serve Robotics:
The resulting rise in AI skepticism leaves us with a choice: We can become too cynical and watch from the sidelines as winners emerge, or find a way to filter noise and identify commercial breakthroughs early to participate in a historic economic opportunity. There’s a simple framework for differentiating near-term reality from science fiction. We use the single most important measure of maturity in any technology: its ability to manage unforeseen events, commonly known as edge cases. As a technology hardens, it becomes more adept at handling increasingly infrequent edge cases and, as a result, gradually unlocks new applications…

Here’s an important insight: Today’s AI can achieve very high performance if it is focused on either precision or recall. In other words, it optimizes one at the expense of the other (i.e., fewer false positives in exchange for more false negatives, and vice versa). But when it comes to achieving high performance on both of those simultaneously, AI models struggle. Solving this remains the holy grail of AI….
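To make that tradeoff concrete, here is a minimal Python sketch (toy data, not from the article) that sweeps a classification threshold and prints the resulting precision and recall; a stricter threshold buys precision at the cost of recall, and vice versa.

```python
# Minimal sketch of the precision/recall tradeoff; labels and scores are toy data.
labels = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0]                                # ground truth (1 = positive)
scores = [0.95, 0.90, 0.80, 0.70, 0.65, 0.55, 0.45, 0.35, 0.25, 0.10]  # model confidence

def precision_recall(threshold):
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)  # true positives
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)  # false positives
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Raising the threshold cuts false positives (precision up) but misses more
# true positives (recall down).
for t in (0.2, 0.5, 0.8):
    p, r = precision_recall(t)
    print(f"threshold={t:.1f}  precision={p:.2f}  recall={r:.2f}")
```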

Delivery Autonomous Mobile Robots (AMRs) are the first application of urban autonomy to commercialize, while robo-taxis still await hi-fi AI performance that remains out of reach. The rate of progress in this industry, as well as our experience over the past five years, has strengthened our view that the best way to commercialize AI is to focus on narrower applications enabled by lo-fi AI, and use human intervention to achieve hi-fi performance when needed. In this model, lo-fi AI leads to early commercialization, and incremental improvements afterwards help drive business KPIs.
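A minimal sketch of that lo-fi-plus-human-fallback pattern, assuming a generic confidence-scored model; the function names and threshold below are illustrative, not Serve Robotics’ actual stack.

```python
# Illustrative sketch: a lo-fi model handles common cases and escalates
# low-confidence edge cases to a human operator. All names are assumptions.
def decide(observation, model, ask_human, confidence_threshold=0.9):
    action, confidence = model(observation)
    if confidence >= confidence_threshold:
        return action                  # common case: lo-fi AI is good enough
    return ask_human(observation)      # rare edge case: a person takes over

# Toy usage: a "model" that is unsure about unfamiliar scenes.
toy_model = lambda obs: ("proceed", 0.95) if obs == "clear sidewalk" else ("stop", 0.40)
operator = lambda obs: "wait for operator guidance"

print(decide("clear sidewalk", toy_model, operator))      # -> proceed
print(decide("construction zone", toy_model, operator))   # -> wait for operator guidance
```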

By targeting more forgiving use cases, businesses can use lo-fi AI to achieve commercial success early, while maintaining a realistic view of the multi-year timeline for achieving hi-fi capabilities.
After all, sci-fi has no place in business planning.

What Makes Workers ‘Thrive’? Microsoft Study Suggests Shorter Workweeks and Less Collaboration

Microsoft describes “thriving” at work as being “energized and empowered to do meaningful work.”

So Microsoft’s “people analytics” chief and its “culture measurements” director teamed up for a report in Harvard Business Review exploring “as we enter the hybrid work era… how thriving can be unlocked across different work locations, professions, and ways of working.”

ZDNet columnist Chris Matyszczyk took special note of the researchers’ observation that “Employees who weren’t thriving talked about experiencing siloes, bureaucracy, and a lack of collaboration,” asking playfully, “Does that sound like Microsoft to you?”

Klinghoffer and McCune were undeterred in their search for the secret of happiness. They examined those who spoke most positively about thriving at work and work-life balance. They reached a startling picture of a happy Microsoft employee. They said: “By combining sentiment data with de-identified calendar and email metadata, we found that those with the best of both worlds had five fewer hours in their workweek span, five fewer collaboration hours, three more focus hours, and 17 fewer employees in their internal network size.”

Five fewer collaboration hours? 17 fewer employees in their internal network? Does this suggest that the teamwork mantra isn’t working so well? Does it, in fact, intimate that collaboration may have become a buzzword for a collective that is more a bureaucracy than a truly productive organism?

Klinghoffer and McCune say collaboration isn’t bad in itself. However, they say: “It is important to be mindful of how intense collaboration can impact work-life balance, and leaders and employees alike should guard against that intensity becoming 24/7.”

If you’re a leader, you have a way to stop it. If you’re an employee, not so much.

The Microsoft researchers’ conclusion? That “thriving takes a village” (highlighting the importance of managers), and that “the most common thread among those who were not thriving was a feeling of exclusion — from a lack of collaboration to feeling left out of decisions to struggling with politics and bureaucracy.”

Matyszczyk’s conclusion? “It’s heartening to learn, though, that perhaps the most important element to making an employee happy at work is giving them time to, well, actually work.”
