Scientists Say They Can Read Nearly the Whole Genome of an IVF-Created Embryo

sciencehabit shares a report from Science.org: A California company says it can decipher almost all the DNA code of a days-old embryo created through in vitro fertilization (IVF) — a challenging feat because of the tiny volume of genetic material available for analysis. The advance depends on fully sequencing both parents’ DNA and “reconstructing” an embryo’s genome with the help of those data. And the company suggests it could make it possible to forecast risk for common diseases that develop decades down the line. Currently, such genetic risk prediction is being tested in adults, and sometimes offered clinically. The idea of applying it to IVF embryos has generated intense scientific and ethical controversy. But that hasn’t stopped the technology from galloping ahead.
Predicting a person’s chance of a specific illness by blending this genetic variability into what’s called a “polygenic risk score” remains under study in adults, in part because our understanding of how gene variants come together to drive or protect against disease remains a work in progress. In embryos it’s even harder to prove a risk score’s accuracy, researchers say. The new work on polygenic risk scores for IVF embryos is “exploratory research,” says Premal Shah, CEO of MyOme, the company reporting the results. Today in Nature Medicine, the MyOme team, led by company co-founders and scientists Matthew Rabinowitz and Akash Kumar, along with colleagues elsewhere, describe creating such scores by first sequencing the genomes of 10 pairs of parents who had already undergone IVF and had babies. The researchers then used data collected during the IVF process: The couples’ embryos, 110 in all, had undergone limited genetic testing at that time, a sort of spot sequencing of cells, called microarray measurements. Such analysis can test for an abnormal number of chromosomes, certain genetic diseases, and rearrangements of large chunks of DNA, and it has become an increasingly common part of IVF treatment in the United States. By combining these patchy embryo data with the more complete parental genome sequences, and applying statistical and population genomics techniques, the researchers could account for the gene shuffling that occurs during reproduction and calculate which chromosomes each parent had passed down to each embryo. In this way, they could predict much of that embryo’s DNA.
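The arithmetic at the core of such a score is simple even if validating it is not: a polygenic risk score is essentially a weighted sum over risk-associated variants, with each weight taken from published association studies. Here is a minimal sketch of that calculation; the variant IDs, weights, and allele counts below are hypothetical, and MyOme's actual models are certainly far more elaborate.

```go
package main

import "fmt"

// scoreVariant is one site in a polygenic risk score model: an effect
// weight (beta) from a published association study, plus the number of
// effect alleles (0, 1, or 2) this genome carries at that site.
type scoreVariant struct {
	id     string
	beta   float64
	dosage int
}

// polygenicRiskScore computes the weighted sum at the heart of PRS
// models: score = sum over variants of (beta * dosage).
func polygenicRiskScore(variants []scoreVariant) float64 {
	var score float64
	for _, v := range variants {
		score += v.beta * float64(v.dosage)
	}
	return score
}

func main() {
	// Hypothetical weights and dosages, for illustration only.
	embryo := []scoreVariant{
		{"rsA", 0.12, 2},
		{"rsB", -0.05, 1},
		{"rsC", 0.31, 0},
	}
	fmt.Printf("raw polygenic risk score: %.3f\n", polygenicRiskScore(embryo))
}
```

In practice a raw sum like this is standardized against a reference population before being interpreted as a relative risk, and, as the researchers note, the weights themselves remain a work in progress.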

The researchers had a handy way to see whether their reconstruction was accurate: Check the couples’ babies. They collected cheek swab samples from the babies and sequenced their full genomes, just as they’d done with the parents. They then compared that “true sequence” with the reconstructed genome for the embryo from which the child originated. The comparison revealed, essentially, a match: For a 3-day-old embryo, at least 96% of the reconstructed genome aligned with the inherited gene variants in the corresponding baby; for a 5-day-old embryo, it was at least 98%. (Because much of the human genome is the same across all people, the researchers focused on the DNA variability that made the parents, and their babies, unique.) Once they had the reconstructed embryo genomes in hand, the researchers turned to published data from large genomic studies of adults with or without common chronic diseases and the polygenic risk score models that were derived from that information. Then, MyOme applied those models to the embryos, crunching polygenic risk scores for 12 diseases, including breast cancer, coronary artery disease, and type 2 diabetes. The team also experimented with combining the reconstructed embryo sequence of single genes, such as BRCA1 and BRCA2, that are known to dramatically raise risk of certain diseases, with an embryo’s polygenic risk scores for that condition — in this case, breast cancer.
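The validation step described above boils down to a concordance calculation: at each variable site, does the reconstructed embryo genotype agree with the genotype sequenced from the corresponding baby? A rough sketch of that comparison follows; the 0/1/2 allele-count encoding is a simplification of what a real variant-calling pipeline would compare.

```go
package main

import "fmt"

// concordance returns the fraction of sites where the reconstructed
// embryo genotype matches the genotype observed in the born child.
// Genotypes are encoded here as effect-allele counts (0, 1, or 2).
func concordance(reconstructed, truth []int) float64 {
	if len(reconstructed) != len(truth) || len(truth) == 0 {
		return 0
	}
	matches := 0
	for i := range truth {
		if reconstructed[i] == truth[i] {
			matches++
		}
	}
	return float64(matches) / float64(len(truth))
}

func main() {
	// Toy data: 9 of 10 sites agree, i.e. 90% concordance. The paper
	// reports at least 96% (day-3 embryos) and 98% (day-5 embryos).
	embryo := []int{0, 1, 2, 1, 0, 2, 1, 1, 0, 2}
	baby := []int{0, 1, 2, 1, 0, 2, 1, 1, 0, 1}
	fmt.Printf("concordance: %.0f%%\n", concordance(embryo, baby)*100)
}
```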

Read more of this story at Slashdot.

Apple’s New Studio Display Has 64GB of Onboard Storage

New submitter Dru Nemeton shares a report from 9to5Mac: Apple’s new Studio Display officially hit the market on Friday, and we continue to learn new tidbits about what exactly is inside the machine. While Apple touted that the Studio Display is powered by an A13 Bionic inside, we’ve since learned that the Studio Display also features 64GB of onboard storage, because who knows why… […] as first spotted by Khaos Tian on Twitter, the Studio Display also apparently features 64GB of onboard storage. Yes, 64GB: double the storage in the entry-level Apple TV 4K and the same amount of storage in the entry-level iPad Air 5. Also worth noting: the Apple TV 4K is powered by the A12 Bionic chip, so the Studio Display has it beat on that front as well. Apple hasn’t offered any explanation for why the Studio Display features 64GB of onboard storage. It appears that less than 2GB of that storage is actually being used as of right now.

One unexciting possibility is that the A13 Bionic chip used inside the Studio Display is literally the exact same A13 Bionic chip that was first shipped in the iPhone 11. As you might remember, the iPhone 11 came with 64GB of storage in its entry-level configuration, meaning Apple likely produced millions of A13 Bionic chips with 64GB of onboard storage. What do you think? Will Apple ever tap into the A13 Bionic chip and 64GB storage inside the Studio Display for something more interesting?

Read more of this story at Slashdot.

Linux Random Number Generator Sees Major Improvements

An anonymous Slashdot reader summarizes some important news from the web page of Jason Donenfeld (creator of the open-source VPN protocol WireGuard):

The Linux kernel’s random number generator has seen its first set of major improvements in over a decade, improving everything from the cryptography to the interface used. Not only does it finally retire SHA-1 in favor of BLAKE2s [in Linux kernel 5.17], but it also at long last unites ‘/dev/random’ and ‘/dev/urandom’ [in the upcoming Linux kernel 5.18], finally ending years of Slashdot banter and debate:

The most significant outward-facing change is that /dev/random and /dev/urandom are now exactly the same thing, with no differences between them at all, thanks to their unification in the commit “random: block in /dev/urandom”. This removes a significant age-old crypto footgun, already accomplished by other operating systems eons ago. […] The upshot is that every Internet message board disagreement on /dev/random versus /dev/urandom has now been resolved by making everybody simultaneously right! Now, for the first time, these are both the right choice to make, in addition to getrandom(0); they all return the same bytes with the same semantics. There are only right choices.
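In practical terms, application code no longer has to agonize over which interface to draw from. Here is a small Go sketch of the three now-equivalent sources on a Linux 5.18+ system; it assumes the golang.org/x/sys/unix package, which wraps the raw getrandom(2) system call.

```go
package main

import (
	"fmt"
	"io"
	"os"

	"golang.org/x/sys/unix"
)

// readDev pulls 16 bytes from a character device such as /dev/random.
func readDev(path string) {
	f, err := os.Open(path)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	buf := make([]byte, 16)
	if _, err := io.ReadFull(f, buf); err != nil {
		panic(err)
	}
	fmt.Printf("%-13s %x\n", path+":", buf)
}

func main() {
	// On Linux 5.18+ all three calls below draw from the same pool
	// with the same semantics once the kernel RNG is seeded.

	// getrandom(2) with flags == 0: the recommended interface.
	buf := make([]byte, 16)
	if _, err := unix.Getrandom(buf, 0); err != nil {
		panic(err)
	}
	fmt.Printf("getrandom(0): %x\n", buf)

	readDev("/dev/urandom") // historically "never blocks, even if unseeded"
	readDev("/dev/random")  // historically blocked on a dubious entropy count
}
```

Most Go programs should simply keep using crypto/rand, which already calls getrandom(2) on Linux; the point of the sketch is only that the underlying sources now behave identically.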

Phoronix adds:
One exciting change to also note is the getrandom() system call may be a hell of a lot faster with the new kernel. The getrandom() call for obtaining random bytes is yielding much faster performance with the latest code in development. Intel’s kernel test robot is seeing an 8450% improvement with the stress-ng getrandom() benchmark. Yes, an 8450% improvement.

Read more of this story at Slashdot.

Sleeping With the Light On May Be Harmful To You

“Exposure to even moderate ambient lighting during nighttime sleep, compared to sleeping in a dimly lit room, harms your cardiovascular function during sleep and increases your insulin resistance the following morning,” announced Northwestern Medicine, citing a new study recently published in the Proceedings of the National Academy of Sciences.

The Washington Post reports:
Researchers at Northwestern University had two groups of 10 young adults sleep in differently lit rooms. One group slept in rooms with dim light for two nights; the other slept one night in a room with dim light and the next in a room with moderate overhead light — about the equivalent of an overcast day. Participants wore heart monitors at night. In the morning, they did a variety of glucose tests.

Both groups got the same amount of sleep but their bodies experienced very different nights. Both groups responded well to insulin the first night, when they both slept in dim lighting. On the second night, however, the group sleeping in brighter lighting didn’t respond as well to insulin. The dim light sleepers’ insulin resistance scores fell about 4 percent on the second night, while the bright sleepers’ rose about 15 percent. Their heart rates were faster on the bright night, too.

“[J]ust a single night of exposure to moderate room lighting during sleep can impair glucose and cardiovascular regulation, which are risk factors for heart disease, diabetes and metabolic syndrome,” concludes senior study author Dr. Phyllis Zee. “It’s important for people to avoid or minimize the amount of light exposure during sleep.”
From Northwestern’s announcement:
There is already evidence that light exposure during daytime increases heart rate via activation of the sympathetic nervous system, which kicks your heart into high gear and heightens alertness to meet the challenges of the day. “Our results indicate that a similar effect is also present when exposure to light occurs during nighttime sleep,” Zee said….

An earlier study published in JAMA Internal Medicine looked at a large population of healthy people who had exposure to light during sleep. They were more overweight and obese, Zee said. “Now we are showing a mechanism that might be fundamental to explain why this happens. We show it’s affecting your ability to regulate glucose,” Zee said.

Read more of this story at Slashdot.

Researchers Discover a New (Intermediate and Tetragonal) Form of Ice

Researchers at the University of Nevada, Las Vegas were trying to understand how water might behave under the high pressures inside distant planets.
But along the way the team discovered a new form of ice, reports Phys.org, “redefining the properties of water at high pressures.”

Solid water, or ice, is like many other materials in that it can form different solid materials based on variable temperature and pressure conditions, like carbon forming diamond or graphite. However, water is exceptional in this aspect as there are at least 20 solid forms of ice known to us.

A team of scientists working in UNLV’s Nevada Extreme Conditions Lab pioneered a new method for measuring the properties of water under high pressure. The water sample was first squeezed between the tips of two opposite-facing diamonds — freezing into several jumbled ice crystals. The ice was then subjected to a laser-heating technique that temporarily melted it before it quickly re-formed into a powder-like collection of tiny crystals. By incrementally raising the pressure, and periodically blasting it with the laser beam, the team observed the water ice make the transition from a known cubic phase, Ice-VII, to the newly discovered intermediate, and tetragonal, phase, Ice-VIIt, before settling into another known phase, Ice-X….

While it’s unlikely we’ll find this new phase of ice anywhere on the surface of Earth, it is likely a common ingredient within the mantle of Earth as well as in large moons and water-rich planets outside of our solar system. The team’s findings were reported in the March 17 issue of the journal Physical Review B…. The work also recalibrates our understanding of the composition of exoplanets, UNLV physicist Ashkan Salamat added. Researchers hypothesize that the Ice-VIIt phase of ice could exist in abundance in the crust and upper mantle of expected water-rich planets outside of our solar system, meaning they could have conditions habitable for life.

Thanks to long-time Slashdot reader fahrbot-bot for sharing the story…

Read more of this story at Slashdot.

‘Biggest Change Ever’ to Go Brings Generics, Native Fuzzing, and a Performance Boost

“Supporting generics has been Go’s most often requested feature, and we’re proud to deliver the generic support that the majority of users need today,” the Go blog announced this week. *

It’s part of what Go’s development team is calling the “biggest change ever to the language”.

SiliconANGLE writes that “Right out of the gate, Go 1.18 is getting a CPU speed performance boost of up to 20% for Apple M1, ARM64 and PowerPC64 chips. This is all from an expansion of Go 1.17’s calling conventions for the application binary interface on these processor architectures.”

And Go 1.18 also introduces native support for fuzz testing — making Go the first major programming language to do so, writes ZDNet:

As Google explains, fuzz testing or ‘fuzzing’ is a means of testing the vulnerability of a piece of software by throwing arbitrary or invalid data at it to expose bugs and unknown errors. This adds an additional layer of security to Go’s code that will keep it protected as its functionality evolves — crucial as attacks on software continue to escalate both in frequency and complexity. “At Google we are committed to securing the online infrastructure and applications the world depends upon,” said Eric Brewer, VP infrastructure at Google….
While other languages support fuzzing, Go is the first major programming language to incorporate it into its core toolchain, meaning — unlike other languages — third-party support integrations aren’t required.
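Concretely, a native fuzz target is just a function with the Fuzz prefix in an ordinary _test.go file: you register a seed corpus, state a property, and let the engine mutate inputs. The sketch below follows the pattern in Go's own fuzzing tutorial; the Reverse function and the round-trip property are illustrative, not anything mandated by the toolchain.

```go
package reverse

import (
	"testing"
	"unicode/utf8"
)

// Reverse returns s with its runes in reverse order.
func Reverse(s string) string {
	runes := []rune(s)
	for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
		runes[i], runes[j] = runes[j], runes[i]
	}
	return string(runes)
}

// FuzzReverse is a Go 1.18 native fuzz target. Run it with:
//   go test -fuzz=FuzzReverse
func FuzzReverse(f *testing.F) {
	for _, seed := range []string{"hello", "", "héllo"} {
		f.Add(seed) // seed corpus
	}
	f.Fuzz(func(t *testing.T, s string) {
		doubled := Reverse(Reverse(s))
		if utf8.ValidString(s) && doubled != s {
			t.Errorf("Reverse(Reverse(%q)) = %q, want the original", s, doubled)
		}
	})
}
```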

Google is emphasizing Go’s security features — and its widespread adoption. ZDNet writes:

Google created Go in 2007, designing it specifically to help software engineers build secure, open-source enterprise applications for modern, multi-core computing systems. More than three-quarters of Cloud Native Computing Foundation projects, including Kubernetes and Istio, are written in Go, says Google. [Also Docker and etcd.] According to data from Stack Overflow, some 10% of developers are writing in Go worldwide, and there are signs that more recruiters are seeking out Go coders in their search for tech talent…. “Although we have a dedicated Go team at Google, we welcome a significant amount of contributions from our community. It’s a shared effort, and with their updates we’re helping our community achieve Go’s long-term vision.”
Or, as the Go blog says:

We want to thank every Go user who filed a bug, sent in a change, wrote a tutorial, or helped in any way to make Go 1.18 a reality. We couldn’t do it without you. Thank you.

Enjoy Go 1.18!

* Supporting generics “includes major — but fully backward-compatible — changes to the language,” explains the release notes. Although it adds a few cautionary notes:

These new language changes required a large amount of new code that has not had significant testing in production settings. That will only happen as more people write and use generic code. We believe that this feature is well implemented and high quality. However, unlike most aspects of Go, we can’t back up that belief with real world experience. Therefore, while we encourage the use of generics where it makes sense, please use appropriate caution when deploying generic code in production.

While we believe that the new language features are well designed and clearly specified, it is possible that we have made mistakes…. it is possible that there will be code using generics that will work with the 1.18 release but break in later releases. We do not plan or expect to make any such change. However, breaking 1.18 programs in future releases may become necessary for reasons that we cannot today foresee. We will minimize any such breakage as much as possible, but we can’t guarantee that the breakage will be zero.
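For anyone who has not yet tried 1.18, here is a minimal, self-contained example of the feature those cautionary notes refer to: type parameters in square brackets, constrained by an interface that may now contain a union of types.

```go
package main

import "fmt"

// Number is a type constraint: an interface used as a union of
// permitted types. The ~ means "any type whose underlying type is".
type Number interface {
	~int | ~int64 | ~float64
}

// Sum works over any slice whose element type satisfies Number.
func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))      // 6
	fmt.Println(Sum([]float64{1.5, 2.5})) // 4
}
```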

Read more of this story at Slashdot.

More Apple M1 Ultra Benchmarks Show It Doesn’t Beat the Best GPUs from Nvidia and AMD

Tom’s Guide tested a Mac Studio workstation equipped with an M1 Ultra with the Geekbench 5.4 CPU benchmarks “to get a sense of how effectively it handles single-core and multi-core workflows.”

“Since our M1 Ultra is the best you can buy (at a rough price of $6,199) it sports a 20-core CPU and a 64-core GPU, as well as 128GB of unified memory (RAM) and a 2TB SSD.”

Slashdot reader exomondo shares their results:
We ran the M1 Ultra through the Geekbench 5.4 CPU benchmarking test multiple times and after averaging the results, we found that the M1 Ultra does indeed outperform top-of-the-line Windows gaming PCs when it comes to multi-core CPU performance. Specifically, the M1 Ultra outperformed a recent Alienware Aurora R13 desktop we tested (w/ Intel Core i7-12700KF, GeForce RTX 3080, 32GB RAM), an Origin Millennium (2022) we just reviewed (Core i9-12900K CPU, RTX 3080 Ti GPU, 32GB RAM), and an even more powerful, RTX 3090-equipped HP Omen 45L we tested recently (Core i9-12900K, GeForce RTX 3090, 64GB RAM) in the Geekbench 5.4 multi-core CPU benchmark.

However, as you can see from the chart of results below, the M1 Ultra couldn’t match its Intel-powered competition in terms of CPU single-core performance. The Ultra-powered Studio also proved slower to transcode video than the aforementioned gaming PCs, taking nearly 4 minutes to transcode a 4K video down to 1080p using Handbrake. All of the gaming PCs I just mentioned completed the same task faster, over 30 seconds faster in the case of the Origin Millennium. Before we even get into the GPU performance tests it’s clear that while the M1 Ultra excels at multi-core workflows, it doesn’t trounce the competition across the board. When we ran our Mac Studio review unit through the Geekbench 5.4 OpenCL test (which benchmarks GPU performance by simulating common tasks like image processing), the Ultra earned an average score of 83,868. That’s quite good, but again it fails to outperform Nvidia GPUs in similarly-priced systems.

They also share some results from the OpenCL Benchmarks browser, which publicly displays scores from different GPUs that users have uploaded:
Apple’s various M1 chips are on the list as well, and while the M1 Ultra leads that pack it’s still quite a ways down the list, with an average score of 83,940. Incidentally, that means it ranks below much older GPUs like Nvidia’s GeForce RTX 2070 (85,639) and AMD’s Radeon VII (86,509). So here again we see that while the Ultra is fast, it can’t match the graphical performance of GPUs that are 2-3 years old at this point — at least, not in these synthetic benchmarks. These tests don’t always accurately reflect real-world CPU and GPU performance, which can be dramatically influenced by what programs you’re running and how they’re optimized to make use of your PC’s components.
Their conclusion?
When it comes to tasks like photo editing or video and music production, the M1 Ultra w/ 128GB of RAM blazes through workloads, and it does so while remaining whisper-quiet. It also makes the Mac Studio a decent gaming machine, as I was able to play less demanding games like Crusader Kings III, Pathfinder: Wrath of the Righteous and Total War: Warhammer II at reasonable (30+ fps) framerates. But that’s just not on par with the performance we expect from high-end GPUs like the Nvidia GeForce RTX 3090….
Of course, if you don’t care about games and are in the market for a new Mac with more power than just about anything Apple’s ever made, you want the Studio with M1 Ultra.

Read more of this story at Slashdot.

The Free Software Foundation’s ‘LibrePlanet’ Conference Happens Online This Weekend

LibrePlanet, the annual conference hosted by the Free Software Foundation, will be happening online this weekend. The event “provides an opportunity for community activists, domain experts, and people seeking solutions for themselves to come together in order to discuss current issues in technology and ethics,” according to its web page. This year’s LibrePlanet theme is “Living Liberation”.

And while you’re listening to the presentations, you can apparently also interact with the rest of the community:

Each LibrePlanet room has its own IRC channel on the Libera.Chat network… Want to interact with other conference-goers in a virtual space? Join us on LibreAdventure, where you’ll be able to video chat with fellow free software users, journey to the stars, and walk around a replica of the FSF office!

Our Minetest server is back by popular demand, and now running version 5.x of everyone’s favorite free software voxel sandbox game. You can install Minetest through your GNU/Linux distro’s package manager, and point your client to minetest.libreplanet.org with the default port 30000.

Sunday’s presentations include “Living in freedom with GNU Emacs” and “Hacking my brain: Free virtual reality implementations and their potential for therapeutic use.”

And Sunday will also include a talk from Seth Schoen, the first staff technologist at the Electronic Frontier Foundation (who helped develop the Let’s Encrypt certificate authority) titled “Reducing Internet address waste: The IPv4 unicast extensions project.”

View the complete schedule here.

Read more of this story at Slashdot.

GTA V is Back for a New Generation

Rockstar’s anarchic masterpiece has been freshened up for PlayStation 5 and Xbox Series X nine years after it was originally released. From a report: And so the boys are back in town. Michael, Trevor and Franklin, the sociopathic trio that lit up the gaming scene nine years ago, have been made over for the 2020s with this crisp new reworking of Grand Theft Auto V for PlayStation 5 and Xbox Series X. The game’s violent narrative of shifting loyalties and doomed machismo felt wild and edgy back in 2013, so how does it fare in the modern era? The good news is, the overhauled visuals definitely give the game new zest and freshness. You can play in either 4K at 30 frames-per-second or in a performance mode that lowers the resolution but bumps up the frame rate to 60, giving wonderful fluidity to car chases, swooping helicopter rides and mass shootouts. The DualSense controller features on PlayStation 5 are very good too: improved driving feedback via the analogue triggers makes the game’s cumbersome handling a little easier to, well, handle. It’s been quite a joy to rediscover this alternate-reality California; to see the sun drop behind the downtown skyscrapers, or to hit Senora as dawn splashes orange-yellow light across the burning desert.

What the vast upshift in resolution can’t hide is the fact that GTA V is a game originally designed for consoles that are now two generations out of date. The character models and facial details look positively archaic compared with, say, Horizon Forbidden West, and the building architecture too seems almost quaint in its stylised blockiness. Compare it with 2018’s Red Dead Redemption 2 and you can see just how far Rockstar has come in its building of intricate next-gen worlds. In many ways, however, the design of the world itself has not been bettered in the decade since it arrived. The size of San Andreas, the sheer variety of landscapes and the diversity of actions and activities is still incredible — Cyberpunk 2077 may look better, but it doesn’t let you play golf or tennis, or go on day trips on a bike, or set up incredibly complex car or helicopter stunts. Los Santos is a vast playground, a gangster Fortnite — a factor underlined by the massive community that still gathers in GTA Online (which is where we find this new version’s only totally new content — Hao’s Special Works, which lets you unlock faster cars and new tasks).

Read more of this story at Slashdot.

NYT Takes Down Third-Party Wordle Archive

The New York Times, which acquired Wordle in January, is putting an end to unofficial takes on the game. The latest casualty is Wordle Archive, a website that let users play through hundreds of previous daily five-letter Wordle puzzles. According to Ars Technica, the site “has been taken down at the request of Wordle owner The New York Times.” From the report: The archival site, which offered a backward-looking play feature that’s not available in the NYT’s official version of Wordle, had been up since early January. But it was taken down last week and replaced with a message saying, “Sadly, the New York Times has requested that the Wordle Archive be taken down.” A Twitter search shows dozens of daily Wordle Archive players who were willing to share their results on social media up through March 7. “The usage was unauthorized, and we were in touch with them,” a New York Times representative said in response to an Ars Technica comment request. “We don’t plan to comment beyond that.”

The Wordle Archive is still fully playable in its own archived form (as of March 5) at the Internet Archive, appropriately enough. Other sites that allow you to play archived Wordle puzzles are not hard to find, as are sites that let you play unlimited Wordle puzzles beyond the usual one-a-day limit. But some of those sites may be under threat, if the Times’ treatment of Wordle Archive is any indication.

Read more of this story at Slashdot.