New Biocomputing Method Uses Enzymes As Catalysts For DNA-Based Molecular Computing

Researchers at the University of Minnesota report via Phys.Org: Biocomputing is typically done either with live cells or with non-living, enzyme-free molecules. Live cells can feed themselves and can heal, but it can be difficult to redirect cells from their ordinary functions toward computation. Non-living molecules solve some of the problems of live cells, but have weak output signals and are difficult to fine-tune and regulate. In new research published in Nature Communications, a team of researchers at the University of Minnesota has developed a platform for a third method of biocomputing: Trumpet, or Transcriptional RNA Universal Multi-Purpose GatE PlaTform.

Trumpet uses biological enzymes as catalysts for DNA-based molecular computing. Researchers performed logic gate operations, similar to operations done by all computers, in test tubes using DNA molecules. A positive gate connection resulted in a fluorescent glow. The DNA creates a circuit, and a fluorescent RNA compound lights up when the circuit is completed, much like a lightbulb lighting up when a circuit board is tested.

The research team demonstrated that:

– The Trumpet platform has the simplicity of molecular biocomputing with added signal amplification and programmability.
– The platform reliably encodes a complete set of Boolean logic gates (NAND, NOT, NOR, AND, and OR), the building blocks of digital computation.
– The logic gates can be stacked to build more complex circuits (a software analogy of such stacking appears in the sketch below).
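
The gates themselves are implemented as transcription reactions in a test tube, but the logic is ordinary Boolean algebra. As a purely software analogy (a minimal sketch, not the Trumpet chemistry and not code from the paper), the snippet below shows why a stackable gate set is sufficient for more complex circuits: an XOR and a one-bit half-adder composed entirely from NAND.

    # Software analogy only: composing richer logic from a small, stackable gate set,
    # the same principle the Trumpet paper demonstrates chemically.

    def NAND(a: bool, b: bool) -> bool:
        return not (a and b)

    def NOT(a: bool) -> bool:
        return NAND(a, a)

    def AND(a: bool, b: bool) -> bool:
        return NOT(NAND(a, b))

    def OR(a: bool, b: bool) -> bool:
        return NAND(NOT(a), NOT(b))

    def XOR(a: bool, b: bool) -> bool:
        # Stacking: four NAND gates make an XOR.
        n = NAND(a, b)
        return NAND(NAND(a, n), NAND(b, n))

    def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
        # One-bit addition (sum, carry), built entirely from the gates above.
        return XOR(a, b), AND(a, b)

    if __name__ == "__main__":
        for a in (False, True):
            for b in (False, True):
                s, c = half_adder(a, b)
                print(f"a={a!s:<5} b={b!s:<5} -> sum={s!s:<5} carry={c}")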

The team also developed a web-based tool facilitating the design of sequences for the Trumpet platform. “Trumpet is a non-living molecular platform, so we don’t have most of the problems of live cell engineering,” said co-author Kate Adamala, assistant professor in the College of Biological Sciences. “We don’t have to overcome evolutionary limitations against forcing cells to do things they don’t want to do. This also gives Trumpet more stability and reliability, with our logic gates avoiding the leakage problems of live cell operations.”

“It could make a lot of long-term neural implants possible. The applications could range from strictly medical, like healing damaged nerve connections or controlling prosthetics, to more sci-fi applications like entertainment or learning and augmented memory,” added Adamala.

Read more of this story at Slashdot.

Ukraine Is Now Using Steam Decks To Control Machine Gun Turrets

Thanks to a crowdfunding campaign dating back to 2014, soldiers in Ukraine are now using Steam Decks to remotely operate a high-caliber machine gun turret. The weapon is called the “Sabre” and is unique to Ukraine. Motherboard reports: Ukrainian news outlet TPO Media recently reported on the deployment of a new model of the Sabre on its Facebook page. Photos and videos of the system show soldiers operating a Steam Deck connected to a large machine gun via a heavy piece of cable. According to the TPO Media post, the Sabre system allows soldiers to fight the enemy from a great distance and can handle a range of calibers, from light machine guns firing anti-tank rounds to an AK-47.

In the TPO footage, the Sabre is firing what appears to be a PKT belt-fed machine gun. The PKT is a heavy-barrelled machine gun that doesn’t have a stock and is typically mounted on vehicles like armored personnel carriers. It uses a solenoid trigger so it can be fired remotely; that solenoid is what the cable running out of the back of the gun connects to, feeding into the complex of metal and wires on the side of the turret.

The Sabre system wasn’t always controlled with a Steam Deck […]. The first instances of the weapon appeared in 2014. The U.S. and the rest of NATO are giving Ukraine a lot of money for defense now, but that wasn’t the case when Russia first invaded in 2014. To fill its funding gaps, Ukrainians ran a variety of crowdfunding campaigns. Over the years, Ukraine has used crowdfunding to pay for everything from drones to hospitals. One of the most popular websites is The People’s Project, and it’s there that the Sabre was born. The People’s Project launched the crowdfunding campaign for Sabre in 2015 and collected more than $12,000 for the project over the next two years. Its initial goal was to deploy 10 of these systems.

Read more of this story at Slashdot.

Spotify Tries To Win Indie Authors By Cutting Audiobook Fees

In an effort to appeal to indie authors, Spotify’s Findaway audiobook seller “will no longer take a 20 percent cut of royalties for titles sold on its DIY Voices platform — so long as the sales are made on Spotify,” reports The Verge. From the report: In a company blog post published on Monday, Findaway said that it would “pass on cost-saving efficiencies” from its integration with the streaming service. While it’s free for authors to upload their audiobooks onto Findaway’s Voices platform, the company normally uses an 80/20 pricing structure — where Findaway takes a 20 percent fee on all royalties earned. But that fee comes after sales platforms take their own 50 percent cut on the list price. So under the old revenue split, an author who sold a $10 audiobook would have to give $5 to Spotify and $1 to Findaway. But moving forward, that same author will no longer have to pay the $1 distribution fee to Findaway when a sale is made through Spotify.
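
As a quick check on the arithmetic above, here is a minimal sketch of the old and new splits; the 50 percent retailer cut, the 20 percent Findaway fee, and the $10 list price are the figures quoted in the report, while everything else is just illustrative.

    # Back-of-the-envelope sketch of the audiobook revenue split described above.
    LIST_PRICE = 10.00      # example list price from the report
    RETAILER_SHARE = 0.50   # the sales platform's (here, Spotify's) cut of the list price
    FINDAWAY_FEE = 0.20     # Findaway's fee on the author's remaining royalty

    def author_take(list_price: float, findaway_fee_waived: bool) -> float:
        royalty = list_price * (1 - RETAILER_SHARE)                    # $5.00 left after the retailer's cut
        fee = 0.0 if findaway_fee_waived else royalty * FINDAWAY_FEE   # $1.00 under the old split
        return royalty - fee

    print(author_take(LIST_PRICE, findaway_fee_waived=False))  # 4.0 -- old split
    print(author_take(LIST_PRICE, findaway_fee_waived=True))   # 5.0 -- new split, for sales made on Spotify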

The margins on audiobooks are exceptionally high, much to the chagrin of authors. For example, Audible takes 75 percent of retail sales (though it’ll only take 60 percent with an exclusivity contract). Many authors share royalties with their narrators and have to pay production fees — meaning they get an even smaller share of royalties. The move by Spotify and Findaway is likely a bid to draw more indie authors away from Audible, which is currently its biggest competitor. But Spotify’s audiobooks business — which it launched last fall — still has a long way to go. Unlike music or podcasts, most audiobooks on Spotify must be purchased individually, and sales are restricted to its web version. Even CEO Daniel Ek admitted that the current process of buying an audiobook through Spotify is “pretty horrible.” “We at Spotify are just at the beginning of our journey supporting independent authors — we have many plans for how to help authors expand their reach, maximize revenue, and ultimately build a strong audiobooks business,” said Spotify’s audiobooks communications chief, Laura Pezzini.

Read more of this story at Slashdot.

OpenAI Threatens Popular GitHub Project With Lawsuit Over API Use

A GitHub project called GPT4free has received a letter from OpenAI demanding that the repo be shut down within five days or face a lawsuit. Tom’s Hardware reports: Anyone can use ChatGPT for free, but if you want to use GPT-4, the latest language model, you have to either pay for ChatGPT Plus, pay for access to OpenAI’s API, or find another site that has incorporated GPT-4 into its own free chatbot. There are sites that use OpenAI such as Forefront and You.com, but what if you want to make your own bot and don’t want to pay for the API? A GitHub project called GPT4free allows you to get free access to the GPT-4 and GPT-3.5 models by funneling those queries through sites like You.com, Quora and CoCalc and giving you back the answers. The project is GitHub’s most popular new repo, getting 14,000 stars this week.
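
To make the “funneling” concrete, here is a deliberately generic sketch of the pattern described; it is not GPT4free’s actual code, and the endpoint, request shape, and response shape are all hypothetical. The idea is simply that the prompt goes to a third-party site that holds its own paid API access, and the answer is read back out of that site’s response.

    # Illustrative sketch only -- NOT GPT4free's code; the endpoint and payload
    # shapes below are hypothetical stand-ins for whatever a given site exposes.
    import requests

    HYPOTHETICAL_ENDPOINT = "https://chat-frontend.example/api/chat"  # placeholder URL

    def ask_via_third_party(prompt: str) -> str:
        # The third-party site, not the caller, holds the paid OpenAI API access.
        resp = requests.post(
            HYPOTHETICAL_ENDPOINT,
            json={"message": prompt},  # assumed request shape
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("answer", "")  # assumed response shape

    if __name__ == "__main__":
        print(ask_via_third_party("Summarize this article in one sentence."))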

Now, according to Xtekky, the European computer science student who runs the repo, OpenAI has sent a letter demanding that he take the whole thing down within five days or face a lawsuit. I interviewed Xtekky via Telegram, and he said he doesn’t think OpenAI should be targeting him since he isn’t connecting directly to the company’s API, but is instead getting data from other sites that are paying for their own API licenses. If the owners of those sites have a problem with his scripts querying them, they should approach him directly, he posited. […] Even if the original repo is taken down, there’s a great chance that the code — and this method of accessing GPT-4 and GPT-3.5 — will be published elsewhere by members of the community. Even if GPT4free had never existed, anyone could find ways to use these sites’ APIs as long as they remain unsecured. “Users are sharing and hosting this project everywhere,” he said. “Deletion of my repo will be insignificant.”

Read more of this story at Slashdot.

Russian Forces Suffer Radiation Sickness After Digging Trenches and Fishing in Chernobyl

The Independent reports:

Russian troops who dug trenches in Chernobyl forest during their occupation of the area have been struck down with radiation sickness, authorities have confirmed.

Ukrainians living near the nuclear power station that exploded 37 years ago, and choked the surrounding area in radioactive contaminants, warned the Russians when they arrived against setting up camp in the forest. But the occupiers who, as one resident put it to The Times, “understood the risks” but were “just thick”, installed themselves in the forest, reportedly carved out trenches, fished in the reactor’s cooling channel — flush with catfish — and shot animals, leaving them dead on the roads…

In the years after the incident, teams of men were sent to dig up the contaminated topsoil and bury it below ground in the Red Forest — named after the colour the trees turned as a result of the catastrophe… Vladimir Putin’s men reportedly set up camp within a six-mile radius of reactor No 4, and dug defensive positions into the poisonous ground below the surface.

On 1 April, as Ukrainian troops mounted counterattacks from Kyiv, the last of the occupiers withdrew, leaving behind piles of rubbish. Russian soldiers stationed in the forest have since been struck down with radiation sickness, diplomats have confirmed. Symptoms can start within an hour of exposure and can last for several months, often resulting in death.

Read more of this story at Slashdot.

Google Has More Powerful AI, Says Engineer Fired Over Sentience Claims

Remember that Google engineer/AI ethicist who was fired last summer after claiming their LaMDA LLM had become sentient?

In a new interview with Futurism, Blake Lemoine now says the “best way forward” for humankind’s future relationship with AI is “understanding that we are dealing with intelligent artifacts. There’s a chance that — and I believe it is the case — that they have feelings and they can suffer and they can experience joy, and humans should at least keep that in mind when interacting with them.” (Although earlier in the interview, Lemoine concedes “Is there a chance that people, myself included, are projecting properties onto these systems that they don’t have? Yes. But it’s not the same kind of thing as someone who’s talking to their doll.”)

But he also thinks there’s a lot of research happening inside corporations, adding that “The only thing that has changed from two years ago to now is that the fast movement is visible to the public.” For example, Lemoine says Google almost released its AI-powered Bard chatbot last fall, but “in part because of some of the safety concerns I raised, they delayed it… I don’t think they’re being pushed around by OpenAI. I think that’s just a media narrative. I think Google is going about doing things in what they believe is a safe and responsible manner, and OpenAI just happened to release something.”

“[Google] still has far more advanced technology that they haven’t made publicly available yet. Something that does more or less what Bard does could have been released over two years ago. They’ve had that technology for over two years. What they’ve spent the intervening two years doing is working on the safety of it — making sure that it doesn’t make things up too often, making sure that it doesn’t have racial or gender biases, or political biases, things like that. That’s what they spent those two years doing…

“And in those two years, it wasn’t like they weren’t inventing other things. There are plenty of other systems that give Google’s AI more capabilities, more features, make it smarter. The most sophisticated system I ever got to play with was heavily multimodal — not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it. That’s the one that I was like, ‘you know this thing, this thing’s awake.’ And they haven’t let the public play with that one yet. But Bard is kind of a simplified version of that, so it still has a lot of the kind of liveliness of that model…

“[W]hat it comes down to is that we aren’t spending enough time on transparency or model understandability. I’m of the opinion that we could be using the scientific investigative tools that psychology has come up with to understand human cognition, both to understand existing AI systems and to develop ones that are more easily controllable and understandable.”

So how will AI and humans coexist? “Over the past year, I’ve been leaning more and more towards we’re not ready for this, as people,” Lemoine says toward the end of the interview. “We have not yet sufficiently answered questions about human rights — throwing nonhuman entities into the mix needlessly complicates things at this point in history.”

Read more of this story at Slashdot.

Red Hat’s 30th Anniversary: How a Microsoft Competitor Rose from an Apartment-Based Startup

For Red Hat’s 30th anniversary, North Carolina’s News & Observer newspaper ran a special four-part series of articles.
In the first article Red Hat co-founder Bob Young remembers Red Hat’s first big breakthrough: winning InfoWorld’s “OS of the Year” award in 1998 — at a time when Microsoft’s Windows controlled 85% of the market.
“How is that possible,” Young said, “that one of the world’s biggest technology companies, on this strategically critical product, loses the product of the year to a company with 50 employees in the tobacco fields of North Carolina?” The answer, he would tell the many reporters who suddenly wanted to learn about his upstart company, strikes at “the beauty” of open-source software.
“Our engineering team is an order of magnitude bigger than Microsoft’s engineering team on Windows, and I don’t really care how many people they have,” Young would say. “Like they may have thousands of the smartest operating system engineers that they could scour the planet for, and we had 10,000 engineers by comparison….”

Young was a 40-year-old Canadian computer equipment salesperson with a software catalog when he noticed what Marc Ewing was doing. [Ewing was a recent college graduate bored with his two-month job at IBM, selling customized Linux as a side hustle.] It’s pretty primitive, but it’s going in the right direction, Young thought. He began reselling Ewing’s Red Hat product. Eventually, he called Ewing, and the two met at a tech conference in New York City. “I needed a product, and Marc needed some marketing help,” said Young, who was living in Connecticut at the time. “So we put our two little businesses together.”

Red Hat incorporated in March 1993, with the earliest employees operating the nascent business out of Ewing’s Durham apartment. Eventually, the landlord discovered what they were doing and kicked them out.
The four articles capture the highlights. (“A visual effects group used its Linux 4.1 to design parts of the 1997 film Titanic.”) And it doesn’t leave out Red Hat’s skirmishes with Microsoft. (“Microsoft was owned by the richest person in the world. Red Hat engineers were still linking servers together with extension cords.”) “We were changing the industry and a lot of companies were mad at us,” says Michael Ferris, Red Hat’s VP of corporate development/strategy. Soon there were corporate partnerships with Netscape, Intel, Hewlett-Packard, Compaq, Dell, and IBM — and when Red Hat finally went public in 1999, its stock saw the eighth-largest first-day gain in Wall Street history, with the company’s value climbing within days to over $7 billion and “making overnight millionaires of its earliest employees.”

But there are also inspiring details, like the quote painted on the wall of Red Hat’s headquarters in Durham: “Every revolution was first a thought in one man’s mind; and when the same thought occurs to another man, it is the key to that era…” It’s fun to see the story told by a local newspaper, with subheadings like “It started with a student from Finland” and “Red Hat takes on the Microsoft Goliath.”

Something I’d never thought of. 2001’s 9/11 terrorist attack on the World Trade Center “destroyed the principal data centers of many Wall Street investment banks, which were housed in the twin towers. With their computers wiped out, financial institutions had to choose whether to rebuild with standard proprietary software or the emergent open source. Many picked the latter.” And by the mid-2000s, “Red Hat was the world’s largest provider of Linux…” according to part two of the series. “Soon, Red Hat was servicing more than 90% of Fortune 500 companies.”
By then, even the most vehement former critics were amenable to Red Hat’s kind of software. Microsoft had begun to integrate open source into its core operations. “Microsoft was on the wrong side of history when open source exploded at the beginning of the century, and I can say that about me personally,” Microsoft President Brad Smith later said.
In the 2010s, “open source has won” became a popular tagline among programmers. After years of fighting for legitimacy, former Red Hat executives said victory felt good. “There was never gloating,” said Michael Tiemann, one of those executives.

“But there was always pride.”
In 2017 Red Hat’s CEO answered questions from Slashdot’s readers.

Read more of this story at Slashdot.

Can OpenAI Trademark ‘GPT’?

“ThreatGPT, MedicalGPT, DateGPT and DirtyGPT are a mere sampling of the many outfits to apply for trademarks with the United States Patent and Trademark Office in recent months,” notes TechCrunch, exploring the issue of whether OpenAI can actually trademark the phrase ‘GPT’…

Little wonder that after applying in late December for a trademark for “GPT,” which stands for “Generative Pre-trained Transformer,” OpenAI last month petitioned the USPTO to speed up the process, citing the “myriad infringements and counterfeit apps” beginning to spring into existence. Unfortunately for OpenAI, its petition was dismissed last week… Given the rest of the queue in which OpenAI finds itself, that means a decision could take up to five more months, says Jefferson Scher, a partner in the intellectual property group of Carr & Ferrell and chair of the firm’s trademark practice group. Even then, the outcome isn’t assured, Scher explains… [H]elpful, says Scher, is the fact that OpenAI has been using “GPT” for years, having released its original Generative Pre-trained Transformer model, or GPT-1, back in October 2018…

Even if a USPTO examiner has no problem with OpenAI’s application, it will be moved afterward to a so-called opposition period, where other market participants can argue why the agency should deny the “GPT” trademark. Scher describes what would follow this way: In the case of OpenAI, an opposer would challenge OpenAI’s position that “GPT” is proprietary and that the public perceives it as such instead of perceiving the acronym to pertain to generative AI more broadly…

It all raises the question of why the company didn’t move to protect “GPT” sooner. Here, Scher speculates that the company was “probably caught off guard” by its own success… Another wrinkle here is that OpenAI may soon be so famous that its renown becomes a dominant factor, says Scher. While one doesn’t need to be famous to secure a trademark, once an outfit is widely enough recognized, it receives protection that extends far beyond its sphere. Rolex is too famous a trademark to be used on anything else, for instance.

Thanks to Slashdot reader rolodexter for sharing the article.

Read more of this story at Slashdot.