China’s AI Industry Barely Slowed By US Chip Export Rules

Export controls imposed by the U.S. on microchips, aimed at hindering China’s technological advancement, have had minimal effect on the country’s tech sector. While the restrictions have forced Nvidia to slow down the chip variants it sells into the Chinese market, they have not halted China’s progress in areas like AI: the reduced performance is still an improvement for Chinese firms, and researchers are finding ways to work around the limitations. Reuters reports: Nvidia has created variants of its chips for the Chinese market that are slowed down to meet U.S. rules. Industry experts told Reuters the newest one – the Nvidia H800, announced in March – will likely take 10% to 30% longer to carry out some AI tasks and could double some costs compared with Nvidia’s fastest U.S. chips. Even the slowed Nvidia chips represent an improvement for Chinese firms. Tencent Holdings, one of China’s largest tech companies, in April estimated that systems using Nvidia’s H800 will cut the time it takes to train its largest AI system by more than half, from 11 days to four days. “The AI companies that we talk to seem to see the handicap as relatively small and manageable,” said Charlie Chai, a Shanghai-based analyst with 86Research.

Part of the U.S. strategy in setting the rules was to avoid such a shock that the Chinese would ditch U.S. chips altogether and redouble their own chip-development efforts. “They had to draw the line somewhere, and wherever they drew it, they were going to run into the challenge of how to not be immediately disruptive, but how to also over time degrade China’s capability,” said one chip industry executive who requested anonymity to talk about private discussions with regulators. The export restrictions have two parts. The first puts a ceiling on a chip’s ability to calculate extremely precise numbers, a measure designed to limit supercomputers that can be used in military research. Chip industry sources said that was an effective action. But calculating extremely precise numbers is less relevant in AI work like large language models where the amount of data the chip can chew through is more important. […] The second U.S. limit is on chip-to-chip transfer speeds, which does affect AI. The models behind technologies such as ChatGPT are too large to fit onto a single chip. Instead, they must be spread over many chips – often thousands at a time — which all need to communicate with one another.

Nvidia has not disclosed the China-only H800 chip’s performance details, but a specification sheet seen by Reuters shows a chip-to-chip speed of 400 gigabytes per second, less than half the peak speed of 900 gigabytes per second for Nvidia’s flagship H100 chip available outside China. Some in the AI industry believe that is still plenty of speed. Naveen Rao, chief executive of a startup called MosaicML that specializes in helping AI models to run better on limited hardware, estimated a 10-30% system slowdown. “There are ways to get around all this algorithmically,” he said. “I don’t see this being a boundary for a very long time — like 10 years.” Moreover, AI researchers are trying to slim down the massive systems they have built to cut the cost of training products similar to ChatGPT and other processes. Those will require fewer chips, reducing chip-to-chip communications and lessening the impact of the U.S. speed limits.
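Rao’s 10-30% figure is easy to reproduce with a toy model: treat each training step as a fixed amount of compute time plus communication time that scales inversely with chip-to-chip bandwidth. A minimal sketch, assuming an illustrative 80/20 compute/communication split (that split is an assumption for the example, not a measured figure for any real workload):

```python
# Toy back-of-envelope model of how a chip-to-chip bandwidth cap
# affects total training-step time. All numbers are illustrative
# assumptions, not measured figures for the H100/H800.

def step_time(compute_s, comm_gb, bandwidth_gbps):
    """Total step time: on-chip compute plus communication (GB / GB-per-second)."""
    return compute_s + comm_gb / bandwidth_gbps

# Assume a step that is 80% compute, 20% communication on the H100.
compute_s = 0.80          # seconds of on-chip compute per step
comm_gb = 0.20 * 900      # GB exchanged, sized so comm takes 0.2 s at 900 GB/s

h100 = step_time(compute_s, comm_gb, 900)   # 0.8 + 0.20 = 1.00 s per step
h800 = step_time(compute_s, comm_gb, 400)   # 0.8 + 0.45 = 1.25 s per step

slowdown = h800 / h100 - 1
print(f"H800 step is {slowdown:.0%} slower under these assumptions")
```

Under these assumptions, halving the interconnect speed slows the full step by 25%, inside the 10% to 30% range the experts quoted; the more compute-bound the workload, the smaller the penalty.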

Read more of this story at Slashdot.

New Biocomputing Method Uses Enzymes As Catalysts For DNA-Based Molecular Computing

Researchers at the University of Minnesota report via Phys.Org: Biocomputing is typically done either with live cells or with non-living, enzyme-free molecules. Live cells can feed themselves and can heal, but it can be difficult to redirect cells from their ordinary functions toward computation. Non-living molecules solve some of the problems of live cells, but have weak output signals and are difficult to fine-tune and regulate. In new research published in Nature Communications, a team of researchers at the University of Minnesota has developed a platform for a third method of biocomputing: Trumpet, or Transcriptional RNA Universal Multi-Purpose GatE PlaTform.

Trumpet uses biological enzymes as catalysts for DNA-based molecular computing. Researchers performed logic gate operations, similar to the operations done by all computers, in test tubes using DNA molecules. A positive gate connection resulted in a fluorescent glow: the DNA creates a circuit, and a fluorescent RNA compound lights up when the circuit is completed, just like a lightbulb lights up when a circuit board is tested.

The research team demonstrated that:

– The Trumpet platform has the simplicity of molecular biocomputing with added signal amplification and programmability.
– The platform is reliable for encoding all of the universal Boolean logic gates (NAND, NOT, NOR, AND, and OR), the fundamental building blocks of digital computation.
– The logic gates can be stacked to build more complex circuits.
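The gate-stacking idea mirrors ordinary digital logic. As a software analogy only (the real platform computes with DNA strands and enzymes, not code), here is how one universal gate, NAND, can be stacked to build a gate the list doesn’t mention, XOR:

```python
# Software analogy for stacking logic gates: build XOR out of NAND,
# the same way Trumpet composes molecular gates into larger circuits.
# (Illustrative only; the real platform computes with DNA and enzymes.)

def nand(a, b):
    """NAND is universal: any Boolean function can be built from it alone."""
    return not (a and b)

def xor(a, b):
    """XOR built from four stacked NAND gates."""
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

# Print the full truth table of the stacked circuit.
for a in (False, True):
    for b in (False, True):
        print(a, b, xor(a, b))
```

Each intermediate output feeds the next gate’s input, which is exactly the composition property the team demonstrated for its molecular gates.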

The team also developed a web-based tool facilitating the design of sequences for the Trumpet platform. “Trumpet is a non-living molecular platform, so we don’t have most of the problems of live cell engineering,” said co-author Kate Adamala, assistant professor in the College of Biological Sciences. “We don’t have to overcome evolutionary limitations against forcing cells to do things they don’t want to do. This also gives Trumpet more stability and reliability, with our logic gates avoiding the leakage problems of live cell operations.”

“It could make a lot of long-term neural implants possible. The applications could range from strictly medical, like healing damaged nerve connections or controlling prosthetics, to more sci-fi applications like entertainment or learning and augmented memory,” added Adamala.

Read more of this story at Slashdot.

Ukraine Is Now Using Steam Decks To Control Machine Gun Turrets

Thanks to a crowdfunding campaign dating back to 2014, soldiers in Ukraine are now using Steam Decks to remotely operate a high-caliber machine gun turret. The weapon is called the “Sabre” and is unique to Ukraine. Motherboard reports: Ukrainian news outlet TPO Media recently reported on the deployment of a new model of the Sabre on its Facebook page. Photos and videos of the system show soldiers operating a Steam Deck connected to a large machine gun via a heavy piece of cable. According to the TPO Media post, the Sabre system allows soldiers to fight the enemy from a great distance and can handle a range of calibers, from light machine guns firing anti-tank rounds to an AK-47.

In the TPO footage, the Sabre is firing what appears to be a PKT belt-fed machine gun. The PKT is a heavy-barrelled machine gun that doesn’t have a stock and is typically mounted on vehicles like armored personnel carriers. It uses a solenoid trigger so it can be fired remotely; the solenoid is connected to the cable running out of the back of the gun and into the complex of metal and wires on the side of the turret.

The Sabre system wasn’t always controlled with a Steam Deck […]. The first instances of the weapon appeared in 2014. The U.S. and the rest of NATO are giving Ukraine a lot of money for defense now, but that wasn’t the case when Russia first invaded in 2014. To fill its funding gaps, Ukrainians ran a variety of crowdfunding campaigns. Over the years, Ukraine has used crowdfunding to pay for everything from drones to hospitals. One of the most popular websites is The People’s Project, and it’s there that the Sabre was born. The People’s Project launched the crowdfunding campaign for Sabre in 2015 and collected more than $12,000 for the project over the next two years. Its initial goal was to deploy 10 of these systems.

Read more of this story at Slashdot.

Spotify Tries To Win Indie Authors By Cutting Audiobook Fees

In an effort to appeal to indie authors, Spotify’s Findaway audiobook seller “will no longer take a 20 percent cut of royalties for titles sold on its DIY Voices platform — so long as the sales are made on Spotify,” reports The Verge. From the report: In a company blog post published on Monday, Findaway said that it would “pass on cost-saving efficiencies” from its integration with the streaming service. While it’s free for authors to upload their audiobooks onto Findaway’s Voices platform, the company normally uses an 80/20 pricing structure — where Findaway takes a 20 percent fee on all royalties earned. But that fee comes after sales platforms take their own 50 percent cut on the list price. So under the old revenue split, an author who sold a $10 audiobook would have to give $5 to Spotify and $1 to Findaway. But moving forward, that same author will no longer have to pay the $1 distribution fee to Findaway when a sale is made through Spotify.
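The arithmetic above is easy to verify. A toy calculation (not Findaway’s actual billing logic; the function and parameter names are invented for illustration):

```python
# Toy model of the revenue split described above for a $10 audiobook.
# Old terms: the sales platform keeps 50% of list price, then Findaway
# takes a 20% fee on the remaining royalties. New terms: that fee is
# waived when the sale happens on Spotify.

def author_take(list_price, fee_waived=False):
    royalties = list_price * 0.50             # what's left after the platform's 50% cut
    fee = 0.0 if fee_waived else 0.20 * royalties
    return royalties - fee

old = author_take(10.00)                   # platform keeps $5, Findaway $1 -> author gets $4
new = author_take(10.00, fee_waived=True)  # Findaway's $1 fee is waived   -> author gets $5
print(old, new)
```

The change moves the author from $4 to $5 on a $10 Spotify sale, a 25% bump in take-home royalties, while sales through other platforms still pay the 20% fee.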

The margins on audiobooks are exceptionally high, much to the chagrin of the authors. For example, Audible takes 75 percent of retail sales (though it’ll only take 60 percent with an exclusivity contract). Many authors share royalties with their narrators and have to pay production fees — meaning they get an even smaller share of royalties. The move by Spotify and Findaway is likely a bid to draw more indie authors from Audible, which is currently its biggest competitor. But Spotify’s audiobooks business — which it launched last fall — still has a long way to go. Unlike music or podcasts, most audiobooks on Spotify must be purchased individually, and sales are restricted to its web version. Even CEO Daniel Ek admitted that the current process of buying an audiobook through Spotify is “pretty horrible.” “We at Spotify are just at the beginning of our journey supporting independent authors — we have many plans for how to help authors expand their reach, maximize revenue, and ultimately build a strong audiobooks business,” said Audiobook’s communications chief, Laura Pezzini.

Read more of this story at Slashdot.

OpenAI Threatens Popular GitHub Project With Lawsuit Over API Use

A GitHub project called GPT4free has received a letter from OpenAI demanding that the repo be shut down within five days or face a lawsuit. Tom’s Hardware reports: Anyone can use ChatGPT for free, but if you want to use GPT4, the latest language model, you have to either pay for ChatGPT Plus, pay for access to OpenAI’s API, or find another site that has incorporated GPT4 into its own free chatbot. There are sites that use OpenAI, such as Forefront and You.com, but what if you want to make your own bot and don’t want to pay for the API? A GitHub project called GPT4free allows you to get free access to the GPT4 and GPT3.5 models by funneling your queries through sites like You.com, Quora and CoCalc and giving you back the answers. The project is GitHub’s most popular new repo, getting 14,000 stars this week.

Now, according to Xtekky, the European computer science student who runs the repo, OpenAI has sent a letter demanding that he take the whole thing down within five days or face a lawsuit. I interviewed Xtekky via Telegram, and he said he doesn’t think OpenAI should be targeting him since he isn’t connecting directly to the company’s API, but is instead getting data from other sites that are paying for their own API licenses. If the owners of those sites have a problem with his scripts querying them, they should approach him directly, he posited. […] Even if the original repo is taken down, there’s a good chance that the code — and this method of accessing GPT4 and GPT3.5 — will be published elsewhere by members of the community. Even if GPT4Free had never existed, anyone could find ways to use these sites’ APIs as long as they remain unsecured. “Users are sharing and hosting this project everywhere,” he said. “Deletion of my repo will be insignificant.”

Read more of this story at Slashdot.

Russian Forces Suffer Radiation Sickness After Digging Trenches and Fishing in Chernobyl

The Independent reports:

Russian troops who dug trenches in Chernobyl forest during their occupation of the area have been struck down with radiation sickness, authorities have confirmed.

Ukrainians living near the nuclear power station that exploded 37 years ago, and choked the surrounding area in radioactive contaminants, warned the Russians when they arrived against setting up camp in the forest. But the occupiers who, as one resident put it to The Times, “understood the risks” but were “just thick”, installed themselves in the forest, reportedly carved out trenches, fished in the reactor’s cooling channel — flush with catfish — and shot animals, leaving them dead on the roads…

In the years after the incident, teams of men were sent to dig up the contaminated topsoil and bury it below ground in the Red Forest — named after the colour the trees turned as a result of the catastrophe… Vladimir Putin’s men reportedly set up camp within a six-mile radius of reactor No 4, and dug defensive positions into the poisonous ground below the surface.

On 1 April, as Ukrainian troops mounted counterattacks from Kyiv, the last of the occupiers withdrew, leaving behind piles of rubbish. Russian soldiers stationed in the forest have since been struck down with radiation sickness, diplomats have confirmed. Symptoms can start within an hour of exposure and can last for several months, often resulting in death.

Read more of this story at Slashdot.

Google Has More Powerful AI, Says Engineer Fired Over Sentience Claims

Remember that Google engineer/AI ethicist who was fired last summer after claiming their LaMDA LLM had become sentient?

In a new interview with Futurism, Blake Lemoine now says the “best way forward” for humankind’s future relationship with AI is “understanding that we are dealing with intelligent artifacts. There’s a chance that — and I believe it is the case — that they have feelings and they can suffer and they can experience joy, and humans should at least keep that in mind when interacting with them.” (Although earlier in the interview, Lemoine concedes “Is there a chance that people, myself included, are projecting properties onto these systems that they don’t have? Yes. But it’s not the same kind of thing as someone who’s talking to their doll.”)

But he also thinks there’s a lot of research happening inside corporations, adding that “The only thing that has changed from two years ago to now is that the fast movement is visible to the public.” For example, Lemoine says Google almost released its AI-powered Bard chatbot last fall, but “in part because of some of the safety concerns I raised, they delayed it… I don’t think they’re being pushed around by OpenAI. I think that’s just a media narrative. I think Google is going about doing things in what they believe is a safe and responsible manner, and OpenAI just happened to release something.”

“[Google] still has far more advanced technology that they haven’t made publicly available yet. Something that does more or less what Bard does could have been released over two years ago. They’ve had that technology for over two years. What they’ve spent the intervening two years doing is working on the safety of it — making sure that it doesn’t make things up too often, making sure that it doesn’t have racial or gender biases, or political biases, things like that. That’s what they spent those two years doing…

“And in those two years, it wasn’t like they weren’t inventing other things. There are plenty of other systems that give Google’s AI more capabilities, more features, make it smarter. The most sophisticated system I ever got to play with was heavily multimodal — not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it. That’s the one that I was like, ‘you know this thing, this thing’s awake.’ And they haven’t let the public play with that one yet. But Bard is kind of a simplified version of that, so it still has a lot of the kind of liveliness of that model…

“[W]hat it comes down to is that we aren’t spending enough time on transparency or model understandability. I’m of the opinion that we could be using the scientific investigative tools that psychology has come up with to understand human cognition, both to understand existing AI systems and to develop ones that are more easily controllable and understandable.”

So how will AI and humans coexist? “Over the past year, I’ve been leaning more and more towards we’re not ready for this, as people,” Lemoine says toward the end of the interview. “We have not yet sufficiently answered questions about human rights — throwing nonhuman entities into the mix needlessly complicates things at this point in history.”

Read more of this story at Slashdot.