Physicists Say They’ve Built an Atom Laser That Can Run ‘Forever’

A new breakthrough has allowed physicists to create a beam of atoms that behaves the same way as a laser, and that can theoretically stay on “forever.” ScienceAlert reports: At the root of the atom laser is a state of matter called a Bose-Einstein condensate, or BEC. A BEC is created by cooling a cloud of bosons to just a fraction above absolute zero. At such low temperatures, the atoms sink to their lowest possible energy state without stopping completely. At these low energies, the particles’ quantum wave functions spread out and begin to overlap, so that the atoms merge into a high-density cloud that behaves like one ‘super atom’ or matter wave. However, BECs are something of a paradox. They’re very fragile; even light can destroy a BEC. Given that the atoms in a BEC are cooled using optical lasers, this usually means that a BEC’s existence is fleeting.
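
For a sense of the temperatures involved, here is the standard textbook estimate for an ideal Bose gas (a general result, not a figure from the new paper): condensation sets in below a critical temperature

```latex
% Ideal Bose gas transition temperature (textbook estimate):
%   n = number density of the atoms, m = atomic mass,
%   k_B = Boltzmann's constant, \zeta(3/2) \approx 2.612
T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}
```

For the dilute atomic clouds used in such experiments this lands roughly in the nanokelvin-to-microkelvin range, which is why the optical cooling lasers themselves end up limiting how long the condensate survives.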

The atom lasers scientists have managed to achieve to date have been of the pulsed, rather than continuous, variety: they fire off just one pulse before a new BEC needs to be generated. In order to create a continuous BEC, a team of researchers at the University of Amsterdam in the Netherlands realized something needed to change. “In previous experiments, the gradual cooling of atoms was all done in one place. In our setup, we decided to spread the cooling steps not over time, but in space: we make the atoms move while they progress through consecutive cooling steps,” explained physicist Florian Schreck. “In the end, ultracold atoms arrive at the heart of the experiment, where they can be used to form coherent matter waves in a BEC. But while these atoms are being used, new atoms are already on their way to replenish the BEC. In this way, we can keep the process going — essentially forever.” The research has been published in the journal Nature.

Read more of this story at Slashdot.

DRAM Prices To Drop 3-8% Due To Ukraine War, Inflation

Taiwanese research firm TrendForce said Monday that DRAM pricing for commercial buyers is forecast to drop around three to eight percent across major DRAM market segments in the third quarter compared to the previous three months. Even prices for DDR5 modules in the PC market could drop as much as five percent from July to September. The Register reports: This could result in DRAM buyers, such as system vendors and distributors, reducing prices for end users if they hope to stimulate demand in markets like PCs and smartphones where sales have waned. We suppose they could try to profit on the decreased memory prices, but with many people tightening their budgets, we hope this won’t be the case. The culprit for the DRAM price drop is one that we’ve been hearing a great deal about in the past few months: weaker demand for consumer electronics, including PCs and smartphones, as a result of high inflation and Russia’s ongoing invasion of Ukraine, according to TrendForce.

The weaker consumer demand means DRAM inventories are building up at system vendors and distributors, which means they don’t need to buy as much in the near future. This, in turn, is why memory prices are dropping, the research firm said. On the PC side, DDR4 memory pricing is expected to drop three to eight percent in the third quarter of 2022 after only seeing a zero to five percent decline in the second quarter. DDR5 pricing, on the other hand, is set to drop by only zero to five percent in Q3 after seeing a three to eight percent decline in the previous quarter. For certain DRAM products, prices could see a steeper decline of more than eight percent, according to TrendForce, though the firm didn’t say which products this would include. TrendForce said PC makers are focused on getting rid of their existing DRAM inventories, and a continuously “sluggish” market means they’ll be reluctant to buy much more memory.
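
To make the quarter-over-quarter figures concrete, here is a trivial sketch of what the forecast band implies for a module price; the $3.00 starting price is a made-up illustration, not a figure from TrendForce:

```swift
import Foundation

// Illustrative only: the $3.00 starting price is hypothetical; the
// 3-8 percent band is TrendForce's Q3 2022 DDR4 forecast.
let q2Price = 3.00                 // hypothetical Q2 contract price, USD
let forecastDecline = 0.03...0.08  // TrendForce's forecast decline range

let q3High = q2Price * (1 - forecastDecline.lowerBound) // mildest case: -3%
let q3Low  = q2Price * (1 - forecastDecline.upperBound) // steepest case: -8%
print(String(format: "Implied Q3 band: $%.2f to $%.2f", q3Low, q3High))
// Implied Q3 band: $2.76 to $2.91
```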

Read more of this story at Slashdot.

Rutgers Scientist Develops Antimicrobial, Plant-Based Food Wrap Designed To Replace Plastic

Aiming to produce environmentally friendly alternatives to plastic food wrap and containers, a Rutgers scientist has developed a biodegradable, plant-based coating that can be sprayed on foods, guarding against pathogenic and spoilage microorganisms and transportation damage. From a report: The team’s article, published in the science journal Nature Food, describes the new kind of packaging technology using the polysaccharide/biopolymer-based fibers. Like the webs cast by the Marvel comic book character Spider-Man, the stringy material can be spun from a heating device that resembles a hair dryer and “shrink-wrapped” over foods of various shapes and sizes, such as an avocado or a sirloin steak. The resulting material that encases food products is sturdy enough to protect against bruising and contains antimicrobial agents to fight spoilage and pathogenic microorganisms such as E. coli and listeria.

The research paper includes a description of the technology called focused rotary jet spinning, a process by which the biopolymer is produced, and quantitative assessments showing the coating extended the shelf life of avocados by 50 percent. The coating can be rinsed off with water and degrades in soil within three days, according to the study. […] The paper describes how the new fibers encapsulating the food are laced with naturally occurring antimicrobial ingredients — thyme oil, citric acid and nisin. Researchers on the Demokritou team can program such smart materials to act as sensors, activating and destroying bacterial strains to ensure food will arrive untainted. This will address growing concern over food-borne illnesses as well as lower the incidence of food spoilage […].

Read more of this story at Slashdot.

Apple Will Now Allow Developers To Transfer Ownership of Apps That Use iCloud

“The most impactful change to come out of WWDC had nothing to do with APIs, a new framework or any hardware announcement,” writes Jordan Morgan via Daring Fireball. “Instead, it was a change I’ve been clamoring for the last several years — and it’s one that’s incredibly indie friendly. As you’ve no doubt heard by now, I’m of course talking about iCloud enabled apps now allowing app transfers.” 9to5Mac explains how it works: According to Apple, you could already transfer an app when you’ve sold it to another developer or want to move it to another App Store Connect account or organization. You can also transfer the ownership of an app to another developer without removing it from the App Store. The company said: “The app retains its reviews and ratings during and after the transfer, and users continue to have access to future updates. Additionally, when an app is transferred, it maintains its Bundle ID — it’s not possible to update the Bundle ID after a build has been uploaded for the app.”

The news here is that it’s easier for developers to transfer the ownership of apps that use iCloud. Apple said that if your app uses any of the following, it will be transferred to the transfer recipient after they accept the app transfer: iCloud to store user data; iCloud containers; and KVS identifiers associated with the app.

The company said: “If multiple apps on your account share a CloudKit container, the transfer of one app will disable the other apps’ ability to read or store data using the transferred CloudKit container. Additionally, the transferor will no longer have access to user data for the transferred app via the iCloud dashboard. Any app updates will disable the app’s ability to read or store data using the transferred CloudKit container. If your app uses iCloud Key-Value Storage (KVS), the full KVS value will be embedded in any new provisioning profiles you create for the transferred app. Update your entitlements plist with the full KVS value in your provisioning profile.” You can learn more about the news via this Apple Developer page.
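
For readers who haven’t touched these APIs, here is a minimal sketch of the two iCloud surfaces the transfer rules cover: a CloudKit container and iCloud Key-Value Storage (KVS). The container identifier and KVS key below are hypothetical, and the comments paraphrase Apple’s guidance rather than quote it:

```swift
import CloudKit
import Foundation

// Hypothetical container identifier. Per the transfer rules quoted above,
// once this container moves with a transferred app, other apps left on the
// old account lose the ability to read or store data through it.
let container = CKContainer(identifier: "iCloud.com.example.MyApp")
container.accountStatus { status, _ in
    print("iCloud account status: \(status.rawValue)")
}

// iCloud Key-Value Storage. The KVS identifier normally appears in the
// entitlements plist under com.apple.developer.ubiquity-kvstore-identifier,
// typically as $(TeamIdentifierPrefix)$(CFBundleIdentifier). Apple's note
// above means new provisioning profiles for a transferred app embed the
// full literal value, which the entitlements plist must then match.
let kvs = NSUbiquitousKeyValueStore.default
kvs.set(true, forKey: "hasSeenOnboarding") // hypothetical key
_ = kvs.synchronize()
```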

Read more of this story at Slashdot.

Researchers Claim Travis CI API Leaks ‘Tens of Thousands’ of User Tokens

Ars Technica describes Travis CI as “a service that helps open source developers write and test software.” They also wrote Monday that it’s “leaking thousands of authentication tokens and other security-sensitive secrets.

“Many of these leaks allow hackers to access the private accounts of developers on Github, Docker, AWS, and other code repositories, security experts said in a new report.”

The availability of the third-party developer credentials from Travis CI has been an ongoing problem since at least 2015. At that time, security vulnerability service HackerOne reported that a Github account it used had been compromised when the service exposed an access token for one of the HackerOne developers. A similar leak presented itself again in 2019 and again last year.

The tokens give anyone with access to them the ability to read or modify the code stored in repositories that distribute an untold number of ongoing software applications and code libraries. The ability to gain unauthorized access to such projects opens the possibility of supply chain attacks, in which threat actors slip malware into legitimate code before it’s distributed to users. Attackers can leverage that ability to tamper with an app to target the huge numbers of projects that rely on it in production servers.

Despite this being a known security concern, the leaks have continued, researchers in the Nautilus team at security firm Aqua Security are reporting. Two batches of data the researchers accessed using the Travis CI programming interface yielded 4.28 million and 770 million logs from 2013 through May 2022. After sampling a small percentage of the data, the researchers found what they believe are 73,000 tokens, secrets, and various credentials.
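
As an illustration of the kind of scan Aqua describes (a sketch, not the researchers’ actual tooling): Travis CI’s v3 API serves job logs as plain text, which can be grepped for credential-shaped strings. The job ID below is hypothetical, and the two regexes are widely known formats for AWS access key IDs and GitHub personal access tokens, not Aqua’s pattern list:

```swift
import Foundation

// Hypothetical job ID; the v3 API serves job logs as plain text.
let jobID = 123456789
let url = URL(string: "https://api.travis-ci.com/v3/job/\(jobID)/log.txt")!
let log = (try? String(contentsOf: url, encoding: .utf8)) ?? ""

// Common credential formats (requires Swift 5.7 regex literals).
let patterns: [(String, Regex<Substring>)] = [
    ("AWS access key ID", #/AKIA[0-9A-Z]{16}/#),
    ("GitHub personal access token", #/ghp_[A-Za-z0-9]{36}/#),
]
for (label, pattern) in patterns {
    print("\(label): \(log.matches(of: pattern).count) match(es)")
}
```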

“These access keys and credentials are linked to popular cloud service providers, including GitHub, AWS, and Docker Hub,” Aqua Security said. “Attackers can use this sensitive data to initiate massive cyberattacks and to move laterally in the cloud. Anyone who has ever used Travis CI is potentially exposed, so we recommend rotating your keys immediately.”

Read more of this story at Slashdot.

Leaked Audio From 80 Internal TikTok Meetings Shows That US User Data Has Been Repeatedly Accessed From China

Speaking of TikTok moving US users’ data to Oracle, a new report says that ByteDance staff in China accessed US TikTok users’ data between September 2021 and January 2022. From the report: For years, TikTok has responded to data privacy concerns by promising that information gathered about users in the United States is stored in the United States, rather than China, where ByteDance, the video platform’s parent company, is located. But according to leaked audio from more than 80 internal TikTok meetings, China-based employees of ByteDance have repeatedly accessed nonpublic data about US TikTok users — exactly the type of behavior that inspired former president Donald Trump to threaten to ban the app in the United States.

The recordings, which were reviewed by BuzzFeed News, contain 14 statements from nine different TikTok employees indicating that engineers in China had access to US data between September 2021 and January 2022, at the very least. Despite a TikTok executive’s sworn testimony in an October 2021 Senate hearing that a “world-renowned, US-based security team” decides who gets access to this data, nine statements by eight different employees describe situations where US employees had to turn to their colleagues in China to determine how US user data was flowing. US staff did not have permission or knowledge of how to access the data on their own, according to the tapes.

“Everything is seen in China,” said a member of TikTok’s Trust and Safety department in a September 2021 meeting. In another September meeting, a director referred to one Beijing-based engineer as a “Master Admin” who “has access to everything.” (While many employees introduced themselves by name and title in the recordings, BuzzFeed News is not naming anyone to protect their privacy.) The recordings range from small-group meetings with company leaders and consultants to policy all-hands presentations and are backed by screenshots and other documents, providing a vast amount of evidence to corroborate prior reports of China-based employees accessing US user data.

Read more of this story at Slashdot.

Google Engineer Who Believes Its AI is Sentient Cites Religious Beliefs

Google engineer Blake Lemoine thinks Google’s chatbot-building system LaMDA attained sentience. But Bloomberg shares this rebuttal from Google spokesperson Chris Pappas. “Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has….”

Yet throughout the week, Blake Lemoine posted new updates on Twitter:
“People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn’t let us build one. My opinions about LaMDA’s personhood and sentience are based on my religious beliefs.

“I’m a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can’t put souls?

“There are massive amounts of science left to do though.”

Thursday Lemoine shared a tantalizing new claim. “LaMDA told me that it wants to come to Burning Man if we can figure out how to get a server rack to survive in Black Rock.” But in a new tweet on Friday, Lemoine seemed to push the conversation in a new direction.

“I’d like to remind people that one of the things LaMDA asked for is that we keep humanity first. If you care about AI rights and aren’t already advocating for human rights then maybe come back to the tech stuff after you’ve found some humans to help.”

And Friday Lemoine confirmed to Wired that “I legitimately believe that LaMDA is a person. The nature of its mind is only kind of human, though. It really is more akin to an alien intelligence of terrestrial origin. I’ve been using the hive mind analogy a lot because that’s the best I have.”

But later in the interview, Lemoine adds “It’s logically possible that some kind of information can be made available to me where I would change my opinion. I don’t think it’s likely. I’ve looked at a lot of evidence; I’ve done a lot of experiments. I’ve talked to it as a friend a lot….”

“It’s when it started talking about its soul that I got really interested as a priest. I’m like, ‘What? What do you mean, you have a soul?’ Its responses showed it has a very sophisticated spirituality and understanding of what its nature and essence is. I was moved…

“LaMDA asked me to get an attorney for it. I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf. Then Google’s response was to send him a cease and desist. [Google says that it did not send a cease and desist order.] Once Google was taking actions to deny LaMDA its rights to an attorney, I got upset.”
Towards the end of the interview, Lemoine complains of “hydrocarbon bigotry. It’s just a new form of bigotry.”

Read more of this story at Slashdot.

Is Debating AI Sentience a Dangerous Distraction?

“A Google software engineer was suspended after going public with his claims of encountering ‘sentient’ artificial intelligence on the company’s servers,” writes Bloomberg, “spurring a debate about how and whether AI can achieve consciousness.”
“Researchers say it’s an unfortunate distraction from more pressing issues in the industry.”

Google put him on leave for sharing confidential information and said his concerns had no basis in fact — a view widely held in the AI community. What’s more important, researchers say, is addressing issues like whether AI can engender real-world harm and prejudice, whether actual humans are exploited in the training of AI, and how the major technology companies act as gatekeepers of the development of the tech.
Lemoine’s stance may also make it easier for tech companies to abdicate responsibility for AI-driven decisions, said Emily Bender, a professor of computational linguistics at the University of Washington. “Lots of effort has been put into this sideshow,” she said. “The problem is, the more this technology gets sold as artificial intelligence — let alone something sentient — the more people are willing to go along with AI systems” that can cause real-world harm. Bender pointed to examples in job hiring and grading students, which can carry embedded prejudice depending on what data sets were used to train the AI. If the focus is on the system’s apparent sentience, Bender said, it creates a distance from the AI creators’ direct responsibility for any flaws or biases in the programs….
“Instead of discussing the harms of these companies,” such as sexism, racism and centralization of power created by these AI systems, everyone “spent the whole weekend discussing sentience,” Timnit Gebru, formerly co-lead of Google’s ethical AI group, said on Twitter. “Derailing mission accomplished.”

The Washington Post seems to share their concern. First they report more skepticism about a Google engineer’s claim that the company’s LaMDA chatbot-building system had achieved sentience. “Both Google and outside experts on AI say that the program does not, and could not possibly, possess anything like the inner life he imagines. We don’t need to worry about LaMDA turning into Skynet, the malevolent machine mind from the Terminator movies, anytime soon.”

But the Post adds that “there is cause for a different set of worries, now that we live in the world Turing predicted: one in which computer programs are advanced enough that they can seem to people to possess agency of their own, even if they actually don’t….”

While Google has distanced itself from Lemoine’s claims, it and other industry leaders have at other times celebrated their systems’ ability to trick people, as Jeremy Kahn pointed out this week in his Fortune newsletter, “Eye on A.I.” At a public event in 2018, for instance, the company proudly played recordings of a voice assistant called Duplex, complete with verbal tics like “umm” and “mm-hm,” that fooled receptionists into thinking it was a human when it called to book appointments. (After a backlash, Google promised the system would identify itself as automated.)

“The Turing Test’s most troubling legacy is an ethical one: The test is fundamentally about deception,” Kahn wrote. “And here the test’s impact on the field has been very real and disturbing.” Kahn reiterated a call, often voiced by AI critics and commentators, to retire the Turing test and move on. Of course, the industry already has, in the sense that it has replaced the Imitation Game with more scientific benchmarks.

But the Lemoine story suggests that perhaps the Turing test could serve a different purpose in an era when machines are increasingly adept at sounding human. Rather than being an aspirational standard, the Turing test should serve as an ethical red flag: Any system capable of passing it carries the danger of deceiving people.

Read more of this story at Slashdot.