Apple AirPods Can Work As More Affordable Hearing Aids, Study Finds

A new study has found that Apple’s wireless earbuds can serve as a more affordable and accessible sound amplification device than medical hearing aids. Gizmodo reports: Inspired by a feature called Live Listen released in 2016 by Apple, which allows an iPhone or iPad to be used as a sound-boosting microphone, researchers from the Taipei Veterans General Hospital wondered whether the performance of AirPods 2 and the original AirPods Pro using this feature could compare to medical hearing aids. Apple does not position Live Listen as a tool for those dealing with hearing loss but as a way for users with normal hearing to boost desired sounds, like the calls of a bird. However, the researchers found that, in some situations, consumer-level personal sound amplification products fared quite well against pricier medically prescribed solutions, and given the popularity of products like Apple’s AirPods, there’s no stigma associated with wearing them.

The researchers tested the $129 AirPods 2 and $249 AirPods Pro paired with iPhone XS Max smartphones running iOS 13. They compared these against the $10,000 OTICON Opn 1 behind-the-ear hearing aids and a more affordable alternative, the $1,500 Bernafon MD1. The four options were tested with 21 participants dealing with mild to moderate hearing loss, who were asked to repeat short sentences read to them by the researchers in varying environments. In a quiet setting, the AirPods Pro were found to perform as well as the cheaper hearing aids and almost as well as the premium model, while the AirPods 2 performed the worst of all four tested devices but still helped participants hear what was being read to them more clearly than not using a sound-enhancing device at all. In a noisy environment, the AirPods Pro performed even closer to the premium hearing aid model, thanks to their built-in noise cancellation, but only when the distracting noises were coming from the sides of the participant. When the noise came from the front, alongside the sample sentences being read by the researchers, both wireless earbud products failed to improve what was being heard. “Hearing aids remain the best option for those dealing with hearing loss, but for those who don’t have access to them for whatever reason, a cheaper product like Apple’s AirPods Pro could provide noticeable improvements in hearing and clarity for those dealing with mild-to-moderate hearing loss and could serve as a useful alternative until over-the-counter solutions are more readily available and affordable,” concludes the report.

Earlier this year, the Food and Drug Administration decided to allow hearing aids to be sold over the counter and without a prescription to adults, a decision that “could fundamentally change technology,” said Nicholas Reed, an audiologist at the Department of Epidemiology at Johns Hopkins Bloomberg School of Public Health. Sony’s first OTC hearing aids were announced last month.

Read more of this story at Slashdot.

Meta’s AI-Powered Audio Codec Promises 10x Compression Over MP3

Last week, Meta announced an AI-powered audio compression method called “EnCodec” that can reportedly compress audio to one-tenth the size of 64kbps MP3 with no loss in quality. Meta says this technique could dramatically improve the sound quality of speech on low-bandwidth connections, such as phone calls in areas with spotty service. The technique also works for music. Ars Technica reports: Meta debuted the technology on October 25 in a paper titled “High Fidelity Neural Audio Compression,” authored by Meta AI researchers Alexandre Defossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. Meta also summarized the research on its blog devoted to EnCodec.
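Taken at face value, the arithmetic behind that claim is simple: a tenth of MP3's 64 kbps is about 6.4 kbps. A quick back-of-the-envelope sketch (the bitrates come from the claim above; the one-minute clip length is an arbitrary choice for illustration):

```python
def size_kb(bitrate_kbps, seconds):
    # kilobits per second -> kilobytes for a clip of the given length
    return bitrate_kbps * seconds / 8

mp3_64 = size_kb(64, 60)        # one minute of MP3 at 64 kbps
encodec = size_kb(64 / 10, 60)  # the same minute at a 10x smaller bitrate
print(mp3_64, encodec)          # 480.0 48.0
```

At those rates, a minute of audio drops from roughly 480 KB to roughly 48 KB, which is why Meta frames the technique around low-bandwidth connections.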

Meta describes its method as a three-part system trained to compress audio to a desired target size. First, the encoder transforms uncompressed data into a lower frame rate “latent space” representation. The “quantizer” then compresses the representation to the target size while keeping track of the most important information that will later be used to rebuild the original signal. (This compressed signal is what gets sent through a network or saved to disk.) Finally, the decoder turns the compressed data back into audio in real time using a neural network on a single CPU.
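As a rough illustration of that three-part shape (not Meta's actual architecture), here is a toy numpy pipeline in which frame averaging stands in for the learned encoder and a crude scalar quantizer stands in for EnCodec's learned quantizer. All function names and parameters are invented for illustration:

```python
import numpy as np

def encode(audio, frame=4):
    # "Encoder": reduce temporal resolution by averaging groups of samples
    # into a lower-frame-rate latent representation.
    n = len(audio) - len(audio) % frame
    return audio[:n].reshape(-1, frame).mean(axis=1)

def quantize(latent, levels=16):
    # "Quantizer": map each latent value in [-1, 1] to one of `levels`
    # discrete codes. This integer stream is what would be sent or stored.
    codes = np.round((latent + 1.0) / 2.0 * (levels - 1))
    return np.clip(codes, 0, levels - 1).astype(np.uint8)

def decode(codes, frame=4, levels=16):
    # "Decoder": invert the quantizer and upsample back to the original
    # rate (a neural decoder would reconstruct far more faithfully).
    latent = codes.astype(float) / (levels - 1) * 2.0 - 1.0
    return np.repeat(latent, frame)

signal = np.sin(np.linspace(0, 8 * np.pi, 64))  # toy "audio"
codes = quantize(encode(signal))
restored = decode(codes)
print(codes.dtype, len(codes), len(restored))   # uint8 16 64
```

The point of the sketch is the data flow: 64 samples become 16 small integer codes (the compressed payload), which the decoder expands back to 64 samples.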

Meta’s use of discriminators proves key to creating a method for compressing the audio as much as possible without losing key elements of a signal that make it distinctive and recognizable: “The key to lossy compression is to identify changes that will not be perceivable by humans, as perfect reconstruction is impossible at low bit rates. To do so, we use discriminators to improve the perceptual quality of the generated samples. This creates a cat-and-mouse game where the discriminator’s job is to differentiate between real samples and reconstructed samples. The compression model attempts to generate samples to fool the discriminators by pushing the reconstructed samples to be more perceptually similar to the original samples.”
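The adversarial setup described in the quote can be sketched with generic GAN-style hinge losses, a common formulation in the literature, though not necessarily the exact losses Meta uses; the score values below are made-up numbers:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # Hinge loss: the discriminator is pushed to score real samples
    # above +1 and reconstructed samples below -1.
    return (np.mean(np.maximum(0, 1 - d_real))
            + np.mean(np.maximum(0, 1 + d_fake)))

def generator_adv_loss(d_fake):
    # The compression model ("generator") is rewarded when the
    # discriminator scores its reconstructions as if they were real.
    return np.mean(np.maximum(0, 1 - d_fake))

# Toy discriminator scores: it currently separates real from fake well,
# so its own loss is small and the compression model's loss is large.
d_real = np.array([1.2, 0.9, 1.5])
d_fake = np.array([-1.1, -0.8, -1.3])
print(round(discriminator_loss(d_real, d_fake), 3))
print(round(generator_adv_loss(d_fake), 3))
```

The "cat-and-mouse game" is the alternation: one step lowers the discriminator's loss, the next lowers the compression model's, pulling reconstructions toward perceptual similarity with the originals.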

Read more of this story at Slashdot.

Researchers Develop a Paper-Thin Loudspeaker

MIT engineers have developed a paper-thin loudspeaker that can turn any surface into an active audio source. MIT News reports: This thin-film loudspeaker produces sound with minimal distortion while using a fraction of the energy required by a traditional loudspeaker. The hand-sized loudspeaker the team demonstrated, which weighs about as much as a dime, can generate high-quality sound no matter what surface the film is bonded to. To achieve these properties, the researchers pioneered a deceptively simple fabrication technique, which requires only three basic steps and can be scaled up to produce ultrathin loudspeakers large enough to cover the inside of an automobile or to wallpaper a room.

A typical loudspeaker found in headphones or an audio system passes an electric current through a coil of wire, generating a magnetic field that moves a speaker membrane; the membrane moves the air above it, producing the sound we hear. By contrast, the new loudspeaker simplifies the design by using a thin film of a shaped piezoelectric material that moves when voltage is applied across it, which moves the air above it and generates sound. […]

They tested their thin-film loudspeaker by mounting it to a wall 30 centimeters from a microphone to measure the sound pressure level, recorded in decibels. When 25 volts of electricity were passed through the device at 1 kilohertz (a rate of 1,000 cycles per second), the speaker produced high-quality sound at conversational levels of 66 decibels. At 10 kilohertz, the sound pressure level increased to 86 decibels, about the same volume level as city traffic. The energy-efficient device requires only about 100 milliwatts of power per square meter of speaker area. By contrast, an average home speaker might consume more than 1 watt of power to generate similar sound pressure at a comparable distance. The researchers demonstrated the speaker in action, playing “We Are the Champions” by Queen.
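Because decibels are logarithmic, the reported jump from 66 dB to 86 dB is larger than it might look. A short sketch of the standard sound-pressure-level arithmetic (the 66 and 86 dB figures are from the tests above):

```python
def pressure_ratio(db_change):
    # Sound pressure level is logarithmic: SPL = 20 * log10(p / p0),
    # so a change of D dB multiplies sound pressure by 10**(D/20).
    return 10 ** (db_change / 20)

def intensity_ratio(db_change):
    # Acoustic intensity (power) scales with pressure squared: 10**(D/10).
    return 10 ** (db_change / 10)

delta = 86 - 66  # the jump between the 1 kHz and 10 kHz measurements
print(pressure_ratio(delta))   # 10.0  -> 10x the sound pressure
print(intensity_ratio(delta))  # 100.0 -> 100x the acoustic power
```

In other words, the 10 kHz measurement corresponds to one hundred times the acoustic power of the 1 kHz one.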

Read more of this story at Slashdot.

Bowie Estate Sells Songwriting Catalog to Warner Music

David Bowie’s estate has sold his entire songwriting catalog to Warner Music, including classics like “Space Oddity,” “Let’s Dance” and “Heroes,” in the latest blockbuster deal for music rights. The New York Times reports: Warner’s music publishing division, Warner Chappell, announced the agreement on Monday, saying that it encompassed Bowie’s entire corpus as a songwriter, from the material on his 1967 debut album, “David Bowie,” to his final album, “Blackstar,” released just before Bowie’s death in 2016 at age 69. The deal, for more than 400 songs, also includes soundtrack music; the material for Bowie’s short-lived band Tin Machine from the late 1980s and early ’90s; and other works. The price of the transaction was not disclosed, but is estimated at about $250 million. “These are not only extraordinary songs, but milestones that have changed the course of modern music forever,” Guy Moot, the chief executive of Warner Chappell, said in a statement. David Bowie, the so-called “most wired rock star on the planet,” has been featured in a number of Slashdot stories over the years.

In 2002, Bowie talked about his new album, distribution deal with Sony, and how he’s “fully confident that copyright, for instance, will no longer exist in 10 years, and authorship and intellectual property is in for such a bashing.”

In the late 90s, Bowie advocated for MP3s, telling The Guardian that they “could change the entire idea of what music is — and that isn’t so bad.” Years later, he seemed to agree that concert ticket prices needed to increase to offset the rise in P2P file sharing and illegal downloads.

Read more of this story at Slashdot.

How Peter Jackson’s Beatles Documentary Used Custom AI To Remove Background Noise

Peter Jackson’s seven-hour documentary “Get Back” (now streaming on Disney+) edits footage from the Beatles’ ambitious recording sessions for their 1970 album Let It Be. But long-time Slashdot reader MattSparkes writes that the whole documentary “would have been impossible without custom-made artificial intelligence, say sound engineers.”
Sixty hours of footage were recorded but most of the audio was captured by a single microphone that picked up the musicians’ instruments in a noisy jumble rather than a carefully crafted mix. It also recorded background noise and chatter, which made much of the footage unusable.

The team scoured academic papers on using AI to separate audio sources but realised that none of the previous research would work for a music documentary. They consulted with Paris Smaragdis at the University of Illinois Urbana-Champaign and started to create a neural network called MAL (machine assisted learning) and a set of training data that was higher quality than datasets used in academic experiments.
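The details of MAL are not public, but mask-based separation, the dominant approach in the research the team surveyed, can be illustrated with a deliberately easy toy case. Here the two "sources" never overlap in time, so a perfect mask exists; real recordings overlap constantly, which is exactly what makes the problem hard and why a trained network was needed. All signals below are invented for illustration:

```python
import numpy as np

t = np.linspace(0, 1, 800, endpoint=False)
# Toy sources confined to disjoint time regions: a low tone ("music")
# in the first half, a buzzy square wave ("chatter") in the second.
music = np.where(t < 0.5, np.sin(2 * np.pi * 5 * t), 0.0)
chatter = np.where(t >= 0.5, 0.3 * np.sign(np.sin(2 * np.pi * 37 * t)), 0.0)
mixture = music + chatter  # the single-microphone "jumble"

# An oracle soft mask: the fraction of each sample's energy belonging to
# the target. A separator like MAL must *estimate* such a mask from the
# mixture alone; here we cheat and compute it from the known sources.
eps = 1e-12
mask = np.abs(music) / (np.abs(music) + np.abs(chatter) + eps)
separated = mask * mixture

print(np.allclose(separated, music))  # True: the oracle mask recovers the target
```

With overlapping sources the oracle mask is no longer perfect, and estimating it from the mixture alone is the part that required a purpose-built network and high-quality training data.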
The Washington Post describes it as “a sort of sonic forensics,” adding that the name MAL was a deliberate homage to the HAL computer in 2001: A Space Odyssey — and to the Beatles’ beloved road manager and principal assistant, Mal Evans.
Using MAL, Jackson and his colleagues were able to painstakingly and precisely isolate each and every audio track — be it musical instrumentation, singing or studio chatter — from the original mono recordings made for most of “Let It Be.”

“What we’ve managed to do is split it all apart in a way that is utterly clean and sounds much better,” Jackson said.

Other interesting observations from the Post:

“Get Back” tapped nearly 120 hours of previously unheard audio recordings. Jackson and his team started work in 2017.
Jackson’s team also “carefully restored, upgraded and enlarged the grainy original 16-millimeter” footage from the 1969 documentary Let It Be “so that it now pops with vibrant color.”
Jackson’s documentary “was originally set to open in theaters last year as a two-and-a-half hour feature film, but was pushed back by the pandemic. With more time unexpectedly on his hands, Jackson transformed his feature film into the six-hour epic….”
Jackson would also like to release an expanded director’s cut sometime in the future, “but there are no current plans to do so.”
“At one point, Jackson’s favorite version of his Get Back film clocked in at 18 hours…”

Read more of this story at Slashdot.