Noise-canceling headphones use AI to let a single voice through


Modern life is noisy. If you don’t like it, noise-canceling headphones can reduce the sounds in your environment. But they muffle sounds indiscriminately, so you can easily end up missing something you actually want to hear.

A new prototype AI system for such headphones aims to solve this. Called Target Speech Hearing, the system gives users the ability to select a person whose voice will remain audible even when all other sounds are canceled out.

Although the technology is currently a proof of concept, its creators say they are in talks to embed it in popular brands of noise-canceling earbuds and are also working to make it available for hearing aids.

“Listening to specific people is such a fundamental aspect of how we communicate and how we interact in the world with other humans,” says Shyam Gollakota, a professor at the University of Washington, who worked on the project. “But it can get really challenging, even if you don’t have any hearing loss issues, to focus on specific people when it comes to noisy situations.” 

The same researchers previously managed to train a neural network to recognize and filter out certain sounds, such as babies crying, birds tweeting, or alarms ringing. But separating out human voices is a tougher challenge, requiring much more complex neural networks.

That complexity is a problem when AI models need to work in real time in a pair of headphones with limited computing power and battery life. To meet such constraints, the neural networks needed to be small and energy efficient. So the team used an AI compression technique called knowledge distillation: it took a huge AI model that had been trained on millions of voices (the “teacher”) and had it train a much smaller model (the “student”) to imitate its behavior and match its performance.
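To make the teacher-student idea concrete, here is a minimal PyTorch-style sketch of knowledge distillation. The TeacherNet and StudentNet classes, the feature sizes, and the training loop are illustrative assumptions, not the team’s actual models.

```python
# Minimal sketch of knowledge distillation, assuming hypothetical
# TeacherNet/StudentNet models and toy feature sizes.
import torch
import torch.nn as nn

class TeacherNet(nn.Module):  # large model trained on millions of voices
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, 256))
    def forward(self, x):
        return self.net(x)

class StudentNet(nn.Module):  # small model that must run on-device
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 256))
    def forward(self, x):
        return self.net(x)

teacher, student = TeacherNet().eval(), StudentNet()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # the student regresses the teacher's outputs

for step in range(100):                      # toy training loop
    noisy = torch.randn(8, 256)              # stand-in for mixture features
    with torch.no_grad():
        target = teacher(noisy)              # teacher's separation output
    loss = loss_fn(student(noisy), target)   # imitate the teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key design point is that only the student ever has to fit on the headphones; the teacher can be arbitrarily large because it is used just once, at training time.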

The student was then taught to extract the vocal patterns of specific voices from the surrounding noise captured by microphones attached to a pair of commercially available noise-canceling headphones.

To activate the Target Speech Hearing system, the wearer holds down a button on the headphones for several seconds while facing the person to be focused on. During this “enrollment” process, the system captures an audio sample from both headphones and uses this recording to extract the speaker’s vocal characteristics, even when there are other speakers and noises in the vicinity.
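As a rough illustration of that enrollment step, the sketch below turns a short two-channel recording into a fixed-length voice embedding. The enroll function and its spectral-averaging “voice print” are stand-ins chosen to keep the example self-contained; the real system uses a trained neural network for this step.

```python
# A stand-in for the enrollment step: turn a few seconds of two-channel
# audio from the ear microphones into a fixed-length voice embedding.
import numpy as np

def enroll(binaural_clip: np.ndarray) -> np.ndarray:
    """binaural_clip has shape (2, num_samples): left and right ear mics."""
    mono = binaural_clip.mean(axis=0)              # collapse to one channel
    usable = len(mono) // 512 * 512
    frames = mono[:usable].reshape(-1, 512)        # split into 512-sample frames
    spectra = np.abs(np.fft.rfft(frames, axis=1))  # magnitude spectrum per frame
    embedding = spectra.mean(axis=0)               # average into one "voice print"
    return embedding / (np.linalg.norm(embedding) + 1e-8)

# Three seconds of fake 16 kHz audio standing in for the button-press capture.
clip = np.random.randn(2, 16000 * 3)
voice_print = enroll(clip)  # fixed-length embedding of the target speaker
```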

These characteristics are fed into a second neural network running on a microcontroller connected to the headphones via a USB cable. This network runs continuously, keeping the chosen voice separate from those of other people and playing it back to the listener. Once the system has locked onto a speaker, it keeps prioritizing that person’s voice, even if the wearer turns away. And the more audio the system gathers from a speaker’s voice, the better it becomes at isolating it.
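A rough sketch of what that always-on loop might look like is below. The separate_chunk function is a placeholder for the on-device network, and the chunk size is an assumption about what keeps latency low enough for live playback.

```python
# Sketch of the always-on separation loop. separate_chunk is a placeholder
# for the on-device neural network; chunk size and sample rate are assumed.
import numpy as np

CHUNK = 128                     # ~8 ms of audio at 16 kHz

def separate_chunk(chunk: np.ndarray, voice_print: np.ndarray) -> np.ndarray:
    # Placeholder: the real network would suppress everything in `chunk`
    # except the voice matching `voice_print`.
    return chunk

def playback_loop(mic_stream, voice_print: np.ndarray):
    """Runs continuously, yielding audio to play into the wearer's ears."""
    for chunk in mic_stream:                      # chunks from the ear mics
        yield separate_chunk(chunk, voice_print)  # only the target voice

# Toy usage: ten chunks of fake mic audio and a dummy voice print.
stream = (np.random.randn(CHUNK) for _ in range(10))
for out in playback_loop(stream, np.zeros(257)):
    pass  # in a real system, `out` would go to the headphone drivers
```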

For now, the system can successfully enroll a targeted speaker only when that person’s voice is the only loud one present, but the team aims to make it work even when the loudest voice in a particular direction is not the target speaker.

Singling out one voice in a loud environment is very tough, says Sefik Emre Eskimez, a senior researcher at Microsoft who works on speech and AI but was not involved in the research. “I know that companies want to do this,” he says. “If they can achieve it, it opens up lots of applications, particularly in a meeting scenario.”

While speech separation research tends to be more theoretical than practical, this work has clear real-world applications, says Samuele Cornell, a researcher at Carnegie Mellon University’s Language Technologies Institute, who did not work on the research. “I think it’s a step in the right direction,” Cornell says. “It’s a breath of fresh air.”


