New Developments in Hearing Assisted Technology

When we’re out in a crowd waiting for someone, our brain and ears work together to pick out that person’s voice, often before we can see them walking our way. Parents at the playground are similarly tuned to their children’s voices, shrieks, and giggles, and can pick them out of a crowded area easily.

For those living with hearing loss, this is often impossible. Picking out distinct voices becomes difficult, and while hearing aids can amplify sound so people can hear again, these devices have limitations. You will hear the people talking around you, but picking out one specific voice from the many others, on top of any background noise, isn’t something a hearing aid can do.

This means that everyone is amplified equally, often along with clinking glasses and silverware, people’s laughter, and subtle music, though most devices do a good job of muting background sounds a bit. Hearing aids amplify sound; devices on today’s market can’t raise a particular person’s voice above all the other noise.

Through the wonders of technology, this is looking like a distinct possibility in the future. Engineers at Columbia University’s Zuckerman Institute have been experimenting with artificial intelligence that allows brain-controlled hearing devices to amplify the specific voice the brain is attending to.

Researchers learned that when two people talk to each other, the listener’s brain waves begin to resemble the speaker’s. Armed with this understanding, engineers have used the brain’s powerful and sensitive speech-separation abilities to build intricate mathematical models. These models pick out the voices of individual speakers and compare each one to the listener’s brain waves, then amplify the voice that most closely matches.
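
The matching step described above can be sketched in a few lines of code. This is a simplified illustration, not the researchers’ actual algorithm: it assumes the voices have already been separated into amplitude envelopes and that a speech envelope has already been decoded from the listener’s neural recordings. All function and variable names here are illustrative.

```python
import numpy as np

def pick_attended_voice(separated_voices, listener_envelope):
    """Return the index of the voice whose envelope best matches
    the envelope decoded from the listener's brain activity.

    separated_voices : list of 1-D arrays, one amplitude envelope
        per speaker (the output of a speech-separation stage).
    listener_envelope : 1-D array, the speech envelope reconstructed
        from the listener's neural recordings.
    """
    scores = []
    for voice in separated_voices:
        # Pearson correlation: how closely this voice tracks
        # the envelope the brain appears to be following.
        r = np.corrcoef(voice, listener_envelope)[0, 1]
        scores.append(r)
    return int(np.argmax(scores))

def amplify_attended(separated_voices, listener_envelope, gain=4.0):
    """Boost the best-matching voice and mix all voices back together."""
    idx = pick_attended_voice(separated_voices, listener_envelope)
    mixed = np.zeros_like(separated_voices[0])
    for i, voice in enumerate(separated_voices):
        mixed += voice * (gain if i == idx else 1.0)
    return mixed
```

The key design idea is that no speaker-specific training is needed: any voice that correlates with the listener’s neural signal can be selected, which is what lets the newer system handle voices it has never encountered.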

With this information, the system can boost the speaker the listener wishes to hear. An earlier version had a downfall: it needed to be trained ahead of time to recognize each specific speaker. When a new person joined the conversation, it could not pick up the new voice, and so it failed.

If you were at a ball game with friends and someone new sat down beside you and began talking, the previous version would fail. With recent technological advancements, that issue has been resolved. Columbia Technology Ventures funded the overhaul of the original model’s algorithms. Nima Mesgarani, PhD, along with Cong Han and James O’Sullivan, PhD, drew on the brain’s sensory processing to create a more advanced system.

This technological advancement is still experimental, but it can adapt to any speaker the listener concentrates on.

“Our end result was a speech-separation algorithm that performed similarly to previous versions but with an important improvement,” said Dr. Mesgarani. “It could recognize and decode a voice — any voice — right off the bat.”

To test the algorithm’s performance, the research team worked with Ashesh Dinesh Mehta, MD, PhD, a neurosurgeon at the Northwell Health Institute for Neurology and Neurosurgery. Dr. Mehta treats epilepsy patients, some of whom undergo brain surgery as part of their care.

“These patients volunteered to listen to different speakers while we monitored their brain waves directly via electrodes implanted in the patients’ brains,” said Dr. Mesgarani. “We then applied the newly developed algorithm to that data.”

With the cooperation of these generous volunteers, the researchers were able to follow each volunteer’s focus as new speakers were introduced, people the volunteers had never heard speak before. The system amplified whichever voice the patient focused on, and if their attention shifted to another speaker, it automatically adjusted to the new one.
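
The automatic switching described above can be approximated by re-running the same voice-to-brainwave comparison over short sliding windows, so the amplified voice follows the listener’s attention as it moves. This is a hypothetical sketch, not the published method; the window length, hop size, and names are illustrative assumptions.

```python
import numpy as np

def track_attention(separated_voices, listener_envelope, window=200, hop=100):
    """For each analysis window, report which voice the listener
    appears to be attending to, by correlating each separated
    voice's envelope with the neurally decoded envelope.

    Returns a list with one attended-speaker index per window.
    """
    n = len(listener_envelope)
    attended = []
    for start in range(0, n - window + 1, hop):
        seg = listener_envelope[start:start + window]
        # Correlate every candidate voice against this window of
        # the listener's decoded envelope.
        scores = [np.corrcoef(v[start:start + window], seg)[0, 1]
                  for v in separated_voices]
        attended.append(int(np.argmax(scores)))
    return attended
```

In a real device the window length would trade off responsiveness against stability: short windows switch quickly when attention shifts, while longer windows give more reliable correlations.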

To date, the system has not been tested outdoors, but the goal is for it to work as well outdoors as it does inside. Whether in a crowded bar or at a ball game watching your kids, the aim is for the wearer to clearly hear whichever speaker they are focusing on.

Current testing involves patients already undergoing epilepsy monitoring, since doctors can implant the electrodes during their scheduled surgeries. The hope for the future is that researchers will find a way to transform the prototype into a device that attaches noninvasively to the scalp or near the ear. Researchers also plan to advance the algorithm further, so it can operate on a wider scale and in more environments.

In our ever-changing world of technology, this and more will be possible in the future. Doctors and researchers today are working to even the odds for those living with hearing loss, so they can enjoy the same advantages as people with typical hearing.
