
I was anxious and depressed as I walked into the party because I knew it would be crowded and noisy. As I feared, I was greeted by thumping music, the clatter of glasses and dishes, and a cacophony of loud conversations all around me.
It was the kind of environment that I, like just about anyone with hearing loss, try to avoid: a room where speech is drowned out by background noise, and where I would sometimes find myself in a corner wearing a clueless smile.
But this time it was different. As I sat down with my friends, something changed. The flood of noise that surrounded me seemed to recede and the voices of my friends became clearer and more distinct to the point where I could understand them. For me, that was the most dramatic example of AI’s ability to untangle speech from noise.
I was wearing a pair of Phonak Audéo Sphere Infinio hearing aids which were, in effect, doing the work that my brain was once able to do. When I had normal hearing, my brain could sift and sort through the signals it was receiving from my ears to pick out voices, for example, in a noisy room. But now, because of my severe hearing loss, my brain simply does not receive enough information to do the job.
Phonak and the other Big Five hearing aid companies are now using, or plan to use, deep neural networks (DNNs) to deliver a cleaned-up signal with much of the “noise” removed. We are at the dawn of the biggest revolution in hearing aid technology since the introduction of digital processors some 25 years ago.
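Phonak hasn’t published the details of its network, but the core idea behind DNN denoising is easy to sketch: the network looks at short slices of sound and estimates, for each frequency band, how much of the energy is speech versus noise, then scales each band accordingly. The function name and numbers below are mine, purely for illustration, not Phonak’s:

```python
import numpy as np

def apply_dnn_mask(noisy_spectrum, mask):
    """Apply a (hypothetical) DNN-predicted mask to a noisy magnitude spectrum.

    In a real hearing aid the mask would come from a trained network;
    here we only show the masking arithmetic itself.
    """
    mask = np.clip(mask, 0.0, 1.0)   # masks are per-band gains in [0, 1]
    return noisy_spectrum * mask

# Toy example: two frequency bands dominated by speech, two by noise.
noisy = np.array([1.0, 0.2, 0.9, 0.1])    # magnitudes per frequency band
mask = np.array([0.95, 0.05, 0.9, 0.1])   # network's "how much is speech" estimate
clean_estimate = apply_dnn_mask(noisy, mask)
```

The speech-dominated bands pass through nearly untouched while the noise-dominated bands are suppressed, which is exactly the “untangling” effect I heard at the party.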
It’s a revolution that’s caught the attention of Dr. Lawrence Lustig of Columbia University, one of the top ENTs in the U.S. and a leading expert on hearing loss. He believes AI-equipped hearing aids will likely forestall the need for cochlear implants in many of his patients, and says the data he has seen on the Sphere’s speech-in-noise capability is “jaw-dropping and it’s going to keep traditional implant candidates in hearing aids longer. I think it’s going to have a major impact.”
As someone who is on a slow slide into silence that will likely end with a cochlear implant, I find this very heartening news.
All of this requires a lot of computing horsepower. Phonak’s solution was to put two chips into each Sphere. One does the more conventional work of hearing aids while the other, which Phonak calls Deepsonic, is dedicated solely to AI and DNN processing.
“It took a tremendous amount of investment and effort to develop a chipset that is this powerful and can run in real time on our hearing aids,” says Jason Galster, the VP of Clinical Research at Sonova, Phonak’s Swiss parent company.
As you can guess, the Spheres don’t come cheap but their price is in line with other premium prescription hearing aids.
Galster also points out that while the Sphere’s AI capability gets most of the attention, it also incorporates other helpful technologies. “These are tremendously complex devices and the technology is evolving. For example, we also have a feature called ‘stereo zoom.’ We use the two microphones in each hearing aid, linked wirelessly to each other, to create a real-time network of four microphones. That allows for focused directionality. So we get a narrow beam directly in front of the wearer so they can focus on, say, the voice of one person sitting across from them.”
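Galster’s “narrow beam” is a classic beamforming technique. Phonak’s implementation is proprietary, but the simplest version, delay-and-sum, illustrates the principle: shift each microphone’s signal so that sound from straight ahead lines up across all four mics, then average. The frontal voice adds coherently while off-axis sound adds incoherently and is attenuated. All names and delay values below are illustrative:

```python
import numpy as np

def delay_and_sum(signals, delays_samples):
    """Align each microphone signal by its steering delay and average.

    signals: equal-length arrays, one per microphone.
    delays_samples: integer arrival delays (in samples) for a source
    straight ahead; undoing them makes the frontal source's copies
    add coherently while off-axis sound partially cancels.
    """
    out = np.zeros_like(signals[0])
    for sig, d in zip(signals, delays_samples):
        out += np.roll(sig, -d)   # undo the arrival delay (toy: circular shift)
    return out / len(signals)

# Demo: a 440 Hz tone reaches the four mics with known sample offsets.
fs = 16000
t = np.arange(256) / fs
source = np.sin(2 * np.pi * 440 * t)
delays = [0, 1, 2, 3]                        # illustrative arrival offsets
mics = [np.roll(source, d) for d in delays]  # each mic hears a delayed copy
beam = delay_and_sum(mics, delays)           # recovers the frontal source
```

In a real hearing aid the delays are fractional and continuously steered, but the idea is the same: geometry, not magic, creates the beam.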
“Another example: we invest a lot in speech-in-noise, but we also know that speech in quiet environments is important, since our research shows that’s where users spend 54% of their time. So we introduced ‘speech enhancer,’ which picks up low-level speech in quiet places and boosts it to make it more audible. That helps when someone is talking to you from across the room or even another part of the house.”
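The behavior he describes amounts to level-dependent gain: quiet speech gets a boost, louder speech is left alone. Phonak’s actual gain curve isn’t public; this toy function, with made-up threshold and slope values, just shows the shape of the idea:

```python
def speech_boost_db(input_level_db, threshold_db=-50.0, max_boost_db=10.0):
    """Toy level-dependent gain for a 'speech enhancer' style feature.

    Speech below the threshold gets extra gain, growing as the input
    gets quieter, capped at max_boost_db. All numbers are illustrative,
    not Phonak's actual tuning.
    """
    if input_level_db >= threshold_db:
        return 0.0  # normal-level speech: no extra boost
    # half a dB of boost for every dB below threshold, up to the cap
    return min(max_boost_db, (threshold_db - input_level_db) * 0.5)
```

So a voice at a comfortable level passes through unchanged, while a quiet voice from across the room picks up a few decibels of help.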
Phonak is an outlier in the industry when it comes to Bluetooth. “We know that if you’re limiting yourself to a Made for iPhone or Made for Android connection, that’s a clear bottleneck,” says Galster. “You’re only allowed to connect to a certain ecosystem of devices, and we really wanted to make sure that patients had access to as many devices as possible.”
So Phonak opted for Bluetooth Classic, which means, for example, that I can connect directly to my laptop without an intermediate device such as a streamer. That has proven to be a very useful feature.
The myPhonak app for iPhone and Android offers a suite of settings and adjustments, many of which I found useful. But for the most part I leave it set on “Automatic,” which also switches on the “Sphere” AI setting in noisy places.
The app also lets you keep an eye on battery levels. When the “Sphere” mode and the Deepsonic chip kick in, they draw a lot of power, significantly reducing battery life. I should also point out that the extra chip makes these hearing aids a little bigger and wider, which may give some users pause.
Overall, if you are going to lose your hearing, you’ve picked a good time. The AI revolution is here and, for the moment at least, Phonak is leading the pack.