Artificial Intelligence and Hearing Aids

HHTM
May 29, 2018

Artificial Intelligence (AI) has become a common buzzword of late. AI used to be of interest only to bearded computer nerds (me) and fans of sci-fi movies (also me), but the term has now entered everyday language, and we see newspapers running stories about people losing their jobs to smart robots, driverless cars arriving on our roads and intelligent computers beating humans at board games like chess and Go. AI has hit the big time.


What is AI anyway?


AI can be defined as a computer’s ability to make a decision without being explicitly told to by a human; for example, a Tesla can recognize a road without the driver’s input, Deep Blue could determine strong chess moves on its own, and so on.

At a basic level, any “intelligent” machine is making a series of “if this or that” decisions based on some input data. Deep Blue’s most basic inputs would be the chess board and the positions of all the pieces; from those alone it could pick a reasonable next move. In practice it has many more inputs than that: all the previous moves in the current game, whether the opponent is playing an offensive or defensive strategy, every potential move from this point onwards for itself and the opponent, how long the game has taken, and so on.


Any AI machine has a list of inputs and a set of defined yes/no decisions to make on them. Once it has worked through all of those decisions it arrives at a final output: in chess, which piece to move and where to move it; for the Tesla, what driving action to take (drive on, brake, turn, indicate), and so on.
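
As a toy illustration of that idea, here is a minimal sketch of the “inputs in, cascade of yes/no checks, single action out” pattern. The inputs, thresholds and actions are invented for the example and bear no relation to how a real driving system decides:

    # A minimal sketch of an "inputs in, decision out" cascade.
    # The inputs and thresholds are invented purely for illustration.

    def choose_driving_action(obstacle_distance_m, speed_kmh, light_is_red):
        """Walk through a series of yes/no checks and return one action."""
        if light_is_red:
            return "brake"
        if obstacle_distance_m < 10:
            return "brake"
        if obstacle_distance_m < 30 and speed_kmh > 50:
            return "slow down"
        return "drive on"

    print(choose_driving_action(obstacle_distance_m=8, speed_kmh=40, light_is_red=False))    # brake
    print(choose_driving_action(obstacle_distance_m=100, speed_kmh=60, light_is_red=False))  # drive on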


Machines are able to make these decisions on their own because they have been trained on a huge amount of test input. Deep Blue will have been fed millions of chess games, including every move and which color won, and it refers back to all of that during its decision-making in the current game. Similarly, a Tesla will have driven billions of test miles in simulation, seen millions of people step out into the road, and braked for countless cars at junctions.
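
One crude way to picture that training step is to imagine the machine keeping score of which past choices led to good outcomes, and favouring those choices next time. The games and results below are invented purely for illustration; real chess engines and driving systems are vastly more sophisticated:

    # Sketch: "training" as counting which logged choices led to good outcomes.
    # The past games and moves are invented.

    from collections import defaultdict

    past_games = [
        # (opening move, did that side win?)
        ("e4", True), ("d4", True), ("e4", False), ("e4", True), ("c4", False),
    ]

    win_counts = defaultdict(lambda: [0, 0])   # move -> [wins, games played]
    for move, won in past_games:
        win_counts[move][1] += 1
        if won:
            win_counts[move][0] += 1

    def best_opening():
        """Pick the move with the highest historical win rate."""
        return max(win_counts, key=lambda m: win_counts[m][0] / win_counts[m][1])

    print(best_opening())  # d4 (won its only logged game)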


AI and audiology


To my mind, an obvious application of AI in a hearing aid is giving the aid the ability to optimize its own settings based on factors other than sound. To do that, we would need to train the hearing aid with its owner’s data and the environments it is used in. So what could our input data be?

If the technology were available, the aid could log the following (a sketch of what one such log record might look like in code follows the list):

  • Longitude / latitude
  • Time & date
  • Signal-to-noise ratio
  • Current hearing program in use
  • Volume setting
  • Bio data from a smartwatch
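
A hedged sketch of what a single logged record could look like in code; the field names and values are invented for illustration, and no real hearing aid exposes such a structure today:

    # One invented data point a hearing aid (or paired phone) might log.
    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class HearingAidLogEntry:
        timestamp: datetime            # time & date
        latitude: float                # GPS position
        longitude: float
        snr_db: float                  # estimated signal-to-noise ratio
        program: str                   # hearing program currently in use
        volume: int                    # volume setting
        heart_rate_bpm: Optional[int]  # bio data from a paired smartwatch, if any

    entry = HearingAidLogEntry(
        timestamp=datetime(2018, 5, 25, 20, 15),
        latitude=51.4545, longitude=-2.5879,
        snr_db=4.2, program="restaurant", volume=6, heart_rate_bpm=72,
    )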

If a hearing aid were logging all of that day-to-day, it could start to make assertions like: “Mary hears effortlessly in the diner between 8pm and 9pm on Fridays; she has been awake for 12 hours and has not modified her hearing program or changed volume today”.

A combination of GPS data, sound input data and the hearing aid’s own interpretation of listening performance, coupled with biometric data from a smartwatch, would build into a deep and thorough dataset.


I know from my own experience that I can’t understand and follow speech as easily when I’m tired. If a hearing aid knew when someone was tired, it could automatically adjust itself to help them more during those times or situations.


That is a lot of data for a hearing aid to record, and it would need somewhere to store it all. The data would then have to be processed and assertions made about the best aid settings for Mary to use the next time she hits that exact combination of location, time and other factors. A small device on someone’s ear is not going to be able to store or process that amount of data, so a Bluetooth connection to a phone may be the answer, with the phone providing the storage and the processing power: the phone does the heavy data lifting, while the hearing aid collects the data and receives optimal setting suggestions back.
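
Here is a minimal sketch of that split, assuming such a pairing existed; the Bluetooth transport is reduced to a direct function call and the tuning rule is an invented placeholder, so this shows the shape of the arrangement rather than any real manufacturer’s API:

    # Sketch of the split: the aid collects, the phone stores and processes,
    # then sends suggested settings back. All names and rules are invented.

    class Phone:
        """Stores the aid's log entries and suggests settings from them."""

        def __init__(self):
            self.history = []

        def receive_log_entry(self, entry):
            self.history.append(entry)           # the phone does the storage
            return self.suggest_settings(entry)  # ...and the processing

        def suggest_settings(self, entry):
            # Placeholder rule: in noisy places, nudge the volume up a step.
            if entry["snr_db"] < 5:
                return {"program": "restaurant", "volume": entry["volume"] + 1}
            return {"program": entry["program"], "volume": entry["volume"]}

    class HearingAid:
        """Collects data and applies whatever the phone sends back."""

        def __init__(self, phone):
            self.phone = phone
            self.settings = {"program": "general", "volume": 5}

        def log_and_update(self, entry):
            # In reality this would travel over Bluetooth; here it is a direct call.
            self.settings = self.phone.receive_log_entry(entry)

    aid = HearingAid(Phone())
    aid.log_and_update({"snr_db": 3.0, "program": "general", "volume": 5})
    print(aid.settings)  # {'program': 'restaurant', 'volume': 6}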

Knowledge gained from this kind of data could improve the initial settings prescribed to new hearing aid users, which could help lower the number of patients giving up on their aids after a trial period. The initial settings would not be based solely on the audiogram and a few “how does it sound?” questions at the fitting; they would instead be set from the patient’s age, listening environments, general health profile and so on. The user wouldn’t have to wait two weeks of trying the aid out before visiting the clinic for program changes; the aid would immediately go to work making adjustments and improvements as the user’s data gets logged and compared against the existing big dataset.
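
As a hedged sketch of how such a prescription might start (the profiles, fields and settled-on settings below are invented, and a real system would use far richer data), one simple approach is to look up previous users whose profiles resemble the new patient and begin from the settings they ended up happiest with:

    # Sketch: choose a starting volume for a new user by averaging the
    # settled-on volumes of previous users with similar profiles.
    # All profiles, fields and numbers are invented.

    previous_users = [
        # (age, typical daily noise exposure in dB) -> volume they settled on
        ((72, 65), 7),
        ((68, 55), 6),
        ((45, 70), 5),
        ((70, 60), 7),
    ]

    def suggested_starting_volume(age, noise_db, k=2):
        """Average the volumes of the k most similar previous users."""
        def distance(user):
            (a, n), _ = user
            return (a - age) ** 2 + (n - noise_db) ** 2
        nearest = sorted(previous_users, key=distance)[:k]
        return round(sum(volume for _, volume in nearest) / k)

    print(suggested_starting_volume(age=71, noise_db=62))  # 7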

On a wider scale, if that hearing data is collected from every user who visits the same restaurant as Mary, it will start to show patterns that tell us more useful things: when is it best for someone with hearing problems to visit the diner? Where should they sit? Which settings do they need? And so on.

Big data is powerful; it will answer questions we haven’t even thought of yet.


Steve Claridge is a software engineer with substantial experience of building websites, desktop applications and other software. He has been wearing hearing aids for over 30 years – what started as a mild sensorineural loss has now progressed to a severe one. He is a co-founder of Audiology Engine and writes about his hearing loss at Hearing Aid Know.
