The system identifies faces and associates them with speech, giving hearing aids clear sound directionality.
Addressing the shortcomings of traditional hearing aids, a research team in Taiwan has integrated various technologies, including computer vision, specialized algorithms, and microphone arrays, to enhance sound directionality. The resulting device, primarily designed for individuals with mild to moderate hearing loss, is detailed in a study published on December 5 in the IEEE Sensors Journal.
The design features a dual-layer microphone array worn on the ears and a necklace-style wearable device incorporating a camera with computer vision AI. An algorithm helps the computer vision component identify faces in the environment and predict which face a given sound is coming from. When the speaker is outside the camera's view, a second algorithm takes over, estimating the sound's origin from its angle and time of arrival at the microphones.
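The article does not reproduce the team's localization algorithm, but the general time-of-arrival idea for a two-microphone pair can be sketched as follows. The microphone spacing, sample rate, and function names here are illustrative assumptions, not details from the study.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
MIC_SPACING = 0.18      # assumed distance (m) between left and right microphones

def estimate_delay(left, right, sample_rate):
    """Seconds by which the right channel lags the left, found via
    cross-correlation (positive delay suggests a source to the left)."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    return lag / sample_rate

def delay_to_angle(delay):
    """Convert an inter-microphone time difference to an angle off
    center, in degrees, under a far-field assumption:
    sin(theta) = c * dt / d."""
    sin_theta = np.clip(SPEED_OF_SOUND * delay / MIC_SPACING, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))
```

For example, a signal that reaches the right microphone four samples later than the left (at 16 kHz) yields a delay of 0.25 ms, which this geometry maps to roughly 28 degrees off center.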
The final step involves a mixing algorithm that adjusts the sound heard by users to enhance the perception of sound directionality and subsequently fine-tunes the volume for an immersive auditory experience.
Yi-Chun “Terry” Du, an associate professor of biomedical engineering at National Cheng Kung University (NCKU), underscores the significance of sound directionality for the quality of life and safety of individuals with hearing loss. He expresses the team’s hope to integrate this technology into the daily lives of elderly patients with hearing loss, thereby enhancing the overall life quality of those with mild-to-moderate hearing impairments.
Directional Hearing Aid Technology
In their investigation, Du’s team assessed the performance of the hearing aid in a cohort of 30 patients. The results indicated that study participants could accurately identify sound sources using the computer vision component of their hearing aids, achieving an accuracy rate of 94 percent or higher at distances typical of conversations (160 centimeters or less). Even when sound originated from an area detectable by the microphones but not the computer vision device, users still successfully identified the source with over 90 percent accuracy.
Du highlights the effectiveness of the mixing algorithm in adjusting the volume of the left and right channels, allowing users to determine the location of the sound source. In a separate study involving elderly patients, the combined technology enabled users to achieve a 100 percent success rate in a clinical directional test.
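The team's mixing algorithm itself is not published in the article; a minimal constant-power panning sketch illustrates the general idea of steering left/right channel volume by an estimated source angle. The panning law and all names here are assumptions for illustration, not the team's method.

```python
import numpy as np

def apply_directional_gain(mono, angle_deg):
    """Split a mono signal into left/right channels, with channel
    gains set by the estimated source angle (negative = left).
    Constant-power panning keeps overall loudness roughly steady."""
    # Map the angle from [-90, 90] degrees onto a pan position in [0, 1].
    pan = (np.clip(angle_deg, -90.0, 90.0) + 90.0) / 180.0
    left_gain = np.cos(pan * np.pi / 2)
    right_gain = np.sin(pan * np.pi / 2)
    return left_gain * mono, right_gain * mono
```

A source dead ahead (0 degrees) produces equal gains in both ears, while a source at -90 degrees routes the signal entirely to the left channel, which is the cue that lets a listener judge direction.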
While acknowledging the camera's limited field of view (75 degrees), Du says future versions could use wide-angle lenses or dual cameras to match the human eye's broader field of view (120 degrees), making the device better suited to daily use.
Although users in the study reported a significant improvement in their ability to hear and discern the direction of sounds, some expressed hesitancy to adopt the device, a common stance among potential hearing aid users. Du suggests that future research could explore the underlying reasons for this reluctance and develop ways to encourage wider hearing aid adoption.
Nevertheless, the current device developed by Du and his team exhibits advantages over existing aids, and early collaborations with companies interested in commercializing the product are underway.
Du says the team is also exploring extending the technology to help users recognize who is speaking. Plans include a computer vision-based smart reminder that uses facial recognition to remind users who they are conversing with. This could not only streamline conversations but also foster closer bonds between the user and the identified speaker.