Hearing aids benefit their users via AI technologies, but it is increasingly hard to find the AI talent to work on these devices.


Hearing aids are perhaps one of the most personal examples of real-time technology in action. Now they are getting even more powerful, thanks to the introduction of artificial intelligence (AI). However, the talent required to build and support such systems is currently in short supply.

AI isn’t necessarily a new concept for hearing aids, Dr. Chris Heddon of Resonance Medical pointed out in a recent post. “Before the market for AI researchers became white hot, hearing aid companies had been working on various AI and machine learning approaches for quite some time,” he states. Lately, however, AI technology has become more ubiquitous, and prices for tools and solutions have dropped dramatically. “The removal of specific technological constraints, combined with the hearing aid industry’s need to address new and disruptive service delivery models, indicates that the time to bring AI to the hearing care market is now.”

For starters, Widex recently announced its Evoke hearing aid, which uses AI and machine learning to give “users the ability to employ real-time machine learning that can solve the tricky hearing problems that users face in their daily lives.” Evoke’s smartphone AI app, called SoundSense Learn, is designed to help end users adjust their hearing aids precisely in the moment, something no human can replicate to the same degree of accuracy. As the company puts it: “Most hearing aids give users the ability to customize their sound experience by adjusting frequency bands to boost or cut bass, middle or high tones. Adjusting frequencies works well in many situations once the initial settings have been set by a skilled audiologist. However, some situations are so complex that hitting the right combination of adjustments can be difficult.”
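To make the frequency-band adjustment concrete, here is a minimal sketch, not taken from any vendor's firmware, of boosting or cutting bass, middle and high tones on an audio buffer. The band edges, gain values and sample rate are illustrative assumptions, not Widex parameters.

```python
# Illustrative sketch: apply user-adjustable gains to bass / middle / treble
# bands of a mono signal, the kind of frequency-band control described above.
import numpy as np

def apply_band_gains(samples, sample_rate, bass_db=0.0, mid_db=0.0, treble_db=0.0):
    """Boost or cut three coarse frequency bands of a mono signal."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)

    # Assumed band edges: bass < 250 Hz, mid 250 Hz - 4 kHz, treble > 4 kHz.
    gains = np.ones_like(freqs)
    gains[freqs < 250] *= 10 ** (bass_db / 20)
    gains[(freqs >= 250) & (freqs < 4000)] *= 10 ** (mid_db / 20)
    gains[freqs >= 4000] *= 10 ** (treble_db / 20)

    return np.fft.irfft(spectrum * gains, n=len(samples))

# Example: cut bass by 3 dB and boost treble by 6 dB on one second of noise.
rate = 16_000
audio = np.random.randn(rate)
adjusted = apply_band_gains(audio, rate, bass_db=-3.0, treble_db=6.0)
```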

The SoundSense Learn app is connected to the Evoke hearing aids and “uses machine learning to guide the users in optimizing the settings to their exact needs,” according to Widex. “The app gathers a variety of anonymous data such as how often they turn the volume up or down, which sound presets they use and how many custom settings they create – including those made with SoundSense Learn.” SoundSense Learn employs a machine learning algorithm together with reinforcement learning that enables the algorithm to learn in the moment. “The algorithm learns an optimal setting every time a user finds the sound to be a little below expectations in a given sound environment. It learns these settings by simply asking the user to compare two settings that are carefully picked by the algorithm. This allows it to learn an optimal setting in a new environment very fast.”
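To illustrate how repeated A/B comparisons can steer a setting toward a listener's preference, here is a minimal sketch. It is not Widex's algorithm: the "user" is simulated by a hidden preferred setting, the two parameters (bass and treble gain in dB) are illustrative, and the search is simple hill climbing rather than the company's actual method.

```python
# Illustrative sketch: learn a hearing-aid setting from pairwise A/B answers.
import numpy as np

rng = np.random.default_rng(0)
hidden_preference = np.array([-2.0, 5.0])   # the setting the simulated user likes best

def user_prefers_a(setting_a, setting_b):
    """Simulated A/B answer: True if A sounds closer to the user's preference."""
    return np.linalg.norm(setting_a - hidden_preference) < np.linalg.norm(setting_b - hidden_preference)

def learn_setting(n_comparisons=25, step=4.0):
    """Refine a setting by repeatedly asking the user to compare two options."""
    best = np.zeros(2)                                       # start from a neutral setting
    for _ in range(n_comparisons):
        candidate = best + rng.normal(scale=step, size=2)    # propose a nearby alternative
        if user_prefers_a(candidate, best):
            best = candidate                                 # keep whichever option the user chose
        step *= 0.9                                          # narrow the search as answers accumulate
    return best

print("learned setting:", learn_setting())   # moves toward the hidden preference
```

Each round asks only for a comparison, not a numeric rating, which is why this kind of procedure can converge on a usable setting in a new environment after relatively few questions.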

 

Copyright by www.rtinsights.com