Artificial intelligence in hearing systems
Artificial intelligence is a collective term for applications in which a machine (usually a powerful and complex computer) performs human-like services. This includes, for example, solving mathematical equations or logical reasoning. The attempt is made to build a kind of synthetic brain in the computer in the form of an artificial neural network. The focus here is on machine learning (ML): the computer should learn from the past and thus better assess future events. Highly complex formulas and equations, so-called algorithms, are created that convert all factors of a decision into numbers. These "calculation formulas" form the basis of today's artificial intelligence.
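The idea of "learning from the past" can be illustrated with a deliberately tiny sketch. Here a program learns the rule y = 2·x purely from example data by repeatedly nudging a single parameter to reduce its prediction error; real systems use far more complex models, but the principle is the same. All numbers are invented for illustration.

```python
# Toy machine learning: learn the rule y = 2 * x from past examples
# by gradient descent. A minimal sketch only, not a real hearing-system
# algorithm.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, expected output)

w = 0.0                  # the model's single parameter; starts "knowing" nothing
learning_rate = 0.05

for _ in range(200):                     # "learning from the past"
    for x, y in examples:
        prediction = w * x
        error = prediction - y
        w -= learning_rate * error * x   # nudge w to shrink the error

print(round(w, 2))  # w has converged close to 2.0: the rule was "learned"
```

After training, the parameter w has moved from 0 to (almost exactly) 2, so the model now predicts unseen inputs correctly; this is, in miniature, what "assessing future events from past data" means.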
A current example: The Tesla company builds self-driving electric cars. These cars have multiple cameras that continuously film the surroundings, creating a three-dimensional, virtual environment for the car. The computer in the car processes this data and independently decides how fast the car should drive and where it should steer. The car recognizes dangers, brakes on its own and can intervene in an emergency. In the factory, the car is "trained" with virtual traffic situations and thus has a repertoire of responses for almost every scenario. But the car also learns from every real driving situation and uses the results for future decisions.
What does that have to do with hearing aids, you may rightly ask?! A whole lot!
Modern hearing systems are marvels of technology and high-performance computers. They, too, work with such algorithms. A lot of information is stored in these algorithms about how the hearing system should adjust to different acoustic situations. Every acoustic stimulus has a kind of fingerprint and is clearly recognizable. For example, speech has a different pattern than tire noise or music. These individual patterns can then be recognized.
An example from hearing acoustics: In order to increase listening comfort in noisy situations and to improve speech understanding, the pattern of speech and the pattern of the (mostly low-frequency) background noise are analyzed first. Then the ranges (frequencies and levels) of the background noise that deviate from the stored speech pattern are determined. These are then filtered out.
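The band-by-band comparison described above can be sketched roughly as follows. This is illustrative only: the frequency bands, decibel values, margin and attenuation are all invented for the example and do not reflect any manufacturer's actual processing.

```python
# Sketch of pattern-based noise reduction: compare the measured level in
# each frequency band with a stored "speech pattern" and attenuate bands
# that are dominated by (mostly low-frequency) background noise.
# All numbers are invented for illustration.

speech_pattern = {   # typical speech energy per band in dB (assumed values)
    "125 Hz": 40, "500 Hz": 60, "1 kHz": 65, "2 kHz": 60, "4 kHz": 50,
}

measured = {         # current input levels per band in dB (assumed values)
    "125 Hz": 70,    # loud low-frequency rumble, e.g. tire noise
    "500 Hz": 62, "1 kHz": 66, "2 kHz": 61, "4 kHz": 50,
}

def reduce_noise(measured, speech_pattern, margin=5, attenuation=15):
    """Attenuate bands whose level exceeds the speech pattern by > margin dB."""
    out = {}
    for band, level in measured.items():
        if level - speech_pattern[band] > margin:
            out[band] = level - attenuation   # band deviates -> filter it out
        else:
            out[band] = level                 # band matches speech -> keep it
    return out

print(reduce_noise(measured, speech_pattern))
# the 125 Hz noise band is reduced from 70 dB to 55 dB;
# the speech-dominated bands pass through unchanged
```

Only the low-frequency band, whose level clearly deviates from the stored speech pattern, is attenuated; the bands carrying speech are left untouched, which is exactly the trade-off the text describes.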
Another example: Position sensors in the hearing systems can determine whether you are standing, lying down or moving! In this way, the hearing system can adjust the wind noise suppression or make other changes that improve speech understanding while you are moving.
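Such sensor-driven adjustment amounts to mapping a detected motion state to a set of feature settings. The sketch below shows the idea; the state names and settings are assumptions made up for this example, not any manufacturer's actual logic.

```python
# Sketch: a position/motion sensor reading drives hearing-system settings.
# States and setting values are invented for illustration.

def adjust_for_motion(state: str) -> dict:
    """Pick hearing-system settings based on the detected motion state."""
    if state == "moving":     # e.g. walking outdoors -> wind becomes a problem
        return {"wind_noise_suppression": "strong", "directionality": "wide"}
    if state == "lying":      # e.g. resting -> suppressors can stand down
        return {"wind_noise_suppression": "off", "directionality": "off"}
    # default: standing / sitting still
    return {"wind_noise_suppression": "moderate", "directionality": "focused"}

print(adjust_for_motion("moving"))
# {'wind_noise_suppression': 'strong', 'directionality': 'wide'}
```

The point is simply that the decision is automatic: the wearer never touches a control, the sensor reading alone selects the appropriate program.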
History and limits of technology:
Computers have been on a triumphant advance in our lives since the late 1990s. Computers defeated the world chess champion Kasparov and won quiz shows like "Jeopardy" (an American TV show) through pure computing power. At that point in time there were no AIs (artificial intelligences). There are now so-called "supercomputers" (not to be confused with quantum computers, which are a separate, still emerging technology). These are incredibly powerful and already help corporations like Amazon or Facebook to control the power supply of server farms (huge halls full of computers) and make it more efficient. The technology currently reaches its limits with creative thinking tasks; so far it has not been possible to create a "creative" supercomputer.