[Figure: JitterITD_ILD]

We study basic auditory perception in normal-hearing (NH) and hearing-impaired people. We focus in particular on spatial hearing in NH listeners and on bilateral hearing via cochlear implants (CIs), devices that aim to restore hearing in profoundly deaf people.

In the case of bilateral implantation, left-right localization of sound sources can already be achieved with clinically available CIs. An open issue is three-dimensional localization, in particular resolving confusions between front and back. In normal-hearing listeners, spectral cues, described by head-related transfer functions (HRTFs), help to localize sounds in all three dimensions. Fast and reliable front-back orientation is especially important for implanted children navigating dangerous environments such as heavy-traffic roads. Using acoustic measurements and numerical simulations, our research aims to transmit spatial cues to the auditory system via the implant electrodes, enabling CI listeners to localize sound sources in sagittal planes. See the project "Spectral cues in auditory localization with cochlear implants".

Interaural time differences (ITDs) are the most salient cues for determining the lateral position of a sound source. ITDs are also important for speech perception in noisy conditions. In CI listeners, ITD sensitivity is limited. We investigate the perception of ITDs in NH listeners and develop algorithms to improve ITD sensitivity in CI listeners. See our ongoing research line "Perception of interaural time differences".
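As an illustration of how this cue operates, here is a minimal sketch (assuming NumPy/SciPy; the function `estimate_itd` and its parameters are hypothetical, not part of our tools) that estimates the ITD of a binaural signal as the lag maximizing the interaural cross-correlation, restricted to physiologically plausible lags:

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_itd(left, right, fs, max_itd=800e-6):
    """Estimate the ITD (in seconds) as the lag of the peak of the
    interaural cross-correlation, restricted to |lag| <= max_itd.
    Positive values mean the left-ear signal lags, i.e., the source
    is on the right side."""
    xcorr = correlate(left, right, mode="full")
    lags = correlation_lags(len(left), len(right), mode="full")
    valid = np.abs(lags) <= int(round(max_itd * fs))
    return lags[valid][np.argmax(xcorr[valid])] / fs

# Example: noise delayed by 500 us at the left ear (source on the right).
fs = 48000
noise = np.random.default_rng(0).standard_normal(2400)
d = int(round(500e-6 * fs))               # 24 samples at 48 kHz
left = np.concatenate([np.zeros(d), noise])
right = np.concatenate([noise, np.zeros(d)])
print(estimate_itd(left, right, fs))      # approx. 5e-04 s
```

Broadband signals such as noise yield an unambiguous cross-correlation peak; for narrowband signals the cross-correlation is periodic, so restricting the lag range matters.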

By investigating the basic mechanisms involved in spatial hearing, we seek methods to improve the spatial experience when listening to sounds such as speech and music. Our Lab Facilities allow the measurement of listener-specific HRTFs and the creation of virtual binaural acoustics by filtering sounds with HRTFs and presenting the signals via headphones. Our setup for sound-localization experiments includes a real-time virtual visual environment, allowing arbitrary manipulations of HRTFs to be tested in combination with audio-visual interactions. The research involves methods for capturing HRTFs, efficient implementations of binaural acoustics, and the development of sound-localization models.
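As a minimal sketch of that binaural synthesis (assuming NumPy/SciPy; the HRIRs, i.e., the time-domain counterparts of the HRTFs, are placeholders that would in practice come from a measured set, e.g., a SOFA file):

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Filter a mono signal with the left- and right-ear head-related
    impulse responses (HRIRs) of one direction, yielding a two-channel
    signal for headphone playback."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)   # shape: (n_samples, 2)

# Hypothetical usage with dummy HRIRs (pure delays); real HRIRs would
# be taken from a measured HRTF set for the desired source direction.
fs = 48000
mono = np.random.default_rng(1).standard_normal(fs)  # 1 s of noise
hrir_left = np.zeros(256)
hrir_left[10] = 1.0    # left ear: 10-sample delay
hrir_right = np.zeros(256)
hrir_right[34] = 1.0   # right ear delayed 24 samples more (0.5-ms ITD)
binaural = render_binaural(mono, hrir_left, hrir_right)
```

Real-time systems typically replace this one-shot convolution with partitioned (block) convolution so that the HRTF filters can be exchanged as the listener moves.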

Current main projects: check our projects list for more information.
