AES-ARI guest talk by Michele Geronazzo, University of Padova
Friday, 6th of December 2013
Seminar room of the OEAW, 1040 Wien, Wohllebengasse 12-14, ground floor
In this talk, recent research activities of the Sound and Music Computing group at the University of Padova are presented, focusing on binaural audio through headphones and individualized virtual auditory displays. One of the major reasons why such technologies are excluded from commercial virtual- and augmented-reality applications lies in the lack of individualization of the entire rendering process.
Since measuring individual head-related transfer functions (HRTFs) is both time- and resource-intensive, obtaining reliable HRTFs for a particular subject in different and more convenient ways is desirable. A common practice employs the trivial selection of a single HRTF set for all listeners, leading to severe localization errors and a lack of externalization. To overcome these limitations, a novel framework for synthetic HRTF design and customization is presented, combining the structural modeling paradigm with other HRTF selection techniques: The mixed structural modeling (MSM) approach regards the global HRTF as a combination of structural components (e.g. head, torso and external ear), each of which can be either a synthetic or a recorded component. In both cases, customization is based on individual anthropometric data, which are used either to fit the model parameters or to select a recorded component from a set of available responses.
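The idea of composing a global HRTF from structural components can be sketched as follows. This is an illustrative toy model, not the speaker's actual implementation: the component filters (a first-order head-shadow approximation in the spirit of Brown and Duda's spherical-head model, and a Gaussian-shaped pinna notch) and all parameter values are assumptions chosen for clarity.

```python
import numpy as np

# Toy sketch of the mixed structural modeling (MSM) idea: the global HRTF
# magnitude is assembled as a product of per-component responses, each of
# which could in principle be synthetic (fit to anthropometry) or selected
# from recorded data. All names and parameters here are illustrative.

def spherical_head_shadow(freqs, head_radius, theta, c=343.0):
    """Head-shadow magnitude: one-pole/one-zero approximation of a
    spherical head, parameterized by head radius and azimuth theta."""
    w0 = c / head_radius                     # corner frequency (rad/s)
    alpha = 1.05 + 0.95 * np.cos(theta)      # direction-dependent zero gain
    w = 2.0 * np.pi * freqs
    return np.abs((1j * w * alpha / w0 + 1.0) / (1j * w / w0 + 1.0))

def pinna_notch(freqs, f_notch, depth_db, bw):
    """Synthetic pinna-reflection notch centred at f_notch (Hz)."""
    floor = 10.0 ** (-depth_db / 20.0)
    dip = np.exp(-0.5 * ((freqs - f_notch) / bw) ** 2)
    return 1.0 - (1.0 - floor) * dip

def mixed_structural_hrtf(freqs, head_radius, theta, f_notch):
    # MSM composition: multiply component magnitude responses.
    return (spherical_head_shadow(freqs, head_radius, theta)
            * pinna_notch(freqs, f_notch, depth_db=20.0, bw=1000.0))

freqs = np.linspace(100.0, 16000.0, 512)
H = mixed_structural_hrtf(freqs, head_radius=0.0875, theta=0.0,
                          f_notch=7000.0)  # hypothetical parameter values
```

A recorded component would simply replace one of the factors with a measured response selected by anthropometric similarity, which is what makes the approach "mixed".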
Our models for real-time HRTF synthesis allow separate control of the different acoustic phenomena involved in spatial sound perception, exploiting image-based extraction of relevant anthropometric features. Interfaces for non-sighted users are analysed as one of the application domains in which these technologies are currently being tested.
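One way an image-extracted anthropometric feature can drive a model parameter is via a simple reflection argument: if a pinna contour lies at distance d from the ear-canal entrance, the reflected path delays the sound by 2d/c, producing a spectral notch near f = c / (2 d). The function below is a hypothetical illustration of this mapping, not the specific procedure used in the talk.

```python
# Hypothetical feature-to-parameter mapping: under a simple negative-
# reflection assumption, a pinna contour at distance d (metres) from the
# ear canal yields a first spectral notch near f = c / (2 * d).

def notch_frequency_from_contour(d_metres, c=343.0):
    """Estimate the pinna-notch centre frequency (Hz) from an
    image-measured ear-canal-to-contour distance."""
    return c / (2.0 * d_metres)

f_n = notch_frequency_from_contour(0.025)  # 2.5 cm contour distance
# f_n = 6860.0 Hz
```

Because distances like d can be measured from a single photograph of the pinna, such mappings let the rendering be individualized without acoustic measurement.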