Robert Baumgartner

  • Time-Frequency Implementation of HRTFs (HRTF-Imp)

    The FWF project "Time-Frequency Implementation of HRTFs" has started.

    Principal Investigator: Damian Marelli

    Co-Applicants: Peter Balazs, Piotr Majdak

  • YIRG Dynamates: Dynamic auditory predictions in human and non-human primates

    Imagine yourself navigating through busy traffic, with cars, bikes, and pedestrians moving around you in all directions. In this and many other situations, one of the crucial aspects for survival is accurately determining when and where external events occur, and whether and how things move. To allow both fast and accurate responses to external events, the brain continuously generates predictions about the future. For example, your brain predicts where a car will be by the time you want to cross the road. Yet, humans are not the only species for whom it is beneficial to predict the location and timing of sounds. Other primates might share these abilities, as they also need to navigate dense forest areas. It remains unclear to which degree recent evolutionary pressures have shaped human abilities in these contexts.

    Furthermore, ambiguities in the sensory environment, due to noise or uncertainty about source properties, require the brain to generate several co-existing predictions and to decide among them. To date, most of the existing knowledge about perceptual decision making comes from studying visual tasks. Very little is known about this process in the auditory modality, even though it serves functions at least as important for survival and social behaviour.

    The YIRG Dynamates aims to address these current key challenges in auditory cognition by empirically testing and modelling sensory prediction mechanisms, compared across evolutionarily closely related species, in realistic but highly controllable virtual acoustic environments. Importantly, in humans, these mechanisms will also be investigated on a physiological level by means of high-density electroencephalography (EEG), allowing us to examine whether the resulting model complies with biological restrictions on neural computation. Hence, this project rests on a strong collaboration between cognitive neuroscience, cognitive biology, and computational modelling. To this end, Dynamates comprises an interdisciplinary research team consisting of experts in the fields of computational neuroscience and psychoacoustics (Robert Baumgartner), human EEG and sensory processing (Ulrich Pomper), and comparative cognition between animal species (Michelle Spierings).

    Dynamates will conduct the first systematic comparative study on dynamic auditory predictions in space and time in both human and non-human primates. The findings of this project will broaden our understanding of the underlying neural mechanisms in humans, which may help future efforts to improve treatments for individuals with impairments or pathological biases in decision making. Moreover, the resulting computational model may stimulate further research through testable predictions of decision making across species as well as in more complex economic and social contexts, and it may be directly applied to improve artificial intelligence and virtual reality systems.

  • AABBA - Aural Assessment By means of Binaural Algorithms

    AABBA is an open group of scientists collaborating on the development and applications of models of human spatial hearing.

    AABBA's goal is to promote exploration and development of binaural and spatial models and their applications.

    AABBA members are academic scientists willing to participate in our activities. We meet annually for open discussions and progress presentations, and especially encourage members to bring students and young scientists associated with their projects to our meetings. Our activities consolidate in joint publications and special sessions at international conferences. As a tangible outcome, we provide validated (source) code for published models of binaural and spatial hearing in our collection of auditory models, known as the Auditory Modeling Toolbox (AMT).


    • Executive board: Piotr Majdak, Armin Kohlrausch, Ville Pulkki

    • Regular members:
      • Aachen: Janina Fels, ITA, RWTH Aachen
      • Bochum: Dorothea Kolossa, Ruhr-Universität Bochum
      • Cardiff: John Culling, School of Psychology, Cardiff University
      • Copenhagen: Torsten Dau & Tobias May, DTU, Lyngby
      • Dresden: Ercan Altinsoy, TU Dresden
      • Ghent: Sarah Verhulst & Alejandro Osses, Ghent University
      • Guangzhou: Bosun Xie, South China University of Technology, Guangzhou
      • Helsinki: Ville Pulkki & Nelli Salminen, Aalto University
      • Ilmenau: Alexander Raake, TU Ilmenau
      • Kosice: Norbert Kopčo, Safarik University, Košice
      • London: Lorenzo Picinali, Imperial College, London
      • Lyon: Mathieu Lavandier, Université de Lyon
      • Munich I: Werner Hemmert, TUM München
      • Munich II: Bernhard Seeber, TUM München 
      • Oldenburg I: Bernd Meyer, Carl von Ossietzky Universität Oldenburg
      • Oldenburg II: Mathias Dietz, Carl von Ossietzky Universität Oldenburg
      • Oldenburg-Eindhoven: Steven van de Par & Armin Kohlrausch, Universität Oldenburg
      • Paris: Brian Katz, Sorbonne Université
      • Patras: John Mourjopoulos, University of Patras
      • Rostock: Sascha Spors, Universität Rostock
      • Sheffield: Guy Brown, The University of Sheffield
      • Tabriz: Masoud Geravanchizadeh, University of Tabriz
      • Toulouse: Patrick Danès, Université de Toulouse
      • Troy: Jonas Braasch, Rensselaer Polytechnic Institute, Troy
      • Vienna: Bernhard Laback & Robert Baumgartner, Austrian Academy of Sciences, Wien
      • The AMT (Umbrella Project): Piotr Majdak
    • Honorary member and founder: Jens Blauert

    AABBA group as of the 12th meeting 2020 in Vienna.


    Annual meetings are held at the beginning of each year:

    • 12th meeting: 16-17 January 2020, Vienna.
    • 11th meeting: 19-20 February 2019, Vienna.
    • 10th meeting: 30-31 January 2018, Vienna.
    • 9th meeting: 27-28 February 2017, Vienna.
    • 8th meeting: 21-22 January 2016, Vienna.
    • 7th meeting: 22-23 February 2015, Berlin.
    • 6th meeting: 17-18 February 2014, Berlin.
    • 5th meeting: 24-25 January 2013, Berlin.
    • 4th meeting: 19-20 January 2012, Berlin.
    • 3rd meeting: 13-14 January 2011, Berlin.
    • 2nd meeting: 29-30 September 2009, Bochum.
    • 1st meeting: 23-26 March 2009, Rotterdam.


    • Upcoming: Structured Session "Binaural models: development and applications" at the Forum Acusticum 2020, Lyon.
    • Special Session "Binaural models: development and applications" at the ICA 2019, Aachen.
    • Special Session "Models and reproducible research" at the Acoustics'17 (EAA/ASA) 2017, Boston.
    • Structured Session "The Technology of Binaural Listening & Understanding" at the ICA 2016, Buenos Aires.
    • Structured Session "Applied Binaural Signal Processing" at the Forum Acusticum 2014, Kraków.

    Contact person: Piotr Majdak

  • Born2Hear: Development and Adaptation of Auditory Spatial Processing Across the Human Lifespan

    The auditory system constantly monitors the environment to protect us from harmful events such as collisions with approaching objects. Auditory looming bias is an astoundingly fast perceptual bias favoring approaching over receding auditory motion, demonstrated behaviorally even in infants as young as four months of age. The role of learning in developing this perceptual bias and its underlying mechanisms are yet to be investigated. Supervised learning and statistical learning are two distinct mechanisms enabling neural plasticity. In the auditory system, statistical learning refers to the implicit ability to extract and represent regularities, such as frequently occurring sound patterns or frequent acoustic transitions, with or without attention, while supervised learning refers to the ability to attentively encode auditory events based on explicit feedback. It is currently unclear how these two mechanisms are involved in learning auditory spatial cues at different stages of life. While newborns already possess basic skills of spatial hearing, adults are still able to adapt to changing circumstances such as modifications of spectral-shape cues. Spectral-shape cues arise naturally when the complex geometry, especially of the human pinna, shapes the spectrum of an incoming sound depending on its source location. Auditory stimuli lacking familiar spectral-shape cues are often perceived to originate from inside the head instead of being perceived as naturally external sound sources. Changes in the salience or familiarity of spectral-shape cues can thus be used to elicit auditory looming bias. The importance of spectral-shape cues for both auditory looming bias and auditory plasticity makes them ideal to study together.

    Born2Hear project overview

    Born2Hear will combine auditory psychophysics and neurophysiological measures in order to 1) identify auditory cognitive subsystems underlying auditory looming bias, 2) investigate principal cortical mechanisms for statistical and supervised learning of auditory spatial cues, and 3) reveal cognitive and neural mechanisms of auditory plasticity across the human lifespan. These general research questions will be addressed within three studies. Study 1 will investigate the differences in the bottom-up processing of different spatial cues and the top-down attention effects on auditory looming bias by analyzing functional interactions between brain regions in young adults, and will then test in newborns whether these functional interactions are innate. Study 2 will investigate the cognitive and neural mechanisms of supervised learning of spectral-shape cues in young and older adults based on an individualized perceptual training on sound source localization. Study 3 will focus on the cognitive and neural mechanisms of statistical learning of spectral-shape cues in infants as well as young and older adults.

    Key publication: Baumgartner, R., Reed, D.K., Tóth, B., Best, V., Majdak, P., Colburn H.S., Shinn-Cunningham B. (2017): Asymmetries in behavioral and neural responses to spectral cues demonstrate the generality of auditory looming bias, in: Proceedings of the National Academy of Sciences of the USA 114, 9743-9748.

    Project investigator (PI): Robert Baumgartner

    Project partner / Co-PI: Brigitta Tóth, Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary

    Collaboration partners:

    Duration: April 2020 - March 2024

    Supported by the Austrian Science Fund (FWF, I 4294-B) and NKFIH.

  • LocaPhoto

    Virtual Acoustics: Localization Model & Numeric Simulations (LocaPhoto)

    LocaPhoto consisted of three parts: geometry acquisition, HRTF calculation, and HRTF evaluation by means of a sound-localization model.


    Geometry acquisition

    First, we evaluated the potential of various 3-D scanners by comparing the 3-D meshes obtained for several listeners (Reichinger et al., 2013). As a general means of comparison, we created "reference" meshes by taking silicone impressions of listeners' ears and scanning them in a high-energy computed-tomography scanner. While generally capable, not all 3-D scanners were able to obtain meshes of the required quality, thus limiting their applicability in practical end-user situations.

    Further, we worked on a procedure to generate 3-D meshes directly from 2-D photos by means of photogrammetric-reconstruction algorithms. Under selected conditions, we obtained 3-D meshes allowing us to calculate perceptually valid HRTFs (publication in preparation).

    HRTF calculation

    While working on the geometry acquisition, we developed, implemented, and evaluated a procedure to efficiently calculate HRTFs from a 3-D mesh. The software package Mesh2HRTF consists of a Blender plugin for mesh preparation, an executable application based on boundary-element methods, and a Matlab tool for HRTF post-processing (Ziegelwanger et al., 2015a). The evaluation was done by comparing HRTFs calculated for reference meshes to acoustically measured HRTFs; differences between the various conditions were evaluated by means of model predictions and sound-localization experiments. We showed that, in the proximity of the ear canal, meshes with an average edge length of 1 mm or less are required, and that a small area serving as the virtual microphone in the calculations yields the best results (Ziegelwanger et al., 2015b).
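    The 1-mm criterion above can be checked directly on a candidate mesh. The following is a minimal illustrative sketch (Python/NumPy, not part of Mesh2HRTF) that computes the average edge length of a triangle mesh; `vertices` and `faces` are hypothetical inputs standing in for a loaded scan:

```python
import numpy as np

def average_edge_length(vertices, faces):
    """Average edge length of a triangle mesh.

    vertices: (N, 3) array of coordinates (e.g. in mm)
    faces: iterable of (i, j, k) vertex-index triples
    """
    edges = set()
    for a, b, c in faces:
        for i, j in ((a, b), (b, c), (c, a)):
            edges.add((min(i, j), max(i, j)))  # undirected, deduplicated
    lengths = [np.linalg.norm(vertices[i] - vertices[j]) for i, j in edges]
    return float(np.mean(lengths))

# Toy mesh: a single right triangle with two unit edges
vertices = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = [(0, 1, 2)]
mean_edge = average_edge_length(vertices, faces)
```

    In a mesh-grading workflow one would evaluate such a statistic locally (e.g., near the ear canal, where the 1-mm requirement applies) rather than globally.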

    To further improve the calculations, we applied non-uniform a-priori mesh grading to the HRTF calculations. This method reduces the number of elements in the mesh to as few as 10,000 while still yielding perceptually valid HRTFs (Ziegelwanger et al., 2016). With this method, HRTFs can be calculated in less than an hour.

    HRTF evaluation

    Given the huge number of parameters in the numerical calculations, hundreds of calculated HRTF sets had to be tested. The evaluation of HRTF quality is a complex task because it involves many percepts such as directional sound localization, sound externalization, apparent source width, distance perception, timbre changes, and others. Ideally, one would like HRTFs that generate virtual auditory scenes as realistic as natural scenes. While a model evaluating some kind of "degree of realism" was out of reach, we focused on a very important and well-explored aspect: directional sound localization.

    For sound localization in the lateral dimension (left/right), not many aspects require HRTF individualization. The listener-specific interaural time difference (ITD), i.e., the broadband interaural difference between the sound's times of arrival at the two ears, can contribute, though. Thus, we first created a 3-D time-of-arrival model able to describe the ITD with a few parameters derived from a listener's HRTFs (Ziegelwanger and Majdak, 2014).
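    The Ziegelwanger and Majdak (2014) model fits a parametric time-of-arrival surface to the HRTFs; as a simpler illustration of the quantity being modeled, a broadband ITD can be estimated from a measured head-related impulse-response (HRIR) pair by cross-correlation. The sketch below (Python/NumPy, synthetic data, not the published model) does exactly that:

```python
import numpy as np

def broadband_itd(hrir_left, hrir_right, fs):
    """Broadband ITD estimate (s) via the lag of maximum cross-correlation.

    Positive values mean the sound arrives at the right ear first
    (i.e., the left-ear impulse response lags the right-ear one).
    """
    xcorr = np.correlate(hrir_left, hrir_right, mode="full")
    lag = int(np.argmax(np.abs(xcorr))) - (len(hrir_right) - 1)
    return lag / fs

# Synthetic HRIR pair: left-ear arrival delayed by 20 samples
fs = 48000
right = np.zeros(256)
right[40] = 1.0
left = np.roll(right, 20)
itd = broadband_itd(left, right, fs)  # 20 / 48000 s, about 417 microseconds
```

    Real HRIRs are not single impulses, so robust estimators low-pass filter or window the responses first; the cross-correlation lag nevertheless conveys the broadband ITD that the parametric model compresses into a few listener-specific parameters.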

    For sound localization in sagittal planes (top/down, front/back), the individualization of HRTFs is a major issue. The process of sagittal-plane localization is still not completely understood, but the role of the dorsal cochlear nucleus (DCN) was already known at the beginning of LocaPhoto. Thus, in LocaPhoto, we developed a model able to predict sagittal-plane sound-localization performance based on the spectral processing found in the DCN. It was rigorously evaluated in various conditions and was found to predict listener-specific localization performance quite well (Baumgartner et al., 2014).

    In LocaPhoto, this model allowed us to evaluate many numerically calculated HRTFs. It also allowed us to uncover surprising properties of human sound localization (Majdak et al., 2014). It is implemented in the Auditory Modeling Toolbox (Søndergaard and Majdak, 2013). It has been used for various evaluations (Baumgartner et al., 2013), such as the positioning of loudspeakers in loudspeaker-based sound reproduction (Baumgartner and Majdak, 2015), and it serves as the basis for a 3-D sound-localization model (Altoè et al., 2014) and a model addressing sensorineural hearing loss (Baumgartner et al., 2016).


    Funding: Austrian Science Fund (FWF, P 24124-N13)


    Duration: February 2012 - October 2016


    • Baumgartner, R., Majdak, P., Laback, B. (2016): Modeling the Effects of Sensorineural Hearing Loss on Sound Localization in the Median Plane, in: Trends in Hearing 20, 1-11.
    • Ziegelwanger, H., Kreuzer, W., Majdak, P. (2016): A priori mesh grading for the numerical calculation of the head-related transfer functions, in: Applied Acoustics 114, 99 - 110.  
    • Baumgartner, R., Majdak, P. (2015): Modeling Localization of Amplitude-Panned Virtual Sources in Sagittal Planes, in: J. Audio Eng. Soc 63, 562-569.
    • Ziegelwanger, H., Kreuzer, W., Majdak, P. (2015): Mesh2HRTF: An open-source software package for the numerical calculation of head-related transfer functions, in: Proceedings of the 22nd International Congress on Sound and Vibration (ICSV). Florence, Italy, 1-8.
    • Ziegelwanger, H., Majdak, P., Kreuzer, W. (2015): Numerical calculation of head-related transfer functions and sound localization: Microphone model and mesh discretization, in: The Journal of the Acoustical Society of America 138, 208-222.  
    • Altoè, A., Baumgartner, R., Majdak, P., Pulkki, V. (2014): Combining count-comparison and sagittal-plane localization models towards a three-dimensional representation of sound localization, in: Proceedings of the 7th Forum Acusticum. Krakow, Poland, 1-6.
    • Baumgartner, R., Majdak, P., Laback, B. (2014): Modeling Sound-Source Localization in Sagittal Planes for Human Listeners, in: The Journal of the Acoustical Society of America 136, 791-802.
    • Majdak, P., Baumgartner, R., Laback, B. (2014): Acoustic and non-acoustic factors in modeling listener-specific performance of sagittal-plane sound localization, in: Frontiers in Psychology 5, 319(1-10).
    • Baumgartner, R., Majdak, P., Laback, B. (2013): Assessment of sagittal-plane sound localization performance in spatial-audio applications, in: Blauert, J. (ed.), The Technology of Binaural Listening. Berlin-Heidelberg-New York (Springer), 93-119.
    • Reichinger, A., Majdak, P., Sablatnig, R., Maierhofer, S. (2013): Evaluation of Methods for Optical 3-D Scanning of Human Pinnas, in: Proceedings of the 3D Vision Conference 2013, Third Joint 3DIM/3DPVT Conference. Seattle, WA, 390-397.
    • Søndergaard, P., Majdak, P. (2013): The Auditory Modeling Toolbox, in: Blauert, J. (ed.), The Technology of Binaural Listening. Berlin-Heidelberg-New York (Springer), 33-56.

    Contact for more information:

    Piotr Majdak (Principal Investigator)

    Michael Mihocic (HRTF measurement)

  • SpExCue: Role of spectral cues in sound externalization - objective measures & modeling

    Baumgartner et al. (2017a)

    Spatial hearing is important to monitor the environment for interesting or hazardous sounds and to selectively attend to them. The spatial separation between the two ears and the complex geometry of the human body provide auditory cues about the location of a sound source. Depending on where a sound is coming from, the pinna (or auricle) changes the sound spectrum before the sound reaches the eardrum. Since the shape of a pinna is highly individual (even more so than a fingerprint), it also affects the spectral cues in a very individual manner. To produce realistic auditory perception artificially, this individuality needs to be reflected as precisely as necessary, although the actual requirements are currently unclear. That is why SpExCue set out to find electrophysiological measures and prediction models of how spatially realistic ("externalized") a virtual sound source is perceived to be.

    Virtual and augmented reality (VR/AR) systems aim to immerse a listener into a well-externalized 3D auditory space. This requires a perceptually accurate simulation of the listener’s natural acoustic exposure. Particularly challenging is to appropriately represent the high-frequency spectral cues induced by the pinnae. To simplify this task, we aim at developing a phenomenological computational model for sound externalization with a particular focus on spectral cues. The model will be designed to predict the listener’s degree of externalization based on binaural input signals and the listener’s individual head-related transfer functions (HRTFs) under static listening conditions.

    The naturally externalized auditory perception can be disrupted, for instance, when listening via headphones or hearing-assistive devices, and instead sounds are heard inside the head. Because of this change in externalization or perceived distance, our investigations of spectral cues also served to study the phenomenon of auditory looming bias (Baumgartner et al., 2017 PNAS): sounds approaching the listener are perceived more intensely than those that are receding from the listener. Previous studies demonstrated auditory looming bias exclusively by loudness changes (increasing/decreasing loudness used to simulate approaching/receding sounds). Hence, it was not clear whether this bias truly reflects perceptual differences in sensitivity to motion direction rather than changes in loudness. Our spectral cue changes were perceived as either approaching or receding at steady loudness and evoked auditory looming bias both on a behavioral level (approaching sounds easier to recognize than receding sounds) and an electrophysiological level (larger neural activity in response to approaching sounds). Therefore, our study demonstrated that the bias is truly about perceived motion in distance, not loudness changes.

    Further, SpExCue investigated how the combination of different auditory spatial cues affects attentional control in a speech recognition task with simultaneous talkers, which requires spatial selective attention as in a cocktail party (Deng et al., in prep). We found that natural combinations of auditory spatial cues caused larger neural activity in preparation for the test signal and optimized the neural processing of the attended speech.

    SpExCue also compared different computational modeling approaches that aim to predict the effect of spectral cue changes on how spatially realistic a sound is perceived (Baumgartner et al., 2017 EAA-ASA). Although many previous experimental results could be predicted by at least one of the models, none of them alone could explain these results. In order to assist the future design of more general computational models for spatial hearing, we finally created a conceptual cognitive model for the formation of auditory space (Majdak et al., in press).


    Erwin-Schrödinger Fellowship from Austrian Science Funds (FWF, J3803-N30) awarded to Robert Baumgartner. Duration: May 2016 - November 2017.

    Follow-up funding provided by Facebook Reality Labs, since March 2018. Project Investigator: Robert Baumgartner.


    • Baumgartner, R., Reed, D.K., Tóth, B., Best, V., Majdak, P., Colburn H.S., Shinn-Cunningham B. (2017): Asymmetries in behavioral and neural responses to spectral cues demonstrate the generality of auditory looming bias, in: Proceedings of the National Academy of Sciences of the USA 114, 9743-9748. (article)
    • Baumgartner, R., Majdak, P., Colburn H.S., Shinn-Cunningham B. (2017): Modeling Sound Externalization Based on Listener-specific Spectral Cues, presented at: Acoustics ‘17 Boston: The 3rd Joint Meeting of the Acoustical Society of America and the European Acoustics Association. Boston, MA, USA. (conference)
    • Deng, Y., Choi, I., Shinn-Cunningham, B., Baumgartner, R. (2019): Impoverished auditory cues limit engagement of brain networks controlling spatial selective attention, in: Neuroimage 202, 116151. (article)
    • Baumgartner, R., Majdak, P. (2019): Predicting Externalization of Anechoic Sounds, in: Proceedings of ICA 2019. (proceedings)
    • Majdak, P., Baumgartner, R., Jenny, C. (2019): Formation of three-dimensional auditory space, in: arXiv:1901.03990 [q-bio]. (preprint)
  • Time Frequency Virtual Acoustics (TF-VA)


    In the context of binaural virtual acoustics, a sound source is positioned in a free-field 3-D space around the listener by filtering it through head-related transfer functions (HRTFs). In real-time applications, numerous HRTFs need to be processed. The long impulse responses of the HRTFs require high computational power, which is difficult to provide on current processors in situations involving more than a few simultaneous sources.
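    To make the computational burden concrete: rendering a single static source amounts to one long FIR convolution per ear. A minimal NumPy sketch with synthetic placeholder data (real HRIRs would come from a measured HRTF set):

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(0)

source = rng.standard_normal(fs)       # 1 s of a mono source signal
hrir_left = rng.standard_normal(256)   # placeholders for measured HRIRs
hrir_right = rng.standard_normal(256)

# One long FIR convolution per ear and per source:
left_ear = np.convolve(source, hrir_left)
right_ear = np.convolve(source, hrir_right)
binaural = np.stack([left_ear, right_ear])  # shape: (2, len(source) + 255)

# Direct convolution costs on the order of len(source) * len(hrir)
# multiply-adds per ear, so the load scales linearly with the number
# of simultaneous sources.
```

    With moving sources the HRIRs must additionally be updated and interpolated per block, which further motivates the efficient implementations studied in this project.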

    Technically speaking, an HRTF is a linear time-invariant (LTI) system. An LTI system can be implemented in the time domain by direct convolution or recursive filtering, but this approach is computationally inefficient. A computationally efficient alternative implements the system in the frequency domain; however, this approach is not suitable for real-time applications since it introduces a very large delay. A compromise between the two approaches is provided by the family of segmented-FFT methods, which permit a trade-off between latency and computational complexity. As an alternative, the sub-band method can represent linear systems in the time-frequency domain. Recent work has shown that the sub-band method offers an even better trade-off between latency and computational complexity than segmented-FFT methods. However, the sub-band analysis is still mathematically challenging, and its optimum configuration depends on the application under consideration.
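    The latency/complexity trade-off of the segmented-FFT family can be illustrated with a plain overlap-add implementation: the input is processed in blocks, so output becomes available after one block rather than after the whole signal, while each block costs only an FFT, a spectral multiplication, and an inverse FFT. A minimal sketch (Python/NumPy; the sub-band method itself is more involved):

```python
import numpy as np

def segmented_fft_convolve(x, h, block_len):
    """FIR filtering by overlap-add: equivalent to np.convolve(x, h),
    but processes x in blocks of block_len samples, so the latency is
    one block instead of the full signal length."""
    n_fft = 1
    while n_fft < block_len + len(h) - 1:  # avoid circular aliasing
        n_fft *= 2
    H = np.fft.rfft(h, n_fft)              # filter spectrum, computed once
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block_len):
        block = x[start:start + block_len]
        seg = np.fft.irfft(np.fft.rfft(block, n_fft) * H, n_fft)
        end = min(start + n_fft, len(y))
        y[start:end] += seg[:end - start]  # overlap-add the block result
    return y
```

    Smaller blocks reduce latency but raise the per-sample FFT cost; the sub-band method pushes this trade-off further by operating directly in a time-frequency domain.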


    TF-VA develops and investigates new techniques for configuring the sub-band method by using advanced optimization methods in a functional-analysis context. As a result, an optimization technique that minimizes the computational complexity of the sub-band method will be obtained.

    Two approaches will be considered: the first designs the time-frequency transform to minimize the complexity of each HRTF. In the second, we will design a single time-frequency transform to be used for a joint implementation of all HRTFs of a listener. This will permit an efficient implementation of interpolation techniques while sources move spatially in real time. The results will be evaluated in subjective localization experiments and in terms of localization models.


    • Main participant: Damian Marelli (University of Newcastle, Australia)
    • Co-applicants: Peter Balazs, Piotr Majdak
    • Project begin: November 2011
    • Funding: Lise-Meitner-Programm of the Austrian Science Fund (FWF) [M 1230-N13]