The rapid increase in available computing power and the fast evolution of audio interfacing and transmission technologies have led to a new generation of immersive audio systems that reproduce spatial sound over surrounding loudspeaker arrays. Many of these approaches require a precise and robust space-time-frequency analysis of sound fields. This joint project of ARI and IRCAM combines the mathematical concepts provided by ARI with IRCAM's profound expertise in real-time signal processing and acoustics. It addresses fundamental research questions in both fields and aims to develop improved methods for the target applications mentioned above.

The main questions that this project addresses are:

  • Is it possible to apply frame-based signal-processing tools to a predefined geometrical arrangement of microphones and/or loudspeakers (e.g. to the 64-channel spherical microphone array that is currently under development at IRCAM)?
  • How can acoustic fields on the sphere (e.g. measured with a spherical microphone array) be represented with frames in order to gain better control of the space-time-frequency resolution on different parts of the sphere?
  • Is it possible to apply this multi-resolution space-time-frequency representation to room acoustic sensing with multichannel spherical microphone arrays (e.g. to measure the spatial distribution of early reflections with higher resolution than that provided by spherical harmonic analysis)?
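To make the baseline concrete, the spherical harmonic analysis mentioned in the last question can be sketched as a least-squares fit of spherical harmonic coefficients to pressure samples taken at the sensor directions of a spherical array. The sketch below is illustrative only: the 64 sensor directions, the truncation order, and the random test field are assumptions for the example, not the actual IRCAM array geometry or processing chain.

```python
import numpy as np
from scipy.special import sph_harm

def sh_matrix(order, azi, col):
    """Sampling matrix Y: one row per direction, one column per
    spherical harmonic Y_n^m up to the given truncation order."""
    cols = []
    for n in range(order + 1):
        for m in range(-n, n + 1):
            # scipy convention: sph_harm(m, n, azimuth, colatitude)
            cols.append(sph_harm(m, n, azi, col))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
M = 64                                   # hypothetical 64 sensor directions
azi = rng.uniform(0, 2 * np.pi, M)       # azimuth angles
col = np.arccos(rng.uniform(-1, 1, M))   # colatitudes, uniform on the sphere

order = 3                                # truncation order N -> (N+1)^2 = 16 coefficients
Y = sh_matrix(order, azi, col)           # 64 x 16 sampling matrix

# Synthesize pressure samples from known (random) SH coefficients ...
c_true = rng.standard_normal(16) + 1j * rng.standard_normal(16)
p = Y @ c_true

# ... and recover them by least squares: the analysis step.
c_est, *_ = np.linalg.lstsq(Y, p, rcond=None)
err = np.max(np.abs(c_est - c_true))     # near machine precision for this setup
```

The fixed truncation order is exactly what limits the spatial resolution uniformly over the sphere; a frame-based representation, as proposed above, would allow that resolution to vary across regions of interest.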