This is the companion webpage of the manuscript:
Thibaud Necciari, Nicki Holighaus, Peter Balazs, Zdeněk Průša, Piotr Majdak, and Olivier Derrien.
Abstract: Many audio applications rely on filter banks (FBs) to analyze, process, and re-synthesize sounds. For these applications, an important property of the analysis-synthesis system is the reconstruction error; it has to be kept to a minimum to avoid audible artifacts. Other advantageous properties include stability and low redundancy. To exploit some aspects of human auditory perception in the signal chain, some applications rely on FBs that approximate the frequency analysis performed in the auditory periphery, the gammatone FB being a popular example. However, current gammatone FBs only allow partial reconstruction and stability at high redundancies. In this article, we construct an analysis-synthesis system for audio applications. The proposed system, named Audlet, is based on an oversampled FB with filters distributed on auditory frequency scales. It allows perfect reconstruction for a wide range of FB settings (e.g., the shape and density of filters), efficient FB design, and adaptable redundancy. In particular, we show how to construct a gammatone FB with perfect reconstruction. Experiments demonstrate performance improvements of the proposed gammatone FB when compared to current gammatone FBs in terms of reconstruction error and stability, especially at low redundancies. An application of the framework to audio source separation illustrates its utility for audio processing.
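The core idea of the abstract — an analysis-synthesis filter bank with gammatone-shaped filters on an auditory frequency scale and perfect reconstruction — can be illustrated with a minimal sketch. The sketch below is not the Audlet implementation; it is a simplified FFT-domain filter bank at full rate (no downsampling, i.e., the "painless" case), with illustrative assumptions: 64 channels, center frequencies uniform on the ERB-number scale (Glasberg & Moore), and a fourth-order gammatone magnitude response.

```python
# Sketch of a perfect-reconstruction gammatone-like filter bank in the FFT
# domain. Parameter choices (channel count, ERB spacing, filter order) are
# illustrative assumptions, not the settings used in the manuscript.
import numpy as np

fs = 16000                       # sampling rate (Hz)
n = 2048                         # signal length (samples)
num_channels = 64                # number of gammatone channels (assumption)

def erb_number(f):               # ERB-rate scale (Glasberg & Moore)
    return 21.4 * np.log10(1 + 0.00437 * f)

def inv_erb_number(e):
    return (10 ** (e / 21.4) - 1) / 0.00437

def erb_bandwidth(f):            # equivalent rectangular bandwidth at f
    return 24.7 * (0.00437 * f + 1)

# Center frequencies uniformly spaced on the ERB-number scale, 50 Hz-Nyquist.
e_lo, e_hi = erb_number(50.0), erb_number(fs / 2)
fc = inv_erb_number(np.linspace(e_lo, e_hi, num_channels))

# Fourth-order gammatone magnitude response sampled on the FFT grid.
freqs = np.fft.rfftfreq(n, 1 / fs)
H = np.array([(1 + ((freqs - f) / (1.019 * erb_bandwidth(f))) ** 2) ** -2
              for f in fc])

# Analysis: filter in the FFT domain (no downsampling -> "painless" case).
rng = np.random.default_rng(0)
x = rng.standard_normal(n)
X = np.fft.rfft(x)
coeffs = H * X                   # one row of subband coefficients per channel

# Synthesis: divide by the diagonal frame operator sum_k |H_k|^2.
# Perfect reconstruction holds whenever this sum stays bounded away from 0.
frame_diag = np.sum(np.abs(H) ** 2, axis=0)
x_rec = np.fft.irfft(np.sum(np.conj(H) * coeffs, axis=0) / frame_diag, n)

err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.2e}")
```

The reconstruction error here is at machine precision because, without downsampling, the frame operator is diagonal in the frequency domain and easily inverted; the challenge addressed in the manuscript is keeping reconstruction stable once the subbands are decimated to reach low redundancy.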
[Table: reconstruction error Rt compared for β = 1, β = 1/6, and a 1024-point STFT; table data not recoverable]
Machine learning has become an integral part of our everyday lives over the last few years. Whether we use a smartphone, shop online, consume media, or drive a car, machine learning (ML) and, more generally, artificial intelligence (AI) support, influence, and analyze us in many life situations. In particular, deep learning methods based on artificial neural networks are used in many areas.
ML and AI have also provided important impulses in the sciences, and this influence is expected to spread to an even wider range of scientific disciplines in the future.
This increases both the interest in a deeper, science-based understanding of ML methods and the need for scientists of various disciplines to develop a strong command of the application and design of such methods.
The Institute for Acoustic Research, which conducts application-oriented basic research in the field of acoustics, is rising to this challenge and has founded the Machine Learning research group.
The group sheds light on various aspects of machine learning and artificial intelligence, with a particular focus on potential applications in acoustics. The collaboration of scientists from different disciplines in ML and AI will not only enable the Institute for Acoustic Research to make pioneering progress in all areas of sound research, but will also contribute to theoretical questions in the highly active research field of artificial intelligence.
Millions of people use headphones every day to listen to music, watch movies, or communicate with others. Nevertheless, sounds presented via headphones are usually perceived inside the head rather than at their natural spatial positions. This limitation is inherent to conventional headphone playback and results in unrealistic listening situations.
When listening to a sound without headphones, the acoustic information of the sound source is modified by our head and torso, an effect described by the head-related transfer functions (HRTFs). The shape of our ears contributes to that modification by filtering the sound depending on the source direction. Because the ear is highly listener-specific (its individuality is comparable to that of a fingerprint), HRTFs are highly listener-specific as well. When listening via headphones, this listener-specific filtering is usually not available. One of the main reasons is the difficulty of acquiring a person's ear shape and, consequently, of calculating listener-specific HRTFs.
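The direction-dependent filtering described above is what binaural headphone rendering reproduces: the mono source signal is convolved with the left- and right-ear head-related impulse responses (HRIRs, the time-domain counterparts of HRTFs). The sketch below uses synthetic placeholder HRIRs (a pure delay plus attenuation standing in for the interaural time and level differences of a source on the listener's left); a real system would instead load measured, listener-specific HRIRs.

```python
# Hedged sketch of binaural rendering by HRIR convolution. The two HRIRs
# are toy placeholders; delay and attenuation values are illustrative only.
import numpy as np

fs = 44100
t = np.arange(int(0.01 * fs)) / fs
mono = np.sin(2 * np.pi * 440 * t)         # 10 ms, 440 Hz test tone

# Toy HRIR pair for a source on the listener's left: the right ear receives
# the sound slightly later and quieter than the left ear.
hrir_left = np.zeros(64)
hrir_left[0] = 1.0                         # direct, unattenuated path
hrir_right = np.zeros(64)
hrir_right[22] = 0.5                       # ~0.5 ms delay at 44.1 kHz, -6 dB

left = np.convolve(mono, hrir_left)        # per-ear filtering
right = np.convolve(mono, hrir_right)
binaural = np.stack([left, right])         # 2-channel signal for headphones
print(binaural.shape)
```

With measured, listener-specific HRIRs in place of the placeholders, this convolution is what lets the listener perceive the source outside the head at its intended direction.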
In softpinna, we will therefore develop new methods for acquiring listener-specific ear shapes. Specifically, we will investigate and improve so-called non-rigid registration (NRR) algorithms applied to 3-D ear geometries calculated from 2-D photos of a person's ears. Improved 3-D ear geometries will allow computer programs to accurately calculate listener-specific HRTFs, enabling future headphone systems to incorporate them and thus provide a realistic presentation of spatial sounds. The new ear-shape acquisition method will vastly reduce the technical requirements for the accurate calculation of listener-specific HRTFs.
Since the 37th Österreichische Linguistiktagung in 2009, the Institut für Schallforschung has been a regular guest at the Österreichische Linguistiktagungen (ÖLT) with phonetic topics. Since 2013, a dedicated workshop on phonetics has been held at every ÖLT. To offer everyone interested a place to look back at past events and to find announcements of upcoming ones, information on the ÖLT phonetics workshops is collected and made available here.
Next ÖLT: 46th Österreichische Linguistiktagung, 4-6 December 2020, Vienna
Here you will find information on the phonetics workshops at the ÖLT of past years.
"Phonetik in und über Österreich 2019"
Organization: Nicola Klingler, Hannah Leykum, Jan Luttenberger, Michael Pucher, Carolin Schmid (Institut für Schallforschung, Österreichische Akademie der Wissenschaften) and Johanna Fanta-Jende (Institut für Germanistik, Universität Wien)
"Phonetik und Sprachtechnologie"
Organization: Nicola Klingler, Hannah Leykum and Michael Pucher (Institut für Schallforschung, Österreichische Akademie der Wissenschaften)
"Phonetik in und über Österreich"
Chair: Sylvia Moosmüller† (Institut für Schallforschung, Österreichische Akademie der Wissenschaften)
Coordination: Michaela Rausch-Supola (Institut für Schallforschung, Österreichische Akademie der Wissenschaften)
"Phonetik & Phonologie"
Organization: Dina El Zarka, Petra Hödl, Ralf Vollmann (Universität Graz)
"Phonetik in und über Österreich"
Chairs: Sylvia Moosmüller† and Carolin Schmid (Institut für Schallforschung, Österreichische Akademie der Wissenschaften)
Chair: Manfred Sellner (Universität Salzburg)
Computational Hearing and Psychoacoustics investigates several areas that rely on human hearing:
Publications: W. Deutsch