Michael Mihocic

  • Objective:

    The Acoustic Measurement Tool at the Acoustics Research Institute (AMTatARI) has been developed for the automatic measurement of the system properties of electro-acoustic systems such as loudspeakers and microphones. As a special function, this tool allows the automatic measurement of head-related transfer functions (HRTFs).

    Measurement of the following features has been implemented so far:

    • total harmonic distortion (THD)
    • signal-to-noise and distortion ratio (SINAD)
    • impulse response

    Impulse responses can be measured with maximum length sequences (MLS) or with exponential sweeps. In the case of sweeps, the new multiple exponential sweep method (MESM) is available; this method is also used to measure HRTFs with AMTatARI.
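
    As an illustration of the sweep-based approach, the following minimal sketch generates an exponential sweep and recovers an impulse response by frequency-domain deconvolution. It is a generic Python example, not the AMTatARI implementation; play_and_record is a hypothetical I/O function. MESM extends this idea by interleaving time-shifted sweeps so that several sources can be measured in overlap.

    ```python
    import numpy as np

    def exp_sweep(f1, f2, T, fs):
        """Exponential sweep from f1 to f2 Hz over T seconds (Farina-style)."""
        t = np.arange(int(T * fs)) / fs
        R = np.log(f2 / f1)
        return np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1))

    def impulse_response(recorded, sweep):
        """Deconvolve the recording by the sweep via regularized spectral division."""
        n = len(recorded) + len(sweep) - 1
        H = np.fft.rfft(recorded, n) / (np.fft.rfft(sweep, n) + 1e-12)
        return np.fft.irfft(H, n)

    fs = 48000
    sweep = exp_sweep(20.0, 20000.0, 2.0, fs)
    # recorded = play_and_record(sweep)         # hypothetical I/O function
    # h = impulse_response(recorded, sweep)
    ```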

  • Objective:

    The beamforming method focuses the array on an arbitrary receiver position by applying time delays and amplitude weights to the microphone signals and summing them, either in the time domain or in the short-time Fourier transform domain.

    Method:

    The microphone array consists of 64 microphones and can take an arbitrary shape. For compatibility with acoustic holography, an equally spaced 8 x 8 grid is used.
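
    The sketch below illustrates the delay-and-sum principle for such a grid in the frequency domain; the 10 cm spacing and all names are illustrative assumptions, not the institute's implementation.

    ```python
    import numpy as np

    C = 343.0  # speed of sound in m/s

    def delay_and_sum(signals, mic_pos, focus, fs):
        """signals: (n_mics, n_samples); mic_pos: (n_mics, 3); focus: (3,) point."""
        dists = np.linalg.norm(mic_pos - focus, axis=1)
        delays = (dists - dists.min()) / C              # relative propagation delays
        n = signals.shape[1]
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        spectra = np.fft.rfft(signals, axis=1)
        # advance each channel by its delay (linear phase), then average
        aligned = spectra * np.exp(2j * np.pi * freqs * delays[:, None])
        return np.fft.irfft(aligned.mean(axis=0), n)

    # example geometry: 8 x 8 grid with 10 cm spacing in the x-y plane
    gx, gy = np.meshgrid(np.arange(8) * 0.1, np.arange(8) * 0.1)
    mic_pos = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(64)])
    ```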

    Application:

    Localization of sound sources on high-speed trains is a typical application. The method is used to separate sound-source locations along the train and, especially, their height. Typical sound sources on high-speed trains are the rail-wheel contact sites and aerodynamic sources; the latter occur at all heights, especially at the pantograph.

  • Objective:

    This study explores the adaptation of localization mechanisms to warping of spectral localization features, as required for CI listeners to map those features to their reduced electric stimulation range.

    Methods and Results:

    The effect of warping the stimulation range from 2.8 to 16 kHz to the range from 2.8 to 8.5 kHz was studied in normal-hearing listeners. Fifteen subjects participated in a long-term localization-training study involving daily two-hour audio-visual training over a period of three weeks. The Test Group listened to frequency-warped stimuli, the Control Group to low-pass filtered stimuli (8.5 kHz). The Control Group showed an initial increase of the localization error and essentially reached the baseline performance at the end of the training period. The Test Group showed a strong initial increase of the localization error, followed by a steady improvement of performance, although without reaching the baseline performance at the end of the training period. These results are promising with respect to the idea of presenting high-frequency spectral localization cues within the stimulation range available with CIs.
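
    For illustration, the sketch below shows one plausible warping function that compresses the 2.8-16 kHz band into 2.8-8.5 kHz; the mapping is linear on a logarithmic frequency axis, which is an assumption here, as the exact warping used in the study is not specified above.

    ```python
    import numpy as np

    def warp(f, f_lo=2800.0, f_hi=16000.0, f_out=8500.0):
        """Map a frequency f (Hz) from [f_lo, f_hi] into [f_lo, f_out],
        linearly on a log-frequency axis (illustrative assumption)."""
        alpha = np.log(f_out / f_lo) / np.log(f_hi / f_lo)
        return f_lo * (f / f_lo) ** alpha

    print(warp(16000.0))  # -> 8500.0
    ```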

    Funding:

    FWF (Austrian Science Fund): Project #P18401-B15

    Publications:

    • Walder, T. (2010). Schallquellenlokalisation mittels Frequenzbereich-Kompression der Außenohrübertragungsfunktionen (English: Sound source localization with frequency-range compressed head-related transfer functions), Master's thesis, Technical University of Graz & Kunstuniversität Graz.
    • Majdak, P., Walder, T., and Laback, B. (2011). Learning to Localize Band-Limited Sounds in Vertical Planes, presented at: 34th MidWinter Meeting of the Association for Research in Otolaryngology (ARO). Baltimore, Maryland.
  • Objective:

    This project investigates the effect on cochlear implant (CI) speech understanding caused by spectral peaks and notches, such as those resulting from the head-related transfer function filtering of a sound source. This is required to determine how spectral localization cues are best encoded with CIs, without destroying speech information.

    Application:

    Results from this project are required for the development of a 3-D localization strategy for CIs. Furthermore, the results give insight into the robustness of speech cues against spectral disruption in electric hearing.

    Funding:

    FWF (Austrian Science Fund): Project #P18401-B15

  • Objective:

    This project studies the effects of the upper-frequency boundary and of spectral warping on speech intelligibility in cochlear implant (CI) listeners using a 12-channel implant, and in normal-hearing (NH) listeners. This is important to determine how many basal channels are "free" for encoding spectral localization cues.

    Results:

    The results show that eight frequency channels and spectral content up to about 3 kHz are sufficient to transmit speech under unwarped conditions. If frequency warping was applied, the changes had to be limited to ±2 frequency channels to preserve good speech understanding. This outcome shows the range of allowed modifications for presenting spectral localization cues to CI listeners. About four channels were found to be "free" for encoding spectral localization cues.

    Application:

    see the description of the CI-HRTF project

    Funding:

    FWF (Austrian Science Fund): Project #P18401-B15

    Publications:

    • Goupell, M., Laback, B., Majdak, P., and Baumgartner, W. D. (2008). Effects of upper-frequency boundary and spectral warping on speech intelligibility in electrical stimulation, J. Acoust. Soc. Am. 123, 2295-2309.
    • Goupell, M. J., Laback, B., Majdak, P., and Baumgartner, W-D. (2007). Effect of frequency-place mapping on speech intelligibility: implications for a cochlear implant localization strategy, presented at Conference on Implantable Auditory Prostheses (CIAP), Lake Tahoe.
    • Goupell, M. J., Laback, B., Majdak, P., and Baumgartner, W-D. (2007). Effect of different frequency mappings on speech intelligibility for CI listeners, proceedings of DAGA 2007, Stuttgart.
  • Objective:

    ExpSuite is a software framework for implementing psychoacoustic experiments. The framework serves as the basis for an application and can be extended with customized, experiment-specific methods (applications). It consists of a user interface (experimenter and subject interface), signal-processing modules (offline and real-time), and input-output modules.

    The user interface is implemented in Visual Basic.NET and benefits from the "Rapid Application Development" environment, which allows experiments to be developed quickly. To compensate for the sometimes slow processing performance of VB, the stimulation signals can be processed in a vector-oriented way using a direct link to MATLAB. Through this link, numerous built-in MATLAB functions are available to ExpSuite applications.

    The interface accessible to the people administering the tests contains several templates that can be chosen for a specific experiment. Either the keyboard, mouse, joypad, or joystick can be chosen as the input device. The user interface is designed for dual-screen equipment and allows permanent monitoring of the experiment status on the same computer. Additionally, the current experiment status can be transmitted to another computer via a network connection. The framework supports two types of stimulation:

    • the standard acoustic stimulation using an audio interface for experiments with normal or impaired hearing subjects, and
    • the direct electric stimulation of cochlear implants for experiments with cochlear implant listeners.
  • The aim of this project is to maintain the experimental facilities in our institute's laboratory.

    The lab consists of four testing places:

    • GREEN and BLUE: Two sound-booths (IAC-1202A) are used for audio recording and psychoacoustic testing performed with headphones. Each of the booths is controlled from outside by a computer. Two bidirectional audio channels with sampling rates up to 192 kHz are available.
    • RED: A visually-separated corner can be used for experiments with cochlear implant listeners. A computer controls the experimental procedure using a bilateral, direct-electric stimulation.
    • YELLOW: A semi-anechoic room, with a size of 6 x 6 x 3 m, can be used for acoustic tests and measurements in a nearly free field. As many as 24 bidirectional audio channels, virtual environments generated by a head-mounted display, and audio and video surveillance are available for projects like HRTF measurement, localization tests, or acoustic holography.

    The rooms are not only used for measurements and experiments; the Acoustic Phonetics group also uses them for speech recordings for dialect research and speaker identification, for example for survey reports. The facilities are also used for psychoacoustic validation experiments.

    During the breaks in experiments, the subjects can use an Internet terminal or relax on a couch while sipping hot coffee...

  • Objective and Method:

    Current cochlear implant (CI) systems are not designed for sound localization in sagittal planes (front-back and up-down dimensions). Nevertheless, some of the spectral cues that are important for sagittal-plane localization in normal-hearing (NH) listeners might be audible for CI listeners. Here, we studied 3-D localization in bilateral CI listeners using "clinical" CI systems, and in NH listeners. Noise sources were filtered with subject-specific head-related transfer functions, and a structured virtual environment was presented via a head-mounted display to provide feedback for learning.

    Results:

    The CI listeners performed generally worse than NH listeners, both in the horizontal and vertical dimensions. The localization error decreased with increasing training duration. The front/back confusion rate of trained CI listeners was comparable to that of untrained (naive) NH listeners and two times higher than that of trained NH listeners.

    Application:

    The results indicate that some spectral localization cues are available to bilateral CI listeners, even though the localization performance is much worse than for NH listeners. These results clearly show the need for new strategies to encode spectral localization cues for CI listeners, and thus improve sagittal plane localization. Front-back discrimination is particularly important in traffic situations.

    Funding:

    FWF (Austrian Science Fund): Project # P18401-B15

    Publications:

    • Majdak, P., Goupell, M., and Laback, B. (2011). Two-Dimensional Localization of Virtual Sound Sources in Cochlear-Implant Listeners, Ear & Hearing.
    • Majdak, P., Laback, B., and Goupell, M. (2008). 3D-localization of virtual sound sources in normal-hearing and cochlear-implant listeners, presented at Acoustics '08  (ASA-EAA joint) conference, Paris
  • Objective:

    Humans' ability to localize sound sources in a 3-D space was tested.

    Method:

    The subjects listened to noises filtered with subject-specific head-related transfer functions (HRTFs). In the first experiment, performed with new subjects, the conditions included the type of visual environment (darkness or a structured virtual world) presented via a head-mounted display (HMD) and the pointing method (head pointing or finger/shooter pointing).

    Results:

    The results show that the errors in the horizontal dimension were smaller when head pointing was used, whereas finger/shooter pointing yielded smaller errors in the vertical dimension. Overall, the difference between the two pointing methods was significant but small. The presence of a structured virtual visual environment significantly improved the localization accuracy in all conditions. This supports the idea that using a virtual visual environment in acoustic tasks, such as sound localization, is beneficial. In Experiment II, the subjects were trained before performing acoustic tasks for data collection. The performance improved for all subjects over time, which indicates that training is necessary to obtain stable results in localization experiments.
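
    As an aside on how such errors are typically quantified: localization responses are often analyzed in the interaural coordinate system, separating the lateral (left/right) and polar (vertical-plane) components. The sketch below follows that convention; the exact metrics of the study may differ.

    ```python
    import numpy as np

    def sph_to_interaural(az, el):
        """Azimuth/elevation (deg; az 0 = front, 90 = left) -> lateral/polar (deg)."""
        az, el = np.radians(az), np.radians(el)
        lat = np.degrees(np.arcsin(np.sin(az) * np.cos(el)))
        pol = np.degrees(np.arctan2(np.sin(el), np.cos(az) * np.cos(el)))
        return lat, pol

    t_lat, t_pol = sph_to_interaural(30.0, 0.0)    # target direction
    r_lat, r_pol = sph_to_interaural(25.0, 10.0)   # listener response
    lateral_error = abs(t_lat - r_lat)
    polar_error = abs((t_pol - r_pol + 180) % 360 - 180)  # wrap to [-180, 180)
    ```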

    Funding:

    FWF (Austrian Science Fund): Project # P18401-B15

    Publications:

    • Majdak, P., Goupell, M., and Laback, B. (2010). 3-D localization of virtual sound sources: effects of visual environment, pointing method, and training, Attention, Perception, & Psychophysics 72, 454-469.
    • Majdak, P., Laback, B., Goupell, M., and Mihocic M. (2008). "The Accuracy of Localizing Virtual Sound Sources: Effects of Pointing Method and Visual Environment", presented at AES convention, Amsterdam.
  • Virtual Acoustics: Localization Model & Numeric Simulations (LocaPhoto)

    LocaPhoto consisted of three parts: geometry acquisition, HRTF calculation, and HRTF evaluation by means of a localization model.

    Geometry acquisition

    First, we evaluated the potential of various 3-D scanners by comparing the 3-D meshes obtained for several listeners (Reichinger et al., 2013). As a general means of comparison, we created "reference" meshes by taking silicone impressions of listeners' ears and scanning them in a high-energy computed-tomography scanner. While the approach is generally viable, not all 3-D scanners were able to obtain meshes of the required quality, limiting their application in practical end-user situations.

    Further, we worked on a procedure to generate 3-D meshes directly from 2-D photos by means of photogrammetric-reconstruction algorithms. Under selected conditions, we obtained 3-D meshes that allowed the calculation of perceptually valid HRTFs (publication in preparation).

    HRTF calculation

    While working on the geometry acquisition, we developed, implemented, and evaluated a procedure to efficiently calculate HRTFs from a 3-D mesh. The software package Mesh2HRTF consists of a Blender plugin for mesh preparation, an executable application based on the boundary-element method, and a MATLAB tool for HRTF post-processing (Ziegelwanger et al., 2015a). The evaluation was done by comparing HRTFs calculated for the reference meshes to acoustically measured HRTFs. Differences between the various conditions were evaluated by means of model predictions and sound-localization experiments. We showed that, in the proximity of the ear canal, meshes with an average edge length of 1 mm or less are required, and that modeling the virtual microphone used in the calculations as a small area yields the best results (Ziegelwanger et al., 2015b).
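
    As a minimal illustration of the mesh-resolution criterion, the generic sketch below computes the average edge length of a triangular mesh; it is not part of Mesh2HRTF.

    ```python
    import numpy as np

    def average_edge_length(vertices, faces):
        """vertices: (n_v, 3) coordinates in mm; faces: (n_f, 3) vertex indices."""
        edges = set()
        for a, b, c in faces:
            for i, j in ((a, b), (b, c), (c, a)):
                edges.add((min(i, j), max(i, j)))  # undirected edge, deduplicated
        e = np.array(sorted(edges))
        return np.linalg.norm(vertices[e[:, 0]] - vertices[e[:, 1]], axis=1).mean()
    ```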

    In order to further improve the calculations, we applied a non-uniform a-priori mesh grading to the HRTF calculations. This method reduces the number of elements in the mesh to about 10,000 while still yielding perceptually valid HRTFs (Ziegelwanger et al., 2016). With that method, HRTF calculations take less than an hour.

    HRTF evaluation

    Given the large number of parameters in the numerical calculations, hundreds of calculated HRTF sets had to be tested. The evaluation of HRTF quality is a complex task because it involves many percepts such as directional sound localization, sound externalization, apparent source width, distance perception, and timbre changes. Generally, one would like HRTFs that generate virtual auditory scenes as realistic as natural scenes. While a model evaluating such an overall "degree of realism" was out of reach, we focused on a very important and well-explored aspect: directional sound localization.

    For sound localization in the lateral dimension (left/right), there are not many aspects requiring HRTF individualization. The listener-specific interaural time difference (ITD), i.e., the broadband interaural difference between the sound's times of arrival, can contribute, though. Thus, we first created a 3-D time-of-arrival model able to describe the ITD with a few parameters derived from the listener's HRTFs (Ziegelwanger and Majdak, 2014).
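
    As a simple illustration, a broadband ITD can be estimated from a pair of head-related impulse responses by onset detection, as sketched below; the actual model fits a continuous, direction-dependent time-of-arrival function across all measured directions, which is not reproduced here.

    ```python
    import numpy as np

    def time_of_arrival(hrir, fs, threshold=0.1):
        """Time (s) of the first sample exceeding `threshold` times the peak magnitude."""
        mag = np.abs(hrir)
        return np.argmax(mag > threshold * mag.max()) / fs

    def itd(hrir_left, hrir_right, fs):
        """Broadband interaural time difference in seconds."""
        return time_of_arrival(hrir_left, fs) - time_of_arrival(hrir_right, fs)
    ```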

    For sound localization in sagittal planes (up/down, front/back), HRTF individualization is a major issue. The process of sagittal-plane localization is still not completely understood, but the role of the dorsal cochlear nucleus (DCN) was already known at the beginning of LocaPhoto. Thus, in LocaPhoto, we developed a model able to predict sagittal-plane sound-localization performance based on the spectral processing found in the DCN. It was rigorously evaluated in various conditions and was found to predict listener-specific localization performance quite well (Baumgartner et al., 2014).
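
    The strongly simplified sketch below conveys only the template-matching idea behind such models: a target spectrum is compared against the listener's template spectra across polar angles, and the best-matching angle wins. The actual model's stages (gammatone filtering, positive spectral gradients, a probabilistic response stage) are not reproduced here.

    ```python
    import numpy as np

    def predict_polar_angle(target_hrir, template_hrirs, polar_angles, nfft=512):
        """template_hrirs: (n_angles, n_samples); returns the best-matching angle."""
        def spec_db(h):
            return 20 * np.log10(np.abs(np.fft.rfft(h, nfft)) + 1e-12)
        target = spec_db(target_hrir)
        dists = [np.std(target - spec_db(t)) for t in template_hrirs]  # spectral distance
        return polar_angles[int(np.argmin(dists))]
    ```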

    In LocaPhoto, this model allowed us to evaluate many numerically calculated HRTFs. It also allowed us to uncover surprising properties of human sound localization (Majdak et al., 2014). The model is implemented in the Auditory Modeling Toolbox (Søndergaard and Majdak, 2013) and has been used for various evaluations (Baumgartner et al., 2013), such as the positioning of loudspeakers in loudspeaker-based sound reproduction (Baumgartner and Majdak, 2015). It also serves as a basis for a 3-D sound-localization model (Altoè et al., 2014) and for a model addressing sensorineural hearing loss (Baumgartner et al., 2016).

    Funding:

    Austrian Science Fund (FWF, P 24124-N13)

    Duration:

    February 2012 - October 2016

    Publications:

    • Baumgartner, R., Majdak, P., Laback, B. (2016): Modeling the Effects of Sensorineural Hearing Loss on Sound Localization in the Median Plane, in: Trends in Hearing 20, 1-11.
    • Ziegelwanger, H., Kreuzer, W., Majdak, P. (2016): A priori mesh grading for the numerical calculation of the head-related transfer functions, in: Applied Acoustics 114, 99 - 110.  
    • Baumgartner, R., Majdak, P. (2015): Modeling Localization of Amplitude-Panned Virtual Sources in Sagittal Planes, in: J. Audio Eng. Soc 63, 562-569.
    • Ziegelwanger, H., Kreuzer, W., Majdak, P. (2015): Mesh2HRTF: An open-source software package for the numerical calculation of head-related transfer functions, in: Proceedings of the 22nd International Congress on Sound and Vibration (ICSV). Florence, Italy, 1-8.
    • Ziegelwanger, H., Majdak, P., Kreuzer, W. (2015): Numerical calculation of head-related transfer functions and sound localization: Microphone model and mesh discretization, in: The Journal of the Acoustical Society of America 138, 208-222.  
    • Altoè, A., Baumgartner, R., Majdak, P., Pulkki, V. (2014): Combining count-comparison and sagittal-plane localization models towards a three-dimensional representation of sound localization, in: Proceedings of the 7th Forum Acusticum. Krakow, Poland, 1-6.
    • Baumgartner, R., Majdak, P., Laback, B. (2014): Modeling Sound-Source Localization in Sagittal Planes for Human Listeners., in: The Journal of the Acoustical Society of America 136, 791-802.
    • Majdak, P., Baumgartner, R., Laback, B. (2014): Acoustic and non-acoustic factors in modeling listener-specific performance of sagittal-plane sound localization, in: Frontiers in Psychology 5, 319(1-10).
    • Baumgartner, R., Majdak, P., Laback, B. (2013): Assessment of sagittal-plane sound localization performance in spatial-audio applications, in: Blauert, J. (ed.), The Technology of Binaural Listening. Berlin-Heidelberg-New York (Springer), 93-119
    • Reichinger, A., Majdak, P., Sablatnig, R., Maierhofer, S. (2013): Evaluation of Methods for Optical 3-D Scanning of Human Pinnas, in: Proceedings of the 3D Vision Conference 2013, Third Joint 3DIM/3DPVT Conference. Seattle, WA, 390-397.
    • Søndergaard, P., Majdak, P. (2013): The Auditory Modeling Toolbox, in: Blauert, J. (ed.), The Technology of Binaural Listening. Berlin, Heidelberg, New York (Springer), 33-56

    Contact for more information:

    Piotr Majdak (Principal Investigator)

    Michael Mihocic (HRTF measurement)

  • Objective:

    Head-related transfer functions (HRTFs) describe sound transmission from the free field to a place in the ear canal in terms of linear time-invariant systems. They contain spectral and temporal features that vary according to the sound direction. Differences among subjects require the measurement of each subject's individual HRTFs for studies on localization in virtual environments. In this project, a system for HRTF measurement was developed and installed in the semi-anechoic room at the Austrian Academy of Sciences.

    Method:

    Measurement of an HRTF was considered a system identification of the electro-acoustic chain: sound source-room-HRTF-microphone. The sounds in the ear canals were captured using in-ear microphones. The direction of the sound source was varied horizontally by rotating the subject on a turntable, and vertically by accessing one of the 22 loudspeakers positioned in the median plane. An optimized form of system identification with sweeps, the multiple exponential sweep method (MESM), was used to measure the transfer functions with satisfactory signal-to-noise ratios within a reasonable amount of time. The subjects' positions were tracked during the measurement to ensure sufficient measurement accuracy. Measurement of the headphone transfer functions was included in the HRTF measurement procedure; this allows the influence of the headphones to be equalized during the presentation of virtual stimuli.
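
    To illustrate the equalization step, the sketch below builds a regularized frequency-domain inverse of a measured headphone impulse response; Tikhonov-style regularization is one common approach and is assumed here, not taken from the system described above.

    ```python
    import numpy as np

    def inverse_filter(hp_ir, n, beta=0.01):
        """Regularized inverse of a headphone impulse response (length-n output)."""
        H = np.fft.rfft(hp_ir, n)
        H_inv = np.conj(H) / (np.abs(H) ** 2 + beta)  # Tikhonov-style regularization
        return np.fft.irfft(H_inv, n)

    # stimulus_eq = np.convolve(stimulus, inverse_filter(hp_ir, 1024))
    ```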

    Results:

    Multi-channel audio equipment has been installed in the semi-anechoic room, giving access to recording and stimulus presentation via 24 channels simultaneously.

    The multiple exponential sweep method was developed, allowing fast transfer-function measurement of weakly nonlinear, time-invariant systems for multiple sources.

    The measurement procedure was developed and a database of HRTFs was created. So far, HRTF data for over 20 subjects have been collected and are available to create virtual stimuli and present them via headphones.

    To virtually position sounds in space, the HRTFs are used to filter free-field sounds. This results in virtual acoustic stimuli (VAS). To create VAS and present them via headphones, the applications Virtual Sound Positioning (VSP) and Loca (part of our ExpSuite software project) have been implemented. They allow virtual sound positioning in a free-field environment using both stationary and moving sound sources.
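
    A minimal sketch of the underlying rendering step, assuming a mono input and a single pair of equally long head-related impulse responses (VSP and Loca additionally handle interpolation and moving sources):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(mono, hrir_left, hrir_right):
        """Convolve a mono signal with an HRIR pair to obtain a stereo VAS."""
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        return np.column_stack([left, right])  # (n_samples, 2)
    ```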

  • Objective:

    In this project, head-related transfer functions (HRTFs) are measured and prepared for localization tests with cochlear implant listeners. The method and apparatus used for the measurement are the same as for the general HRTF measurement (see project HRTF-System); however, the place where the sound is acquired is different. In this project, the microphones built into the behind-the-ear (BtE) processors of cochlear implantees are used. The processors are located on the pinna, and the unprocessed microphone signals are used to calculate the BtE-HRTFs for different spatial positions.

    The BtE-HRTFs are then used in localization tests like Loca BtE-CI.

  • Objective:

    The Acoustics Research Institute was commissioned to perform measurements with the acoustic 64-channel microphone array, using the beamforming method to derive a source model for high-speed trains according to the new CNOSSOS-EU guideline.

    Method:

    The beamforming method was used because a train is a fast-moving vehicle and therefore a transient acoustic source. Five heights were evaluated as specified by CNOSSOS-EU, and five additional heights were evaluated that fit the geometry of the trains.

    Application:

    Speeds from 200 km/h up to 330 km/h were tested for the ICE trains, and from 200 km/h up to 250 km/h for the Railjet. At the same speed, both trains had the same acoustic level.

  • Objective and Methods:

    This study investigates the effect of the number of frequency channels on vertical-plane sound localization, especially front/back discrimination. This is important to determine how many of the basal-most channels/electrodes of a cochlear implant (CI) are needed to encode spectral localization cues. Normal-hearing subjects listening to a CI simulation (the newly developed GET vocoder) will perform the experiment using the localization method developed in the subproject "Loca Methods". Learning effects will be studied by providing visual feedback.
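
    For illustration, a generic noise-band channel vocoder, a common type of CI simulation, can be sketched as follows; the GET vocoder used in this study differs in its details, and the band edges below are assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def vocode(x, fs, edges):
        """Noise-band vocoder: per-band envelopes re-imposed on band-limited noise."""
        out = np.zeros(len(x))
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
            band = sosfiltfilt(sos, x)
            env = np.abs(hilbert(band))                    # band envelope
            noise = sosfiltfilt(sos, np.random.randn(len(x)))
            out += env * noise                             # modulated noise carrier
        return out

    # edges = np.geomspace(300, 8500, 13)  # 12 channels (example values)
    ```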

    Results:

    Experiments are underway.

    Application:

    Knowing the number of channels required to encode spectral cues for localization in the vertical planes is an important step in the development of a 3-D localization strategy for CIs. 

    Funding:

    FWF (Austrian Science Fund): Project #P18401-B15

    Publications:

    • Goupell, M., Majdak, P., and Laback, B. (2010). Median-plane sound localization as a function of the number of spectral channels using a channel vocoder, J. Acoust. Soc. Am. 127, 990-1001.
    The project PASS, carried out in cooperation with the IEW of TU Vienna and psiacoustic GmbH, deals with the psychoacoustic evaluation of noise. The project is a continuation of the project RELSKG and deals with high and low noise barriers, which are simulated with the 2.5-D boundary element method (BEM) assuming incoherent line sources. A comparison of the 2.5-D BEM with measurements showed good agreement. Additionally, measurements with rail dampers were taken into account in the psychoacoustic tests. The evaluation was done in two tests with 40 test persons: the first test determined the relative annoyance, and the second the just-noticeable difference in annoyance. The results were that freight trains at the same A-level are less annoying than passenger trains, and that at the same A-level the noise behind a noise barrier is slightly more annoying than the noise without a barrier. The project started in 2013 and lasted until the end of 2014.

    The Spatially Oriented Format for Acoustics (SOFA) is dedicated to storing all kinds of acoustic information related to a specified geometrical setup. The main task is to describe simple HRTF measurements, but SOFA also aims to provide the functionality to store more complex measurements, such as BRIRs captured with a 64-channel microphone array in a multi-source excitation situation, or the directivity of a loudspeaker. The format is intended to be easily extensible, highly portable, and the greatest common denominator of all publicly available HRTF databases at the time of writing.

    SOFA defines the structure of data and metadata and stores them in a numerical container. The data description is hierarchical, ranging from free-field HRTFs (simple setup) to more complex setups such as microphone-array measurements in reverberant spaces excited by a loudspeaker array (complex setup). A global geometry description (related to the room) and a local geometry description (related to the listener/source) are used, without limiting the number of acoustic transmitters and receivers. Room descriptions will be available by linking a CAD file within SOFA. Networking support will be provided as well, allowing remote access to HRTFs and BRIRs from client computers.
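
    Because SOFA stores its data in a netCDF-4 container, a file following the SimpleFreeFieldHRIR convention can be read with a generic netCDF library, as in the sketch below (the file name is hypothetical):

    ```python
    from netCDF4 import Dataset

    with Dataset("hrtf.sofa", "r") as sofa:         # hypothetical file name
        ir = sofa.variables["Data.IR"][:]           # (measurements, receivers, samples)
        fs = float(sofa.variables["Data.SamplingRate"][:][0])
        src = sofa.variables["SourcePosition"][:]   # (measurements, 3)
    print(ir.shape, fs)
    ```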

    SOFA is being developed by many contributors worldwide. The development is coordinated at ARI by Piotr Majdak.

    Further information:

    www.sofaconventions.org.
  • Objective:

    Bilateral use of current cochlear implant (CI) systems allows for the localization of sound sources in the left-right dimension. However, localization in the front-back and up-down dimensions (within the so-called sagittal planes) is restricted as a result of insufficient transmission of the relevant information.

    Method:

    In normal-hearing listeners, localization within the sagittal planes is mediated by the spectral coloring that the pinna (outer ear) imposes on incoming waveforms at higher frequencies. Current CI systems do not provide these so-called pinna cues (or spectral cues) because of the behind-the-ear microphone placement and the processor's limited analysis-frequency range.

    While these technical limitations are relatively manageable, some fundamental questions arise:

    • What is the minimum number of channels required to encode the pinna cues relevant to vertical plane localization?
    • To what extent can CI listeners learn to localize sound sources using pinna cues that are mapped to tonotopic regions associated with lower characteristic frequencies (according to the position of typically implanted electrodes)?
    • Which modifications of stimulation strategies are required to facilitate the localization of sound sources for CI listeners?

    Application:

    The improvement of sound source localization in the front-back dimension is regarded as an important aspect in daily traffic safety.

    Funding:

    FWF (Austrian Science Fund): Project #P18401-B15

    Status:

    Finished in Sept. 2010

    Subprojects:

    • ElecRang: Effects of upper-frequency boundary and spectral warping on speech intelligibility in electrical stimulation
    • SpecSens: Sensitivity to spectral peaks and notches
    • Loca-BtE-CI: Localization with behind-the-ear microphones
    • Loca Methods: Pointer method for localizing sound sources
    • Loca#Channels: Number of channels required for median-plane localization
    • SpatStrat: Development and evaluation of a spatialization strategy for cochlear implants
    • HRTF-Sim: Numerical simulation of HRTFs
  • Objective:

    SysBahnLärm was a joint project of the ARI with TU Vienna, the Austrian Railways, and industrial partners, funded by the FFG as well as the ÖBB. The aim of the project was to create a handbook on the systemic reduction of railway noise. The ARI was responsible for the psychoacoustic evaluation of the effects of noise from wheels with different roughness and of different noise-reduction systems, e.g., rail-damping systems. Further, the ARI investigated the emission pattern of the rail-wheel contact using our 64-channel microphone array.

    Method:

    Using measured train pass-by signals, a psychoacoustic testing procedure was developed and stimuli for this test were selected. Subjects had to rate the relative annoyance of different trains or different noise-reduction systems with respect to each other.
    To investigate the rail-wheel contact, a beamforming technique was used to determine the point of maximal emission relative to the top of the rail.

    Application:

    The handbook is intended to serve as a guideline for the different noise-reduction measures and their respective advantages and problems.