Human sound localization in sagittal planes (SPs) is based on spectral cues. These cues are described by head-related transfer functions (HRTFs) in terms of a linear time-invariant (LTI) system. It is assumed that humans learn to use their individual HRTFs and assign a direction to an incoming sound by comparing it with internal HRTF representations. Existing SP localization models aim at simulating this comparison process in order to predict a listener's response in SPs to an incoming sound. Langendijk and Bronkhorst (2002, JASA 112:1583-96) presented a probabilistic model to predict localization performance in the median SP. In this thesis, this model has been extended by incorporating more physiology-related processing stages, introducing adaptation to the actual bandwidth of the incoming sound as well as to the listener's individual sensitivity, and allowing for predictions beyond the median SP by implementing binaural weighting. Further, a stage retrieving psychophysical performance parameters such as quadrant error rate, local polar error, and polar bias from the probabilistic model predictions has been developed and applied to predict experimental results from previous studies. The model has also been applied to evaluate and optimize a subband approximation technique for HRTFs, a computationally efficient method for rendering virtual auditory displays. The localization-model and subband-approximation results are discussed, in particular in light of the cost function used for the subband approximation.
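The probabilistic comparison process mentioned above can be illustrated by a minimal sketch: the incoming sound's spectral profile is compared with stored HRTF templates via a spectral distance, and a Gaussian similarity mapping converts distances into a probability mass over candidate polar angles. The function name, the choice of distance (standard deviation of the inter-spectral difference in dB), and the sensitivity parameter `sigma` are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def response_probabilities(target_spec, templates, sigma=2.0):
    """Sketch of a template-matching localization stage (illustrative only).

    target_spec : spectral profile (dB) of the incoming sound, shape (n_bands,)
    templates   : internal HRTF representations, shape (n_angles, n_bands)
    sigma       : sensitivity parameter; smaller values -> sharper selection
    """
    # Spectral distance: std of the dB difference between target and template
    distances = np.array([np.std(target_spec - t) for t in templates])
    # Gaussian similarity mapping of distance to (unnormalized) likelihood
    sim = np.exp(-0.5 * (distances / sigma) ** 2)
    # Normalize to a probability mass function over candidate polar angles
    return sim / sim.sum()
```

A response probability vector of this kind is the quantity from which psychophysical parameters such as quadrant error rate or polar bias could subsequently be derived.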