Implications for pathological speech

Coordinated Project 2016-17, Scuola Normale Superiore (SNS), Pisa – Acoustic Research Institute (ARI), Austrian Academy of Sciences, Vienna
PIs: Chiara Celata (SNS), Sylvia Moosmueller (ARI)
Research personnel: Chiara Meluzzi (SNS), Bettina Hobel (ARI)

Short Description

The project aims to model the impact of speech gesture coordination on the rhythmical properties of languages.

Speech gestural structures are sets of gestures together with a specification of how those gestures are coordinated in time and space with respect to one another. Gestural anticipations, delays, and overlaps are the ingredients of coarticulation, i.e., the coordination of speech movements that allows adjacent vowels and consonants to be produced simultaneously, resulting in one smooth whole.

Rhythm is the systematic patterning of timing, accent, and grouping in sequences of events, and it encompasses both the speech and the music domains. We only become aware of how important it is in verbal communication when we listen to non-fluent speech. For example, deaf people with impaired or absent auditory feedback can, after cochlear implantation and speech-language rehabilitation, be taught to develop an “auditory” map for speech processing and imitation, but native-like patterns of gestural and rhythmical coordination are much more difficult to achieve.

Both gestural coordination and rhythm thus contribute to the way fluent speech is programmed, produced, and even perceived.

However, we still lack a comprehensive understanding of how the two dimensions of gestural coordination and speech rhythm interact in natural languages.

Indeed, the gestural and the rhythmical approaches sometimes make different predictions. For example, we do not know whether the consonants composing heterosyllabic clusters are articulatorily independent of one another and timed with respect to different vocalic nuclei, as some theoretical frameworks in the domain of gestural coordination would predict, or whether they are instead globally timed with the preceding vocalic nucleus, especially when it is stressed, as some proposals in the domain of speech rhythm assume. Nor do we know whether cross-linguistic differences in how heterosyllabic clusters are articulatorily coordinated with vocalic nuclei reflect, or are reflected by, cross-linguistic differences in the languages’ rhythmical properties.

This project therefore tries to reconcile the gestural and the rhythmical perspectives in a unified research framework devoted to uncovering how inter-segmental coordination influences, and is influenced by, the rhythmical properties of supra-segmental entities.

To that end, we are developing a series of cross-linguistic experiments on Italian and Standard Austrian German to clarify some critical aspects of speech organization in the two languages and to establish a link between language-specific phonotactic constraints and the temporal and spatial properties of segment production.

The experiments, based on a reading task, include acoustic analyses to identify temporal patterns and articulatory analyses (ultrasound tongue imaging, UTI) to investigate gestural coordination.
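Temporal patterns of the kind targeted by the acoustic analyses are commonly quantified with durational rhythm metrics. The project description does not name a specific metric, so the following is only an illustrative sketch, assuming segment durations have already been measured: it computes the widely used normalized Pairwise Variability Index (nPVI) over a sequence of interval durations.

```python
def npvi(durations):
    """Normalized Pairwise Variability Index over a sequence of interval
    durations (e.g. vocalic intervals, in ms). Higher values indicate
    greater contrast between successive intervals, a property associated
    with stress-timed rhythm."""
    if len(durations) < 2:
        raise ValueError("need at least two intervals")
    # Absolute difference of each successive pair, normalized by the
    # pair's mean duration to factor out speech-rate differences.
    diffs = [
        abs(d1 - d2) / ((d1 + d2) / 2)
        for d1, d2 in zip(durations, durations[1:])
    ]
    return 100 * sum(diffs) / len(diffs)

# Perfectly even intervals give 0; alternating short/long intervals
# give a high value.
print(npvi([100, 100, 100]))  # 0.0
print(npvi([100, 200, 100, 200]))  # ~66.7
```

In practice the interval durations would come from the segmented reading-task recordings, with separate metrics computed for vocalic and consonantal intervals per speaker and language.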

In addition, the project aims to set the stage for an analysis of how the speech of cochlear-implanted (CI) speakers differs from typical speech with respect to gestural coordination and rhythmic patterns. Spontaneous conversations with both Italian and Standard Austrian German speakers will be recorded. The acoustic analyses will target the areas of greatest difficulty in both the coarticulatory and the temporal aspects of the spontaneous speech produced by CI speakers.