Category Archives: Research

These posts are dedicated to my research.

Uniformity in speech: The economy of reuse and adaptation across contexts

Connor Mayer, Bryan Gick, and I recently published “Uniformity in speech: The economy of reuse and adaptation across contexts” in Glossa. The article compares how Kiwis and North Americans produce flap sequences such as those in “editor” (North America) or “added a” (New Zealand). Kiwis produce these similarly in slow and fast speech, whereas North Americans often use two different methods, one for slow and one for fast speech. We show that this difference likely stems from the extreme variability built into the “r”s of rhotic dialects of English reaching flaps through the reuse and adaptation of motor “chunks”.

To illustrate our claim: the image below shows tongue tip frontness for the second vowel in three-vowel sequences in words like “editor”. For faster speech (6-7 syllables/second), there is a jump where the tongue tip is not nearly as fronted, but only for North American English (NAE) non-rhotic vowels, not for New Zealand English (NZE) vowels or NAE rhotic vowels. Here the high variability intrinsic to NAE rhotic vowels (and commonly seen in other contexts) carries over into adjacent NAE non-rhotic vowels. NZE, by contrast, has no rhotic vowels at all, so its non-rhotic vowels have no source for such motor control variability, even though that variability would provide a mechanical advantage.
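For a concrete sense of what “two different methods” means statistically, here is a minimal sketch (not the analysis from the paper; the values below are invented) of how one could ask whether fast-speech tongue-tip frontness measurements form one cluster or two, using a two-component Gaussian mixture:

```python
# Illustrative sketch only: synthetic data, not measurements from the paper.
# Compares one- vs two-component Gaussian mixtures on "tongue-tip frontness"
# values and uses BIC to ask whether two distinct patterns fit better.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical frontness values (arbitrary units) at fast speech rates:
# one mode around 0.8 (fronted) and one around 0.3 (less fronted).
frontness = np.concatenate([rng.normal(0.8, 0.05, 60),
                            rng.normal(0.3, 0.05, 40)]).reshape(-1, 1)

for k in (1, 2):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(frontness)
    print(f"{k} component(s): BIC = {gmm.bic(frontness):.1f}")
# A clearly lower BIC for k=2 suggests two stable production patterns,
# which is the kind of split NAE (but not NZE) speakers show at fast rates.
```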

The abstract for this article, which explains the findings in more technical (and more precise) terms, is below:

“North American English (NAE) flaps/taps and rhotic vowels have been shown to exhibit extreme variability that can be categorized into subphonemic variants. This variability provides known mechanical benefits in NAE speech production. However, we also know languages reuse gestures for maximum efficiency during speech production; this uniformity of behavior reduces gestural variability. Here we test two conflicting hypotheses: Under a uniformity hypothesis in which extreme variability is inherent to rhotic vowels only, that variability can still transfer to flaps/taps and non-rhotic vowels due to adaptation across similar speech contexts. But because of the underlying reliance on extreme variability from rhotic vowels, this uniformity hypothesis does not predict extreme variability in flaps/taps within non-rhotic English dialects. Under a mechanical hypothesis in which extreme variability is inherent to all segments where it would provide mechanical advantage, including flaps/taps, such variability would appear across all English dialects with flaps/taps, affecting adjacent non-rhotic vowels through coarticulation whenever doing so would provide mechanical advantage. We test these two hypotheses by comparing speech-rate-varying NAE sequences with and without rhotic vowels to sequences from New Zealand English (NZE), which has flaps/taps, but no rhotic vowels at all. We find that NZE speakers all use similar tongue-tip motion patterns for flaps/taps across both slow and fast speech, unlike NAE speakers who sometimes use two different stable patterns, one for slow and another for fast speech. Results show extreme variability is not inherent to flaps/taps across English dialects, supporting the uniformity hypothesis.”

Hearing, seeing, and feeling speech: the neurophysiological correlates of trimodal speech perception

Doreen Hansmann, Catherine Theys, and I recently published a partially null-result article on the neurophysiological correlates of trimodal speech in Frontiers in Human Neuroscience: Hearing: Speech and Language. The short form is that while we saw behavioural differences showing integration of audio, visual, and tactile speech in closed-choice experiments, we could not extend that result to show an influence of tactile speech on brain activity – the effect is just too small:

Figure 3. Accuracy data for syllable /pa/ for auditory-only (A), audio-visual (AV), audio-tactile (AT), and audio-visual-tactile (AVT) conditions at each SNR level (–8, –14, –20 dB). Error bars are based on Binomial confidence intervals (95%).
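As an aside for readers curious about those error bars, here is a minimal sketch of computing a 95% binomial confidence interval for an accuracy score, using the Wilson score interval; the counts are invented for illustration, and the paper may well have used a different interval method:

```python
# Minimal sketch: 95% Wilson score interval for a binomial proportion.
# Counts are hypothetical, purely to illustrate the calculation.
from math import sqrt

def wilson_ci(correct, total, z=1.96):
    """Return (low, high) 95% Wilson confidence interval for correct/total."""
    p = correct / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return centre - half, centre + half

# e.g. 42 correct /pa/ identifications out of 60 trials at one SNR level
low, high = wilson_ci(42, 60)
print(f"accuracy = {42/60:.2f}, 95% CI = [{low:.2f}, {high:.2f}]")
```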

The abstract for this article is below:

Introduction: To perceive speech, our brains process information from different sensory modalities. Previous electroencephalography (EEG) research has established that audio-visual information provides an advantage compared to auditory-only information during early auditory processing. In addition, behavioral research showed that auditory speech perception is not only enhanced by visual information but also by tactile information, transmitted by puffs of air arriving at the skin and aligned with speech. The current EEG study aimed to investigate whether the behavioral benefits of bimodal audio-aerotactile and trimodal audio-visual-aerotactile speech presentation are reflected in cortical auditory event-related neurophysiological responses.

Methods: To examine the influence of multimodal information on speech perception, 20 listeners conducted a two-alternative forced-choice syllable identification task at three different signal-to-noise levels.

Results: Behavioral results showed increased syllable identification accuracy when auditory information was complemented with visual information, but did not show the same effect for the addition of tactile information. Similarly, EEG results showed an amplitude suppression for the auditory N1 and P2 event-related potentials for the audio-visual and audio-visual-aerotactile modalities compared to auditory and audio-aerotactile presentations of the syllable /pa/. No statistically significant difference was present between audio-aerotactile and auditory-only modalities.

Discussion: Current findings are consistent with past EEG research showing a visually induced amplitude suppression during early auditory processing. In addition, the significant neurophysiological effect of audio-visual but not audio-aerotactile presentation is in line with the large benefit of visual information but comparatively much smaller effect of aerotactile information on auditory speech perception previously identified in behavioral research.

Exploring how speech air flow may impact the spread of airborne diseases

I am participating in an American Association for the Advancement of Science (AAAS) 2022 meeting panel on “Transmission of Airborne Pathogens through Expiratory Activities” on Friday, February 18th, from 6:00 to 6:45 AM Greenwich Mean Time. You can register for the meeting by clicking here. In advance of that meeting, the University of British Columbia asked me some questions in a Q&A exploring how speech air flow may impact the spread of airborne diseases.

The panel itself is hosted by Prof. Bryan Gick of the University of British Columbia. It features individual talks by Dr. Sima Asadi on “Respiratory behavior and aerosol particles in airborne pathogen transmission”, Dr. Nicole M. Bouvier on “Talking about respiratory infectious disease transmission”, and me on “Human airflow while breathing, speaking, and singing with and without masks”.

Dr. Sima Asadi’s talk focuses on the particles emitted during human speech and the efficacy of masks in controlling their outward emission. For this work, Sima received the Zuhair A. Munir Award for the Best Doctoral Dissertation in Engineering from UC Davis in 2021. She is currently a postdoctoral associate in Chemical Engineering at MIT (Cambridge, MA).

Dr. Nicole M. Bouvier is an associate professor of Medicine and Infectious Diseases and Microbiology at the Icahn School of Medicine at Mount Sinai (New York). Nicole discusses how we understand the routes by which respiratory microorganisms, like viruses and bacteria, are transmitted between humans, which is fundamental to how we develop both medical and public health countermeasures to reduce or prevent their spread. However, much of what we think we know is based on evidence that is incomplete at best and clouded by confusing terminology, as the current COVID-19 pandemic has made abundantly clear.

I myself am new to airborne transmission research, coming instead from the perspective that visual and aero-tactile speech help with speech perception, and so masks would naturally interfere with clear communication. They would do this partly by muffling some speech sounds, but mostly by cutting off the perceiver from visual and even tactile speech signals.

However, since my natural interests involve speech air flow, I was ideally suited to move into research studying how these same air flows may be reduced or eliminated by face masks. I conduct this research with a Mechanical Engineering team at the University of Canterbury, and some of their results are featured in my individual presentation. Our most recent publication on Speech air flow with and without face masks was highlighted in previous posts on Maps of Speech, and in a YouTube video found here.

Speech air flow with and without face masks

It took a while due to the absolutely shocking amount of work required for the “Gait change in tongue movement” article, but Natalia Kabaliuk, Luke Longworth, Peiman Pishyar‑Dehkordi, Mark Jermy and I were able to get our article on “Speech air flow with and without face masks” accepted to Scientific Reports (Nature Publishing Group). The article is now out (though a pre-review version had been available since we submitted this article to Sci Rep). You can also watch my YouTube video describing many of the results.

Here is an example of low-stiffness air flow from a porous mask, which allows leaks from the top, bottom, and sides while still preventing forward flow, as shown in Figure 5 of the article.

Figure 5. Audio and Schlieren of speech through a porous face mask (Frame 621, 1st block, CORI Supermask). Image from 88 ms after the release burst for the [kʰ] in “loch”. Note that the k’s puff is smoother and less well defined than the one in Fig. 2, but still has eddies that change air density across the span of the puff. The red-dashed line in the audio waveform indicates the timing of the schlieren frame.

And here is an example of typical higher-stiffness flow from a less porous mask, taken from Figure 8.

Figure 8. Audio and Schlieren of speech with a tightly fitting surgical mask (Frame 334, 1st block, Henry Schlein surgical mask [level 2]). Air slowly flows out above the eyes, floating out and upward continuously. The red-dashed line in the audio waveform indicates the timing of the schlieren frame.

Masks can be made to fit more tightly, as in well-designed KN95/N95 masks and masks with metal strips at the nose to prevent upward-escaping air flow. However, for all of the masks we studied, some of this upward leakage remained. And with that, here is our abstract:

Face masks slow exhaled air flow and sequester exhaled particles. There are many types of face masks on the market today, each having widely varying fits, filtering, and air redirection characteristics. While particle filtration and flow resistance from masks have been well studied, their effects on speech air flow have not. We built a schlieren system and recorded speech air flow with 14 different face masks, comparing it to mask-less speech. All of the face masks reduced air flow from speech, but some allowed air flow features to reach further than 40 cm from a speaker’s lips and nose within a few seconds, and all the face masks allowed some air to escape above the nose. Evidence from the available literature shows that distancing and ventilation in higher-risk indoor environments provide more benefit than wearing a face mask. Our own research shows all the masks we tested provide some additional benefit by restricting air flow from a speaker. However, well-fitted masks specifically designed for the purpose of preventing the spread of disease reduce air flow the most. Future research will study the effects of face masks on speech communication in order to facilitate cost/benefit analysis of mask usage in various environments.

Gait Change in Tongue Movement

Bryan Gick and I recently published an article on “Gait Change in Tongue Movement” in Scientific Reports (Nature Publishing Group). Below is the abstract, with images alongside. If you want an easy-to-follow walkthrough of the paper, I have also posted a video on the Maps of Speech YouTube channel.

During locomotion, humans switch gaits from walking to running, and horses from walking to trotting to cantering to galloping, as they increase their movement rate. It is unknown whether gait change leading to a wider movement rate range is limited to locomotive-type behaviours, or instead is a general property of any rate-varying motor system. The tongue during speech provides a motor system that can address this gap. In controlled speech experiments, using phrases containing complex tongue-movement sequences, we demonstrate distinct gaits in tongue movement at different speech rates. As speakers widen their tongue-front displacement range, they gain access to wider speech-rate ranges.

At the widest displacement ranges, speakers also produce categorically different patterns for their slowest and fastest speech. Speakers with the narrowest tongue-front displacement ranges show one stable speech-gait pattern, and speakers with the widest ranges show two. Critical fluctuation analysis of tongue motion over the time-course of speech revealed that these speakers used greater effort at the beginning of phrases; such end-state-comfort effects indicate speech planning.

Based on these findings, we expect that categorical motion solutions may emerge in any motor system, providing that system with access to wider movement-rate ranges.

Evidence for active control of tongue lateralization in Australian English /l/

Jia Ying, Jason A. Shaw, Christopher Carignan, Michael Proctor, Catherine T. Best, and I just published “Evidence for active control of tongue lateralization in Australian English /l/”. Most research on /l/ articulation has looked at motion timing along the midline, or midsagittal plane. This study compares that information to motion on the sides of the tongue. It focuses on Australian English (AusE), using three-dimensional electromagnetic articulography (3D EMA).

Fig. 11. Temporal dynamics of tongue curvature in the coronal plane over the entire V-/l/ interval. The brackets indicate onset (red) and coda (blue) /l/ intervals. Each bracket extends from the /l/ onset to its peak. For onset /l/s, the peak occurs earlier (at about 200 ms) than coda /l/s (at about 450 ms). A time of zero indicates the vowel onset. The 800-interval window captures the entire V-/l/ articulation in every token.

The articulatory analyses show:

1) Consistent with past work, the timing lag between mid-sagittal tongue tip and tongue body gestures differs for syllable onsets and codas, and for different vowels.

2) The lateral channel is formed by tilting the tongue to the left/right side of the oral cavity, as opposed to curving the tongue within the coronal plane.

3) The timing of lateral channel formation relative to the tongue body gesture is consistent across syllable positions and vowel contexts, even though the temporal lag between tongue tip and tongue body gestures varies.

This last result suggests that lateral channel formation is actively controlled, rather than arising as a passive consequence of tongue stretching. These results are interpreted as evidence that the formation of the lateral channel is a primary articulatory goal of /l/ production in AusE.

Locating de-lateralization in the pathway of sound changes affecting coda /l/

Patrycja Strycharczuk, Jason Shaw, and I just published Locating de-lateralization in the pathway of sound changes affecting coda /l/, in which we analyze New Zealand English /l/ using Ultrasound and Articulometry. You can find the article here. Put in the simplest English terms, the article shows the process by which /l/-sounds in speech can change over time from a light /l/ (like the first /l/ in ‘lull’) to a darker /l/ (like the second /l/ in ‘lull’). This darkening is the result of the upper-back, or dorsum, of the tongue moving closer to the back of the throat. This motion in turn reduces lateralization, or the lowering of the sides of the tongue away from the upper teeth. This is followed, over time, by the tongue tip no longer connecting to the front of the hard palate – the /l/ becomes a back vowel or vocalizes.

Two subcategories identified in the distribution of TT raising for the Vl#C context. Red = vocalized.

If you want a more technical description, here is the abstract:

‘Vocalization’ is a label commonly used to describe an ongoing change in progress affecting coda /l/ in multiple accents of English. The label is directly linked to the loss of consonantal constriction observed in this process, but it also implicitly signals a specific type of change affecting manner of articulation from consonant to vowel, which involves loss of tongue lateralization, the defining property of lateral sounds. In this study, we consider two potential diachronic pathways of change: an abrupt loss of lateralization which follows from the loss of apical constriction, versus slower gradual loss of lateralization that tracks the articulatory changes to the dorsal component of /l/. We present articulatory data from seven speakers of New Zealand English, acquired using a combination of midsagittal and lateral EMA, as well as midsagittal ultrasound. Different stages of sound change are reconstructed through synchronic variation between light, dark, and vocalized /l/, induced by systematic manipulation of the segmental and morphosyntactic environment, and complemented by comparison of different individual articulatory strategies. Our data show a systematic reduction in lateralization that is conditioned by increasing degrees of /l/-darkening and /l/-vocalization. This observation supports the idea of a gradual diachronic shift and the following pathway of change: /l/-darkening, driven by the dorsal gesture, precipitates some loss of lateralization, which is followed by loss of the apical gesture. This pathway indicates that loss of lateralization is an integral component in the changes in manner of articulation of /l/ from consonantal to vocalic.

Native language influence on brass instrument performance

Matthias Heyne, Jalal Al-Tamimi, and I recently published “Native language influence on brass instrument performance: An application of generalized additive mixed models (GAMMs) to midsagittal ultrasound images of the tongue”. The paper contains the bulk of the results from Matthias’ PhD dissertation. The study is huge, with ultrasound tongue recordings of 10 New Zealand English (NZE) and 10 Tongan trombone players. There are 12,256 individual tongue contours of vowel tokens (7,834 for NZE, 4,422 for Tongan) and 7,428 individual tongue contours of sustained note production (3,715 for NZE, 3,713 for Tongan).

Figure 4 in the paper.

The results show that native language influences trombone note production, including both tongue position and note variability. The results also support Dispersion Theory (Liljencrants and Lindblom, 1972; Lindblom, 1986; Al-Tamimi and Ferragne, 2005) in that vowel production is more variable in Tongan, which has few vowels, than in NZE, which has many.
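To make the Dispersion Theory point more concrete, here is a toy sketch of one common way to quantify within-category variability; the paper measures variability over ultrasound tongue contours with GAMMs, whereas this illustration uses invented (F1, F2) formant values purely to show the idea:

```python
# Illustrative sketch: quantify within-category variability of vowel tokens
# as the mean distance of each token from its own vowel's (F1, F2) centroid.
# The formant values are invented, not data from the study.
import numpy as np

def within_category_variability(tokens_by_vowel):
    """Mean Euclidean distance (Hz) of tokens from their vowel's centroid."""
    dists = []
    for tokens in tokens_by_vowel.values():
        tokens = np.asarray(tokens, dtype=float)
        centroid = tokens.mean(axis=0)
        dists.extend(np.linalg.norm(tokens - centroid, axis=1))
    return float(np.mean(dists))

# Hypothetical tokens: a small-inventory language can "afford" looser targets.
small_inventory = {"i": [(300, 2200), (340, 2050), (320, 2300)],
                   "a": [(700, 1200), (760, 1100), (720, 1300)]}
large_inventory = {"i": [(300, 2200), (310, 2180), (305, 2220)],
                   "e": [(450, 1900), (455, 1880), (460, 1910)]}

print("small inventory:", round(within_category_variability(small_inventory), 1), "Hz")
print("large inventory:", round(within_category_variability(large_inventory), 1), "Hz")
```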

The results also show that note production at the back of the tongue maps to low-back vowel production (schwa and the ‘lot’ vowel for NZE, /o/ and /u/ for Tongan). These two sets of results support an analysis of local optimization with semi-independent tongue regions (Ganesh et al., 2010; Loeb, 2012).

The results do not, however, support the traditional brass pedagogy hypothesis that higher notes are played with a closer (higher) tongue position. That said, Matthias is currently working with MRI data that *does* support the brass pedagogy hypothesis; we may not have seen this effect because of the transducer stabilization system needed to keep the ultrasound probe aligned to the participant’s head.

Liljencrants, Johan, and Björn Lindblom. 1972. “Numerical Simulation of Vowel Quality Systems: The Role of Perceptual Contrast.” Language, 839–62.

Lindblom, Björn. 1963. Spectrographic study of vowel reduction. The Journal of the Acoustical Society of America 35(11): 1773–1781.

Al-Tamimi, J., and Ferragne, E. 2005. “Does vowel space size depend on language vowel inventories? Evidence from two Arabic dialects and French,” in Proceedings of the Ninth European Conference on Speech Communication and Technology, Lisbon, 2465–2468.

Ganesh, Gowrishankar, Masahiko Haruno, Mitsuo Kawato, and Etienne Burdet. 2010. “Motor Memory and Local Minimization of Error and Effort, Not Global Optimization, Determine Motor Behavior.” Journal of Neurophysiology 104 (1): 382–90.

Loeb, Gerald E. 2012. “Optimal Isn’t Good Enough.” Biological Cybernetics 106 (11–12): 757–65.

Tri-modal speech: Audio-visual-tactile integration in speech perception

Figure 3 in the paper.

Doreen Hansmann, Catherine Theys, and I just published our article on “Tri-modal Speech: Audio-visual-tactile Integration in Speech Perception” in the Journal of the Acoustical Society of America. This paper was also presented as a poster at the American Speech-Language-Hearing Association (ASHA) Annual Convention in Orlando, Florida, November 21-22, 2019, where it won a meritorious poster award.

TL;DR: People use auditory, visual, and tactile speech information to accurately identify syllables in noise. Auditory speech information is the most important, then visual information, and lastly aero-tactile information – but we can use them all at once.

Abstract

Speech perception is a multi-sensory experience. Visual information enhances (Sumby and Pollack, 1954) and interferes (McGurk and MacDonald, 1976) with speech perception. Similarly, tactile information, transmitted by puffs of air arriving at the skin and aligned with speech audio, alters (Gick and Derrick, 2009) auditory speech perception in noise. It has also been shown that aero-tactile information influences visual speech perception when an auditory signal is absent (Derrick, Bicevskis, and Gick, 2019a). However, researchers have not yet identified the combined influence of aero-tactile, visual, and auditory information on speech perception. The effects of matching and mismatching visual and tactile speech on two-way forced-choice auditory syllable-in-noise classification tasks were tested. The results showed that both visual and tactile information altered the signal-to-noise threshold for accurate identification of auditory signals. Similar to previous studies, the visual component has a strong influence on auditory syllable-in-noise identification, as evidenced by a 28.04 dB improvement in SNR between matching and mismatching visual stimulus presentations. In comparison, the tactile component had a small influence resulting in a 1.58 dB SNR match-mismatch range. The effects of both the audio and tactile information were shown to be additive.

Derrick, D., Bicevskis, K., and Gick, B. (2019a). “Visual-tactile speech perception and the autism quotient,” Frontiers in Communication – Language Sciences 3(61), 1–11, doi: http://dx.doi.org/10.3389/fcomm.2018.00061

Gick, B., and Derrick, D. (2009). “Aero-tactile integration in speech perception,” Nature 462, 502–504, doi: https://doi.org/10.1038/nature08572.

McGurk, H., and MacDonald, J. (1976). “Hearing lips and seeing voices,” Nature 264, 746–748, doi: https://doi.org/10.1038/264746a0
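As a rough illustration of where numbers like the 28.04 dB and 1.58 dB ranges above can come from, here is a minimal sketch of estimating an SNR threshold from two-alternative forced-choice accuracy data by fitting a logistic psychometric function with a 50% guessing floor; the data points are invented and this is not necessarily the authors’ exact analysis:

```python
# Rough sketch: fit a logistic psychometric function with a 50% guessing floor
# to 2AFC accuracy data and read off the SNR giving 75% correct.
# SNR levels and accuracies below are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, midpoint, slope):
    """2AFC accuracy: 0.5 guessing floor rising toward 1.0 as SNR increases."""
    return 0.5 + 0.5 / (1.0 + np.exp(-(snr - midpoint) / slope))

snr_db = np.array([-20.0, -14.0, -8.0, -2.0])
accuracy = np.array([0.55, 0.70, 0.90, 0.98])  # hypothetical proportions correct

(midpoint, slope), _ = curve_fit(psychometric, snr_db, accuracy, p0=[-12.0, 3.0])
print(f"75%-correct threshold is about {midpoint:.1f} dB SNR")
# Comparing such thresholds between matching and mismatching visual (or tactile)
# conditions gives an SNR benefit in dB, analogous to the ranges reported above.
```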

“Articulatory Phonetics” Resources

Back in 2013, Bryan Gick, Ian Wilson, and I published a textbook on “Articulatory Phonetics”. The book contains many assignments at the end of each chapter and in supplementary resources. After many years of using those assignments, Bryan, Ian, and many colleagues found that they needed some serious updating.

These updates have been completed, and are available here. The link includes recommended lab tools to use while teaching from this book, as well as links and external resources.

P.S. Don’t get too excited, students – I’m not posting the answers here or anywhere else 😉