Matthias Heyne – Language and Music

Today I want to highlight the work of my former PhD student and always-colleague, Matthias Heyne.  Matthias is currently a Postdoctoral Research Associate in the Department of Speech, Language and Hearing Sciences at Boston University (more specifically, the Speech Neuroscience Laboratory, PI Prof. Frank Guenther).

Matthias has done amazing research into the relationship between native language and trombone playing style. To quote him, Matthias’ “research explores the relationship of referential and non-referential forms of communication, such as language and (instrumental) music, respectively.”

Matthias and I have published an overview of research using visualization techniques to study how people play brass instruments.  In addition, Matthias and I helped improve the way we analyze tongue contour shapes, and most recently, Matthias Heyne, Xuan Wang, myself, Kieran Dorreen, and Kevin Watson published an article demonstrating that /r/ production in non-rhotic New Zealand English follows many of the patterns found in rhotic North American English.

Over the next year you can expect many more publications from Matthias, demonstrating the relationship between the acoustics and articulation of Tongan and English vowels and tongue position during steady-state trombone notes.  Expect research using diffusion MRI to follow, as Matthias adds brain imaging research to his repertoire.

Matthias is an excellent new researcher, and I expect great things from him throughout a long career.  I am very proud to have had him as a PhD student, and to continue working and publishing with him.

3D-printable ultrasound probe stabilizer for speech research

Christopher Carignan, Wei-rong Chen, Muawiyath Shujau, Catherine T. Best, and I recently published an article about our new 3D-printable ultrasound transducer stabilizer (probe holder).

Ultrasound tongue imaging of speech requires the imaging probe to remain stable throughout data collection. Previous solutions to this stabilization problem have often been too cumbersome and/or expensive for widespread use. Our solution improves upon previous designs in both functionality and comfort, while also representing the first free and open-source 3D-printable headset for both academic and clinical applications of ultrasound tongue imaging.

The non-metallic design permits the simultaneous collection of ultrasound and electromagnetic articulometry (EMA) data. For clinicians, the headset eliminates the need to hold the imaging probe manually, allowing them to interact with patients in an unencumbered way.

The printable materials we provided work for midsagittal imaging of the tongue using a few select ultrasound transducers, such as the Logiq E 8C-RS and the Telemed transducers used with Articulate Instruments systems, but they can be modified easily to accommodate other probes, or coronal tongue imaging.

The system costs from US$200 (for a 100 micron print) to US$600 (for a 20 micron print) in materials to produce, making it quite affordable.  It is also very comfortable compared to most stabilization systems, and is accurate to within about 2 mm of motion in any direction and 2 degrees of rotation in any direction.  More details can be found in the article documenting the system.

Here is an image of the system, fully assembled and worn:

Transducer stabilizer


The articulation of /ɹ/ in New Zealand English

Matthias Heyne, Xuan Wang, myself (Donald Derrick), Kieran Dorreen, and Kevin Watson have recently published an article documenting the articulation of /ɹ/ in New Zealand English.

This work is in part a follow-up to some of my co-authored research into biomechanical modelling of English /ɹ/ variants, which indicated that vocalic context influences variation through muscle stress, strain, and displacement.  By these three measures, it is “easier” to move from /i/ to a tip-down /ɹ/, but easier to move from /a/ to a tip-up /ɹ/.

In this study, speakers who vary at all (some produce only tip-up or only tip-down /ɹ/) are most likely to produce tip-up /ɹ/ under these conditions:

back vowel > low central vowel > high front vowel

initial /ɹ/ > intervocalic /ɹ/ > following a coronal (“dr”) > following a velar (“cr”)

The results show that allophonic variation of NZE /ɹ/ is similar to that in American English, indicating that the variation is caused by similar constraints.  The results support theories of locally optimized modular speech motor control, and a mechanical model of rhotic variation.

The abstract is repeated below, with links to articles contained within:

This paper investigates the articulation of approximant /ɹ/ in New Zealand English (NZE), and tests whether the patterns documented for rhotic varieties of English hold in a non-rhotic dialect. Midsagittal ultrasound data for 62 speakers producing 13 tokens of /ɹ/ in various phonetic environments were categorized according to the taxonomy by Delattre & Freeman (1968), and semi-automatically traced and quantified using the AAA software (Articulate Instruments Ltd. 2012) and a Modified Curvature Index (MCI; Dawson, Tiede & Whalen 2016). Twenty-five NZE speakers produced tip-down /ɹ/ exclusively, 12 tip-up /ɹ/ exclusively, and 25 produced both, partially depending on context. Those speakers who produced both variants used the most tip-down /ɹ/ in front vowel contexts, the most tip-up /ɹ/ in back vowel contexts, and varying rates in low central vowel contexts. The NZE speakers produced tip-up /ɹ/ most often in word-initial position, followed by intervocalic, then coronal, and least often in velar contexts. The results indicate that the allophonic variation patterns of /ɹ/ in NZE are similar to those of American English (Mielke, Baker & Archangeli 2010, 2016). We show that MCI values can be used to facilitate /ɹ/ gesture classification; linear mixed-effects models fit on the MCI values of manually categorized tongue contours show significant differences between all but two of Delattre & Freeman’s (1968) tongue types. Overall, the results support theories of modular speech motor control with articulation strategies evolving from local rather than global optimization processes, and a mechanical model of rhotic variation (see Stavness et al. 2012).
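For readers curious what the mixed-effects part of that analysis looks like in practice, here is a minimal R sketch using lme4. The data frame and column names are hypothetical, and the published models include more structure than shown here; this just illustrates fitting MCI values with tongue type as a fixed effect and speaker as a random intercept.

    library(lme4)

    # Hypothetical input: one row per traced tongue contour, with its MCI value,
    # the manually assigned Delattre & Freeman tongue type, and the speaker ID.
    mci_df <- read.csv("mci_values.csv", stringsAsFactors = FALSE)
    mci_df$tongue_type <- factor(mci_df$tongue_type)

    # Fit MCI as a function of tongue type, with a random intercept per speaker.
    model <- lmer(MCI ~ tongue_type + (1 | speaker), data = mci_df)
    summary(model)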

Trip to Taiwan: Talks and conference

I spent October 14 to October 23, 2017 in Taiwan, giving many talks.  My first was a talk at National Taiwan University on Monday the 16th.  There I spoke about commercializing research.

On Wednesday the 18th, I went to the Institute of Linguistics at Academia Sinica and spoke on aero-tactile integration in speech perception.

Lastly, on the weekend of the 21st and 22nd, I spoke at a workshop at National Tsing Hua University (my hosts) on ultrasound and EMA research.

If you want copies of the talks, send an email to my work address.  Apologies for the hassle: they are all too large to post to this website.

Ultrasound/EMA guide

This is a guide to the use of ultrasound and EMA in combination.  It is a bit out of date, and probably needs a day or two of work to make fully correct, but it describes the techniques I use with three researchers.  Of course, I wrote this years ago, and now I can run an ultrasound/EMA experiment by myself if I need to.

Ultrasound Video and Microphone Audio Capture

This is a simple set of one-line scripts for capturing ultrasound audio and video.

I built them to run as batch files from the Windows command line because that is the OS that seems to give me the highest frame rate. (I use Macs, and this works with Windows booted via Boot Camp.)
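In case it helps to see the general shape of such a capture command, here is a minimal sketch, written in R only for consistency with the other scripts on this site; the actual download is a set of Windows batch files and may use different tools and options. The device names, frame rate, codecs, and output name below are placeholders for an assumed FFmpeg/DirectShow setup.

    # Assumed FFmpeg/DirectShow capture; list your own devices first with:
    #   ffmpeg -list_devices true -f dshow -i dummy
    capture_cmd <- paste(
      "ffmpeg -f dshow -framerate 60",
      '-i video="Ultrasound capture device":audio="Microphone (USB Audio)"',
      "-c:v libx264 -preset ultrafast -crf 18 -c:a aac session01.mkv"
    )
    system(capture_cmd)  # run the one-line capture command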

Look at the README file to make sure you use the scripts properly.

As always, contact me if you have issues.

Crop and Segment Video

Here I offer you a program that will scan through all of the PRAAT textgrids in a folder, and for each it will search for the named textgrid tier.  Then it will loop through each segment in that tier, find the ones with text in them, and cut clips from a video with the exact same base-name based on those time stamps.  Each video will be cropped to the region given in the cropping variable (currently set for the Logiq E ultrasound).

The program uses R as a wrapper: it loads the PRAAT textgrid files, converts them to CSV files using the PERL program textgrid2csv.pl (copyright Theo Veenker <T.J.G.Veenker@uu.nl>), and then works with that data in R.

Therefore: 1) You have to extract audio from the video file you want to crop and segment, and transcribe and label that video in a PRAAT textgrid to the detail you want to use for each cropped video file (usually a word or phrase). 2) Go into the code, and change all the variables at the top according to your needs.

Lots of work, but this program will still save you heaps of time.  It is especially useful if you are using AAA for ultrasound analysis but only have video instead of AAA’s proprietary ultrasound file storage format.
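To give a sense of the core loop, here is a minimal R sketch. It assumes the textgrid intervals have already been converted to a CSV with file, label, start, and end columns (in the real program this conversion is done via textgrid2csv.pl), and it assumes an FFmpeg-based cut-and-crop step; the crop region, paths, and extensions are placeholders, not the values the actual program uses.

    # Hypothetical CSV layout: one row per textgrid interval on the chosen tier.
    segs <- read.csv("segments.csv", stringsAsFactors = FALSE)  # columns: file, label, start, end
    segs <- segs[segs$label != "", ]          # keep only intervals with text in them
    crop <- "crop=720:480:100:60"             # placeholder region (width:height:x:y)

    for (i in seq_len(nrow(segs))) {
      in_video  <- paste0(segs$file[i], ".mp4")
      out_video <- paste0(segs$file[i], "_", i, "_", segs$label[i], ".mp4")
      cmd <- sprintf('ffmpeg -i "%s" -ss %f -to %f -vf "%s" "%s"',
                     in_video, segs$start[i], segs$end[i], crop, out_video)
      system(cmd)                             # cut and crop one clip per labelled interval
    }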

Note, I provide sample data in the zip file to test the program – a swallow used for a palate trace.  Get the program to run with the sample before you modify it for your own purposes.

Aligning Audio and Video

Dealing with video files is just about the most obnoxious experience a researcher can have.  I wasted a *year* of research getting this one wrong before I realized that the only, and I do mean only, effective solution involves FFMPEG.  Here I offer you a program that will re-align every video held in one directory for which you have alignment data.

The program uses R as a wrapper to load a .csv file that contains the meta-data on a directory of video files that you want to align. 

Therefore: 1) You have to hand-check the audio-visual offsets for each file, and put them into the .csv file. 2) You also have to make sure you have installed FFMPEG, SOX, PERL, R, and the R modules “reader” and “gdata”. 3) You have to look inside my R code and change the paths and extensions so that the program will work on your computer.
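As a rough illustration of the re-alignment step, here is a minimal R sketch. The CSV layout and file extensions are hypothetical, and the real program does more (path handling, SOX audio steps); the key idea is using FFmpeg’s -itsoffset to shift the audio stream relative to the video.

    # Hypothetical CSV: one row per video, with its hand-checked audio offset in seconds.
    offsets <- read.csv("offsets.csv", stringsAsFactors = FALSE)  # columns: file, offset_sec

    for (i in seq_len(nrow(offsets))) {
      f   <- offsets$file[i]
      cmd <- sprintf('ffmpeg -i "%s.mp4" -itsoffset %f -i "%s.mp4" -map 0:v -map 1:a -c copy "%s_aligned.mp4"',
                     f, offsets$offset_sec[i], f, f)
      system(cmd)  # shifts the audio from the second (offset) input relative to the video
    }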

I provide a sample video with a swallow used to obtain a palate trace.  Get the software to work on your machine with this sample before modifying the code for your project.

‘Air Puffs’ – RNZ broadcast

Some time ago, I was on Radio New Zealand discussing my research on the use of air flow to enhance speech perception. Alas, it did not have the commercial value we thought it would, because enhancing perception of continuous speech requires more airflow than is feasible. However, it has since led to the development of a mask-less and plate-less air flow estimation system that works well. The system provides useful biofeedback information that has the potential to help with speech therapy and accent modification.

Phoneme Quality

I rewrote a PRAAT script – shamelessly edited from Mietta’s amazing original – modified to work well on both Mac and PC.  The script opens all the WAVE or AIFF files and matching textgrids, and takes a look at the relevant tier (defaults to 3) to extract duration, f0 (pitch), F1, F2, F3, and spectral centre of gravity (CoG).  The PRAAT script and readme file are located here.

Mietta Lennes’s scripts, for those who don’t know, seem to be on GitHub these days.