I originally studied Anthropology, graduating with a BA in 1995 and an MA in 1997. I then exited, stage left, to become a computer systems analyst in the United States, working first in Lakeland, Florida (1998–1999), and then in Boston, Massachusetts (1999–2002).

In 2002 I received a strong calling to begin a PhD in Linguistics, and shortly afterward began my studies at the University of British Columbia. During my time there, I pursued two research themes. One concerned air flow in both speech production and speech perception, work that led to co-authored papers in JASA and Nature, both in 2009.

The other theme was the focus of my dissertation: ultrasound imaging of the tongue during the production of flaps, taps, and rhotics. In that research, I attempted to explain subphonemic speech variability by taking into account a wide range of factors, including potential upcoming articulatory conflicts, a speaker’s motor skills, and constant physical effects such as gravity and elasticity.

In 2010, I took up computational modelling and editing of linguistic theory as the designer and architect of TreeForm, a syntax-tree drawing software package. TreeForm support, information, and publications can be found on my TreeForm page.

After completing my PhD in 2011, I took up a postdoctoral fellowship spanning Western Sydney University’s MARCS Institute in Australia and the University of Canterbury’s New Zealand Institute of Language, Brain, and Behaviour in New Zealand.

In 2012, I obtained a Marsden grant from the Royal Society of New Zealand: Saving energy vs making yourself understood during speech production. The project combined ultrasound and articulometry (EMA) to investigate the influence of speech rate on categorical variation in speech. One result was the continuous prototyping of non-metallic ultrasound probe holders. The current version can be 3D printed, and provides excellent stability of the probe against the head while allowing joint ultrasound/EMA data collection.

In 2012, I also obtained a Ministry of Business, Innovation and Employment – Science and Innovation Group grant entitled Aero-tactile enhancement of speech perception, jointly with Jen Hay, Scott Lloyd, and Greg O’Beirne. This commercial grant focused on foundational research leading to improved audio devices.

In 2013, Bryan Gick, Ian Wilson, and I published a textbook entitled Articulatory Phonetics. The book introduces speech science and articulatory phonetics to upper-level undergraduates and new graduate students in Linguistics and Speech Science. By presenting a unified path through these topics, Articulatory Phonetics aims to make teaching these courses easier and more effective.

In 2015, Tom De Rybel and I patented our first invention, a dynamic piezoelectric air flow system to enhance speech perception. In 2017, we patented our second invention, a mask-less and plate-less system for measuring oral and nasal air flow, nasalance, and audio. This second system is potentially suitable for biofeedback in speech therapy and accent modification.

I am now moving into the study of sensory scales in speech, adding speech production and perception scales to the well-known “sonority scale”. I believe that doing so will help us understand what appear to be cross-linguistic variations in how the sonority scale relates to phonotactics and syllable structure. Many of these apparent variations, I believe, instead reflect the overlay of many different signal streams that capture the dynamics of multi-sensory speech.