(Trying) to use iZotope RX 11 Elements

Here is my experience of trying to use iZotope RX 11 Elements for research:

1) I went to the website to buy iZotope RX 11 Elements, which has (I think) the tools I need: audio denoising, dereverb, and declick.
2) I had to send them proof that I was an educator to get a 50% discount.
3) After three days, they acknowledged it and I could buy the product.
4) To actually access the product, I was sent to use their product manager.
5) This app-wrapped website forced me to create a login through Google.
6) I signed in with my Google email, only to be told I needed to use my “passkey”, which never appears ANYWHERE on my computer or phone.
7) I could hit “try another way” and get to a screen that let me use my passkey (what?) or my password. OK, I have a password, so I tried that.
8) I could type in my password (could I use my password manager? No. That was disabled!).
9) The password field did not let me see what I typed, and there was no option to do so.
10) I typed in my password. I was still forced to use “2-step verification”.
11) But I could tap “yes” on my phone instead of using a passkey that doesn’t exist.
12) I tapped “yes” on my phone. And then I was redirected to the start screen!
13) I reopened the app and clicked “login with Google” again …
14) This time, it brought me to the Elements page.
15) I clicked “authorize”, and it crashed to desktop.
16) I then tried to re-open the app, and it allowed me to “authorize”, and then it opened their website where … I … could … purchase … it?
17) I contacted Native Instruments support.
18) Support told me to use a new tool, Native Access 2, which they did NOT put face-forward on their website, to install RX 11 Elements.
19) I clicked their link to Native Access 2: 404, file not found.
20) I found a direct download instead.
21) I opened the link to download, and it began to download the Native Instruments software, which I could use to download RX 11 Elements.
22) I could not find any application in my Applications folder to run Elements.
23) I went online and found out this program isn’t even an application, but a set of plugins that works with other (unnamed) products.
24) This information was not at all obvious from the website, to put it mildly.
25) I contacted support again.
26) They sent a new and better link to Native Access 2.
27) The install process was the same as the old one, complete with a request for a “passkey” that never pops up and never tells you where it is meant to be. I did steps 6-12 again, minus the crash to desktop.
28) I went to the library and reinstalled RX 11 Elements.
29) I downloaded the DaVinci Resolve video editing tool, which I found out on my own is meant to support RX 11 Elements as plugins.
30) I opened DaVinci Resolve, tried to use the plugins, and it crashed to desktop.
31) This crash happened with both the VST and AU versions.
32) I contacted support again.
33) They asked me to download and use a Native Instruments tool to collect bug information.
34) I downloaded and installed the tool.
35) I got a bug report as a zip file.
36) I tried to email it to support as requested.
37) The email bounced: Native Instruments does not allow the emailing of zip files.
38) I emailed support again to inform them of the issue.
39) They had me put the zip file on a Google Drive and send them a link that they were allowed to access.
40) They told me the original install had been incomplete, and told me to try again.
41) I found and used their uninstall tool to delete the product portal.
42) I used their uninstall tool to delete the RX 11 Audio Editor.
43) I then reinstalled RX 11 Elements.
44) I then reopened DaVinci Resolve 20.
45) Same error.
46) I contacted support again.
47) Support replied with a list of compatible systems. This list does NOT include the current macOS, Tahoe.
48) I’m thoroughly (censored) until they support the current macOS, which was released September 15, 2025.
49) I contacted support to ask when Tahoe would be supported.
50) Ten days later: a message from iZotope.

We’d love to hear what you think of our customer service. Please take a moment to answer one simple question by clicking either link below:

How would you rate the support you received?

Good, I’m satisfied
Bad, I’m unsatisfied

My reply:

Bad, I’m unsatisfied.

The issue was not resolved.

You still have not updated iZotope to work with a many-months-old OS from Apple (Tahoe). I am quite certain the reason for these issues has to do with your DRM system, as every other piece of software I own transitioned seamlessly from the previous OS to Tahoe; your DRM is the only difference I can imagine.

51) I waited until January 9, 2026, when iZotope RX 11 Elements was made compatible with the new macOS.
52) I reinstalled it.
53) It failed.
54) I reinstalled DaVinci Resolve 20.
55) It worked.
56) Task completed in 6 weeks flat.

Do not use LLM-based AI to select reviewers!

This is an open letter to all my fellow journal editors out there. You can, and should, use deterministic artificial intelligence (improved versions of the kind we’ve used for decades in spell-checking applications) to help you find reviewers when you are overwhelmed by the range of possible candidates. These tools will tell you what various potential reviewers have published, and give you reasonably solid recommendations for reviewers that you can easily accept or reject based on sound and accurate editorial guidelines.

This is NOT true for Large Language Model (LLM) based artificial intelligence. LLMs will recommend reviewers who are not competent to review the articles in question. In a particularly egregious case, I was asked to review an article on the effectiveness of cancer treatments. I am completely incompetent to review such an article, as I have not, at any time in my life, studied how to assess the efficacy of cancer treatments. But the LLM sees my clinical and COVID-related work and guesses that I am competent to do so!

The editor should have caught that, but might not even have been given enough information to catch the error, and might not have had time to do their own vetting after the recommendation. This kind of error tells me that no one should be using LLMs for this purpose. The risk to science itself is too high. Don’t do it!

Donald Derrick

A phonetic description of Káínai Blackfoot

Natalie Weber (first author) and I recently published “A phonetic description of Káínai Blackfoot” in Language Documentation & Conservation. The article details the phonotactics, syllable structure, and prosody of Blackfoot as spoken by Tootsinam, a speaker of the “new dialect” of Káínai’powahsin (the Blood Nation).

This article was a true Odyssey: it took 7230 days from first audio recording to final publication, or 19.79 years. That is the approximate length of Odysseus’ journey away from home, including both the Trojan War (Iliad) and the voyage home (Odyssey). The process concluded with serial submissions to two journals spanning 5 years. The end of the process with the first journal was, speaking for myself, easily the worst year of my life. I will be forever grateful to the good and kind people of Language Documentation & Conservation and the National Foreign Language Resource Center at the University of Hawaiʻi at Mānoa in Honolulu.
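For the arithmetic-minded, here is a minimal Python check of those figures, assuming the span runs from the first recording (October 16, 2005, see the timeline below) to publication (August 1, 2025) and counts both endpoints inclusively:

from datetime import date

# Days from the first recording of Tootsinam to the day of publication,
# counting both the first and the last day.
days = (date(2025, 8, 1) - date(2005, 10, 16)).days + 1
print(days)                     # 7230
print(round(days / 365.25, 2))  # 19.79 average years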

However, the glory of completing this article is immense. We dedicate this article to the memories of Tootsinam (Beatrice Bullshields), our Blackfoot consultant who provided the recordings used in this description; to Abigail Scott, who worked on Blackfoot research with us; and to Donald G. Frantz, who authored so much excellent work on Blackfoot and who was personally helpful to both of us. You will all be missed!


Violin plots of duration in milliseconds for types of Blackfoot [s]. Top numbers = token counts. Bottom numbers = mean (standard deviation) in milliseconds. (Figure 3, Weber & Derrick, 2025).

The abstract is as follows: “This paper presents the Blackfoot (Algonquian) phonetic system from data provided by Tootsinam (Beatrice Bullshields, 1945–2015), a native speaker of Káínai’powahsin, the Blackfoot dialect associated with the Blood Nation.
There are relatively few phonetic studies of underdocumented languages, and Blackfoot is no exception. We fill this gap by providing a general articulatory description of the segmental, prosodic, and suprasegmental properties of the language, with an aim to provide a starting point for future targeted studies. Blackfoot is an interesting case study because many of the basic phonetic and phonological facts of the language are still highly contested, and because there are several typologically distinctive characteristics compared to well-documented languages, such as the unusual distribution of /s/. Within each section, we summarize all previous research on Blackfoot up to this point and explain which properties are well understood and which require further research. We also present some novel observations of Tootsinam’s speech that differ from existing documentation, including the distribution of short centralized vowels outside of closed syllables, and an allophonic falling tone on word-final stressed syllables.”


Phonemic long and short Blackfoot vowels (based on 1089 vowel tokens: [i] = 303, [iː] = 144, [ɛː] = 65, [a] = 157, [aː] = 184, [ɔː] = 32, [o] = 145, [oː] = 59; 1.5-standard-deviation outlines shown for clarity). Solid outlines are phonemic long vowels and dashed/dotted outlines are phonemic short vowels (Figure 13, Weber & Derrick, 2025).
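For readers who want to draw similar outlines from their own formant data, below is a minimal Python sketch of a 1.5-standard-deviation ellipse in F2/F1 space. The tokens here are simulated for illustration (they are not the paper’s 1089 tokens), and the paper’s exact plotting method may differ.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

rng = np.random.default_rng(0)
# Hypothetical vowel tokens: (F2, F1) pairs in Hz, standing in for one category.
tokens = rng.multivariate_normal([1900, 400], [[8000, 0], [0, 1500]], size=144)

def sd_ellipse(xy, n_sd=1.5, **kwargs):
    # Ellipse centred on the mean, with semi-axes n_sd standard deviations long.
    cov = np.cov(xy, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)                 # eigenvalues in ascending order
    angle = np.degrees(np.arctan2(vecs[1, -1], vecs[0, -1]))
    width, height = 2 * n_sd * np.sqrt(vals[::-1])   # major axis first, then minor
    return Ellipse(xy.mean(axis=0), width, height, angle=angle, fill=False, **kwargs)

fig, ax = plt.subplots()
ax.scatter(tokens[:, 0], tokens[:, 1], s=5)
ax.add_patch(sd_ellipse(tokens, linestyle="solid"))  # solid outline = long vowel
ax.invert_xaxis(); ax.invert_yaxis()                 # conventional vowel-space orientation
ax.set_xlabel("F2 (Hz)"); ax.set_ylabel("F1 (Hz)")
plt.show()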

For those who are curious, here is a brief point-form timeline of this article:

October 16, 2005: First recording of Tootsinam (Beatrice Bullshields).
March 28, 2006: Last recording of Tootsinam (Beatrice Bullshields).
2007–2013: Early drafts of the Blackfoot phonetic description.
2014–2020: Natalie Weber joined the effort (and in the end well and truly earned first authorship!).
November 16, 2020: Submitted phonetic description to first journal.
December 1, 2020: Initial reformatting complete.
December 2020: Received a revise & resubmit (R&R).
February 14, 2021: Submitted first revision.
August 2022: Received second R&R.
February 2023: Submitted second revision.
July 2023: Rejected.
September 29, 2023: Submitted revised version to Language Documentation & Conservation (LD&C).
February 7, 2024: Submitted first revision.
August 17, 2024: Article accepted.
August 1, 2025: Article published.

Human Aeroecology

Bryan Gick (co-first author), Mark Jermy, and I recently published “Human Aeroecology” in Frontiers in Ecology and Evolution. To quote: “Airspace has been recognized as habitat for at least a decade (Diehl, 2013). However, the ecology of airspace has generally been defined with respect to airborne lifeforms such as birds and insects (e.g., Chilson et al., 2017). Humans are as much creatures of the air as lifeforms that walk the ocean floor are creatures of the sea. Yet, little is understood about the full scope of human interaction with the airspace, much of which is normally invisible and intangible. Topics relating to human aeroecology have long remained isolated at the periphery of many disparate fields.”

“Here we identify five broad areas within human aeroecology that researchers have developed over the past years, and which we argue would benefit from focused collaboration. These include but are not limited to: Airscape Design; Air Quality for Comfort, Health, Education and Productivity (Air Quality for CHEaP); Shared Airspaces for Social Connection; Auditory, Aerotactile, Olfactory, and Visual Communication; and Pathogen Transmission, as seen in Figure 1.”

Uniformity in speech: The economy of reuse and adaptation across contexts

Connor Mayer, Bryan Gick, and I recently published “Uniformity in speech: The economy of reuse and adaptation across contexts” in Glossa. This article compares how Kiwis and North Americans produce flap sequences such as “editor” in North America or “added a” in New Zealand. Kiwis produce these similarly during slow and fast speech, whereas North Americans often have two different methods for slow and fast speech. We show that this difference likely stems from the extreme variability built into the “r”s of rhotic dialects of English, which reaches flaps through the reuse and adaptation of motor “chunks”.

To illustrate our claim: in the image below, showing tongue-tip frontness for the second vowel in three-vowel sequences in words like “editor”, you can see that for faster speech (6–7 syllables/second) there is a jump where the tongue tip is not nearly as fronted, but only for North American English (NAE) non-rhotic vowels, not for New Zealand English (NZE) vowels or NAE rhotic vowels. Here the high variability intrinsic to NAE rhotic vowels (and commonly seen in other contexts) is visible in adjacent NAE non-rhotic vowels, but NZE has no access to rhotic vowels at all, so its non-rhotic vowels have no source of such motor-control variability, even though that variability would provide mechanical advantage.

The abstract for this article, which explains all of this in more technical but also more accurate terms, is below:

“North American English (NAE) flaps/taps and rhotic vowels have been shown to exhibit extreme variability that can be categorized into subphonemic variants. This variability provides known mechanical benefits in NAE speech production. However, we also know languages reuse gestures for maximum efficiency during speech production; this uniformity of behavior reduces gestural variability. Here we test two conflicting hypotheses: Under a uniformity hypothesis in which extreme variability is inherent to rhotic vowels only, that variability can still transfer to flaps/taps and non-rhotic vowels due to adaptation across similar speech contexts. But because of the underlying reliance on extreme variability from rhotic vowels, this uniformity hypothesis does not predict extreme variability in flaps/taps within non-rhotic English dialects. Under a mechanical hypothesis in which extreme variability is inherent to all segments where it would provide mechanical advantage, including flaps/taps, such variability would appear across all English dialects with flaps/taps, affecting adjacent non-rhotic vowels through coarticulation whenever doing so would provide mechanical advantage. We test these two hypotheses by comparing speech-rate-varying NAE sequences with and without rhotic vowels to sequences from New Zealand English (NZE), which has flaps/taps, but no rhotic vowels at all. We find that NZE speakers all use similar tongue-tip motion patterns for flaps/taps across both slow and fast speech, unlike NAE speakers who sometimes use two different stable patterns, one for slow and another for fast speech. Results show extreme variability is not inherent to flaps/taps across English dialects, supporting the uniformity hypothesis.”

Hearing, seeing, and feeling speech: the neurophysiological correlates of trimodal speech perception

Doreen Hansmann, Catherine Theys, and I recently published a partially null-result article on the neurophysiological correlates of trimodal speech in Frontiers in Human Neuroscience: Hearing: Speech and Language. The short form is that while we saw behavioural differences showing integration of audio, visual, and tactile speech in closed-choice experiments, we could not extend that result to show an influence of tactile speech on brain activity; the effect is just too small:

Figure 3. Accuracy data for the syllable /pa/ in the auditory-only (A), audio-visual (AV), audio-tactile (AT), and audio-visual-tactile (AVT) conditions at each SNR level (–8, –14, –20 dB). Error bars are 95% binomial confidence intervals.
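For anyone reproducing error bars like these, here is a minimal Python sketch of a 95% binomial confidence interval, using the Wilson method as one reasonable choice; the counts below are hypothetical, and the paper’s exact interval method may differ.

from statsmodels.stats.proportion import proportion_confint

# Hypothetical: 164 correct identifications out of 200 trials in one condition.
correct, trials = 164, 200
low, high = proportion_confint(correct, trials, alpha=0.05, method="wilson")
print(f"accuracy = {correct / trials:.2f}, 95% CI = [{low:.2f}, {high:.2f}]")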

The abstract for this article is below:

Introduction: To perceive speech, our brains process information from different sensory modalities. Previous electroencephalography (EEG) research has established that audio-visual information provides an advantage compared to auditory-only information during early auditory processing. In addition, behavioral research showed that auditory speech perception is not only enhanced by visual information but also by tactile information, transmitted by puffs of air arriving at the skin and aligned with speech. The current EEG study aimed to investigate whether the behavioral benefits of bimodal audio-aerotactile and trimodal audio-visual-aerotactile speech presentation are reflected in cortical auditory event-related neurophysiological responses.

Methods: To examine the influence of multimodal information on speech perception, 20 listeners conducted a two-alternative forced-choice syllable identification task at three different signal-to-noise levels.

Results: Behavioral results showed increased syllable identification accuracy when auditory information was complemented with visual information, but did not show the same effect for the addition of tactile information. Similarly, EEG results showed an amplitude suppression for the auditory N1 and P2 event-related potentials for the audio-visual and audio-visual-aerotactile modalities compared to auditory and audio-aerotactile presentations of the syllable /pa/. No statistically significant difference was present between audio-aerotactile and auditory-only modalities.

Discussion: Current findings are consistent with past EEG research showing a visually induced amplitude suppression during early auditory processing. In addition, the significant neurophysiological effect of audio-visual but not audio-aerotactile presentation is in line with the large benefit of visual information but comparatively much smaller effect of aerotactile information on auditory speech perception previously identified in behavioral research.

Confirming authorship on papers

Recently, I was working on a paper where we all made mistakes regarding authorship. We withdrew the paper in question before publication, and we have all been writing new guidelines for our labs in order to prevent similar mistakes in the future.

Our new guidelines require that active authors on a paper ensure that everyone who has touched any of the data or intellectual contributions on the paper read and respond to the email message below. Responses are then stored on the authors’ computers to document who does and does not wish to be an author.

(We will have a later blog post on authorship order; those policies are currently being rewritten.)

Progress on the paper does not occur until all are in agreement on authorship AND authorship order:

Dear {name}, potential author on {article/project}

A paper on the above topic is currently in preparation. You are receiving this email because you may have had some contact with some aspect of this project.

According to what is sometimes called the Vancouver Convention, there are four key components to justify authorship on a given poster, proceedings paper, journal article, or project:

1) Substantial contributions* to the conception or design of the work, or the acquisition, analysis, or interpretation of data for the work; AND

2) Drafting the work or revising it critically for important intellectual content; AND

3) Final approval of the version to be published; AND

4) Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

*We consider the term “substantial contributions” here to be equivalent to “substantive intellectual contributions” as described in the Vancouver Convention protocols, as well as to “substantial professional contributions” as described in section 8.12 of the “Ethical Principles of Psychologists and Code of Conduct” of the American Psychological Association: https://www.apa.org/ethics/code.

Details on the Vancouver Convention protocols can be found on the website of the International Committee of Medical Journal Editors: https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html

All those designated as authors should meet all four criteria for authorship, and all who meet the four criteria should be identified as authors.  Those who meet some but not all four criteria should be acknowledged.

In view of the above considerations, we are asking you as an individual for a statement of your contributions relative to the four points above.

Referring to the above points, please answer the following questions regarding your own contributions:

1) Do you consider your contributions to satisfy the requirement of “substantial contributions” as described above? If so, please describe your contributions here:

2) Have you or will you contribute to drafting the work or revising it critically for important intellectual content (Yes or No)? If so, please describe:

3) Have you or will you commit to providing final approval of the version to be published? (Yes or No):

4) Do you agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved (Yes or No)?:

Phonological conditioning of affricate variability in Emirati Arabic

Today, Marta Szreder (first author) and I published an article on the phonological conditioning of affricate variability in Emirati Arabic. The article studies the [k∼tʃ] and [dʒ∼j] alternations in Emirati Arabic. In it, we show that coronal obstruents [t, d] and coronal postalveolar fricatives such as [ʃ] inhibit production of the affricate variant [dʒ] in the [dʒ∼j] alternation, but not the affricate variant [tʃ] in the [k∼tʃ] alternation, as seen in Figure 5 from the paper (below). The results suggest the [k∼tʃ] alternation is a completed phonemic change, while the [dʒ∼j] alternation is an ongoing process.

Figure 5: Interaction graph showing relative affrication in /k/ and /dʒ/ phonemes based on whether there was a /t,d,ʃ/ within one vowel, somewhere further away in the word, or completely absent from the word.

The full abstract is quoted below:

This study investigates the conditioning effects of neighbouring consonants on the realisation of the phonemes /k/ and /dʒ/ in Emirati Arabic (EA), which are optionally realised as [tʃ] and [j], respectively. Based on previous accounts of EA and other Gulf Arabic (GA) dialects, we set out to test the prediction that proximity of other, phonetically similar coronal (COR) obstruents [COR, −son, −cont] and coronal postalveolar fricatives [COR, −ant] inhibit the surface realisation of the affricate variants of these phonemes. We examine elicitation data from twenty young female native speakers of EA, using stimuli with the target segment in the presence of a similar neighbour, as compared to words with the neighbour at a longer distance or with another coronal consonant. The results point to an asymmetry in the behaviour of the voiced and voiceless targets, such that the predicted inhibitory effect is confirmed for the voiced, but not the voiceless target. We argue that this finding, coupled with a consideration of the intra-participant and lexical trends in the data, is compatible with an approach that treats the two processes as being at different stages of development, where the [k∼tʃ] alternation is a completed phonemic change, while the [dʒ∼j] alternation is a synchronic phonological process.

Szreder, Marta & Derrick, Donald (2023). Phonological conditioning of affricate variability in Emirati Arabic. Journal of the International Phonetic Association, 1–19.

Red Wolf: My new video game

Red Wolf

My first commercial video game is now available on the Android Google Play Store, and you can see an advertisement on my YouTube channel. The game is inspired by the fairy tale of Little Red Riding Hood. It retells the story through the eyes of a farmer named Crimson, who is trying to protect his cows, sheep, and chickens.

Will Crimson hear the call to protect his animals?

Will he rush foolishly into battle, ignore the plight of his animals and go back to sleep, or visit the local town of Wolfville?

There are 27 endings to this game, and upon completion, you can replay the game to see each of the memorials to Crimson’s possible lives in the cemetery of the possible.

Exploring how speech air flow may impact the spread of airborne diseases

I am participating in an American Association for the Advancement of Science (AAAS) 2022 meeting panel on “Transmission of Airborne Pathogens through Expiratory Activities” on Friday, February 18th, from 6:00 to 6:45 AM Greenwich Mean Time. You can register for the meeting by clicking here. In advance of that meeting, the University of British Columbia asked me some Q&A questions exploring how speech air flow may impact the spread of airborne diseases.

The AAAS panel itself is hosted by Prof. Bryan Gick of the University of British Columbia. It includes individual talks by Dr. Sima Asadi on “Respiratory behavior and aerosol particles in airborne pathogen transmission”, Dr. Nicole M. Bouvier on “Talking about respiratory infectious disease transmission”, and me on “Human airflow while breathing, speaking, and singing with and without masks”.

Dr. Sima Asadi’s talk focuses on the particles emitted during human speech, and on the efficacy of masks in controlling their outward emission. For this work, Sima received the Zuhair A. Munir Award for the Best Doctoral Dissertation in Engineering from UC Davis in 2021. She is currently a postdoctoral associate in Chemical Engineering at MIT (Cambridge, MA).

Dr. Nicole M. Bouvier is an associate professor of Medicine and Infectious Diseases and Microbiology at the Icahn School of Medicine at Mount Sinai (New York). Nicole discusses how we understand the routes by which respiratory microorganisms, like viruses and bacteria, transmit between humans, which is fundamental to how we develop both medical and public health countermeasures to reduce or prevent their spread. However, much of what we think we know is based on evidence that is incomplete at best and full of confusing terminology, as the current COVID-19 pandemic has made abundantly clear.

I myself am new to airborne transmission research, coming instead from the perspective that visual and aero-tactile speech help with speech perception, and so masks would naturally interfere with clear communication. They would do this by potentially muffling some speech sounds, but mostly by cutting off the perceiver from visual and even tactile speech signals.

However, since my natural interests involve speech air flow, I was ideally suited to move into research studying how these same air flows may be reduced or eliminated by face masks. I conduct this research with a Mechanical Engineering team at the University of Canterbury, and some of their results are featured in my individual presentation. Our most recent publication on Speech air flow with and without face masks was highlighted in previous posts on Maps of Speech, and in a YouTube video found here.