I am going to be submitting an article entitled “Tri-modal Speech: Audio-Visual-Tactile integration in Speech Perception”, along with my co-authors Doreen Hansmann and Catherine Theys, within the month. The article was, in the end, a success, demonstrating that visual and tactile speech can, separately and jointly, enhance or interfere with accurate auditory syllable identification in two-alternative forced-choice experiments.
However, I am writing this short post to serve as a warning to anyone who wishes to combine visual, tactile, and auditory speech perception research into one experiment. Today’s technology makes that exceedingly difficult:
The three of us have collective experience with electroencephalography, magnetic resonance imaging, and combining ultrasound imaging of the tongue with electromagnetic articulometry. These are complex techniques that require a great deal of skill and training to use successfully. Yet this paper’s research was the most technically demanding and error-prone work we have ever undertaken. The reason is that, despite all of the video you see online today, modern computers do not easily allow for research-grade, synchronized video within experimental software. With today’s multi-core processors, it was in fact easier to do such things 15 years ago than it is now. The number and variety of bugs in the operating system, the video and audio codecs, and the experimental presentation libraries were utterly overwhelming.
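To make the synchronization problem concrete: at a 60 Hz refresh rate, a frame must be drawn roughly every 16.7 ms, and any stutter in a decoder or presentation library shows up as dropped or doubled frames. A minimal, library-agnostic sketch of the kind of check we mean (the function and names here are our own illustration, not PsychoPy or Psychtoolbox API) that flags suspect intervals in a list of recorded frame timestamps:

```python
def find_dropped_frames(flip_times, fps=60.0, tolerance=0.5):
    """Flag inter-frame intervals that deviate from the nominal
    frame duration by more than `tolerance` frame periods.

    flip_times: timestamps (in seconds) of successive screen flips.
    Returns a list of (frame_index, interval_seconds) for suspect frames.
    """
    nominal = 1.0 / fps
    suspects = []
    for i in range(1, len(flip_times)):
        interval = flip_times[i] - flip_times[i - 1]
        if abs(interval - nominal) > tolerance * nominal:
            suspects.append((i, interval))
    return suspects
```

Feeding this the flip timestamps from a pilot run of each stimulus is a cheap way to catch a misbehaving codec before it quietly corrupts the timing of an entire session.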
We programmed this experiment in PsychoPy2, and after several rewrites and switches between a number of video and audio codecs, we were forced to abandon the platform entirely due to unfixable intermittent crashes and switch to MATLAB and Psychtoolbox. Psychtoolbox had several issues of its own, but with several days of system debugging effort by Johnathan Wiltshire, programmer analyst at the University of Canterbury’s psychology department, those issues were at least resolvable. We cannot thank Johnathan enough! In addition, electrical issues with our own airflow system made completing this research a daunting task, requiring a lot of help and repairs from Scott Lloyd of Electrical Engineering. Scott did a lot of burdensome work for us, and we are grateful.
All told, I alone lost almost 100 working days to debugging and repair efforts during this experiment. We therefore recommend that anyone who follows up on this research have collaborators with backgrounds in both engineering and information technology, work in labs with technical support, and have the budget and people to build electrically robust equipment. We also recommend not just testing, debugging, and piloting experiments, but also building automated tools that run an experiment iteratively, so that rare intermittent errors can be identified and resolved before data collection begins.
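As a sketch of what such an iterative tool might look like (the harness below is illustrative, not the actual code we used): a loop that runs one trial’s worth of stimulus setup hundreds of times, seeds each run reproducibly, and records every exception together with its seed, so that a crash that appears once in fifty runs can be replayed on demand:

```python
import random
import traceback

def stress_test(trial_fn, runs=500, base_seed=0):
    """Run `trial_fn(rng)` repeatedly to surface intermittent failures.

    Each run gets its own seeded random.Random, so any failing run can
    be reproduced later from its recorded seed. Returns a list of
    (seed, traceback_string) for every run that raised an exception.
    """
    failures = []
    for i in range(runs):
        seed = base_seed + i
        rng = random.Random(seed)
        try:
            trial_fn(rng)
        except Exception:
            failures.append((seed, traceback.format_exc()))
    return failures
```

Pointed at the code that loads and plays a single stimulus, a harness like this turns an intermittent crash from an unreproducible nuisance into a deterministic test case.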
Your mental health depends on you following this advice.