Speech remains HCI's "holy grail", yet it is the modality that machines find hardest to understand: its processing yields error rates of high variability, especially under adverse conditions. The aim of this course is to inform the HCI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for HCI researchers and practitioners to learn how speech recognition and synthesis work, what their limitations are, and how they could be used to enhance current interaction paradigms.
Please join us at CHI 2017 if you want to learn how speech recognition and synthesis work, what their limitations and usability challenges are, how they can enhance interaction paradigms, and what the current research and commercial state of the art is.
The course will benefit HCI researchers and practitioners without strong expertise in ASR or TTS who still believe in HCI's goal of developing methods and systems that allow humans to interact naturally with ever more ubiquitous mobile technology, but who are disappointed by the lack of success in using speech and natural language to achieve that goal. At past CHI instances of this course, many attendees came from the speech processing community, especially from relevant industries, drawn to the course (and to CHI) by an interest in learning how to incorporate their own engineering advances into better user-centred designs.