During face-to-face conversations, adults with autism frequently use atypical rhythms and sounds in their speech (prosody), which can result in misunderstandings and miscommunication. Atypical prosody is one of the most noticeable characteristics of autism, inadvertently making speakers sound angry, bored, or tired. It is also one of the most difficult social skills to change over a lifetime, often requiring intensive and extensive intervention. Prosody refers to the acoustic way words are spoken, encompassing the rhythm and sounds of speech that convey meaning through changes in pitch, volume, and rate. Atypical prosodic patterns can severely limit opportunities for individuals with autism to establish social connections during face-to-face conversations, and these patterns appear to contribute negatively to peers' perceptions of the speaker and to the overall social interaction experience. These challenges are evident in the struggles of some individuals with autism to develop peer relationships, recognize emotions, and build social skills for interactive communication.

In this project, we were interested in understanding how wearable assistive technologies might provide adults with autism with awareness of their prosody without disrupting the social interaction. We followed a user-centered design approach to uncover key design guidelines for such a solution. Building on these guidelines, we developed and evaluated SayWAT, a wearable assistive technology that uses Google Glass to display visual information in the wearer's peripheral vision. SayWAT provides feedback to wearers about their prosody during face-to-face conversations.

SayWAT encourages micro-interactions using a hands-free, heads-up Google Glass display, in two modes:

  • Volume mode: users receive an alert when their volume is “too high”; and
  • Pitch mode: users receive an alert when their pitch range is “flat”.

To support this rapid interpretation—and potential dismissal—SayWAT provides either iconography or a single word for rapid processing of the feedback (see Figure 2). For example, when SayWAT detects that the user is substantially louder than the ambient sound, it displays a voice meter animation with a color spectrum from green to yellow to red (Figure 2, left). Similarly, when a user's pitch range is atypically small, SayWAT displays the single word “flat” in white text on a black background (Figure 2, right). To ensure that alerts are understood but not bothersome, SayWAT uses thresholds both for when to provide the information and for how long to display it, with three seconds as the maximum time any alert is shown. An alert is dismissed in under three seconds if the user corrects their speech within that time.
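The alert policy described above can be sketched as a simple decision rule. The following is a minimal illustration only; the threshold values and function names are hypothetical assumptions, not SayWAT's published implementation:

```python
# Hypothetical thresholds for illustration; SayWAT's actual values are not given here.
VOLUME_MARGIN_DB = 10.0   # speaker this far above ambient sound triggers "too high"
FLAT_PITCH_RANGE_HZ = 30.0  # pitch range below this is considered "flat"
MAX_ALERT_SECONDS = 3.0   # maximum time any alert stays on screen

def choose_alert(speaker_db, ambient_db, pitch_range_hz):
    """Pick which alert (if any) to display for the current speech window."""
    if speaker_db - ambient_db > VOLUME_MARGIN_DB:
        return "volume_meter"   # animated meter, green -> yellow -> red
    if pitch_range_hz < FLAT_PITCH_RANGE_HZ:
        return "flat"           # the single word "flat" on a black background
    return None                 # prosody within thresholds: show nothing

def alert_duration(seconds_until_corrected):
    """Dismiss early if the speech is corrected, else show for the maximum time."""
    return min(seconds_until_corrected, MAX_ALERT_SECONDS)
```

This structure reflects the two design decisions in the text: feedback is only shown when a threshold is crossed, and it is bounded in time so it cannot dominate the wearer's attention.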

To assess whether SayWAT can accurately detect atypical prosody and intervene when it is detected, we conducted a laboratory study of short conversations between individuals with autism and those without. This experimental study also provided evidence that the intervention can be efficacious in supporting prosody improvement. Our results indicate that wearable assistive technologies can automatically detect atypical prosody and deliver feedback in real time without disrupting the wearer or the conversation partner. We additionally offer design suggestions for wearable assistive technologies for social support.

Lead researchers LouAnne Boyd (UC Irvine), Gillian R. Hayes (UC Irvine)

Project participants


Monica Tentori, Ph.D., Assistant Professor (see more about monica …)
e: mten …

Related publications

  • Boyd, L. E., Rangel, A., Tomimbang, H., Conejo-Toledo, A., Patel, K., Tentori, M. and Hayes, G. R. 2016. SayWAT: Augmenting Face-to-Face Conversations for Adults with Autism. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 4872-4883.