Phondi: Emily M. Provost, "Human-Centered Computing: Using Speech to Understand Behavior"

Friday, February 6, 2015
12:00 AM
473 Lorch Hall

Human-Centered Computing: Using Speech to Understand Behavior

We have a special visitor in Phondi today. Emily Provost (Computer Science and Engineering) will present her research on using computational methods to infer speakers' emotions from properties of their speech.

Emotion has intrigued researchers for generations. This fascination has permeated the engineering community, motivating the development of affective computational models for classification. However, human emotion remains notoriously difficult to interpret, in part due to the presence of complex emotions: emotions that contain shades of multiple affective classes. Proper representations of emotion would ameliorate this problem by introducing multidimensional characterizations of the data that permit the quantification and description of the varied affective components of each utterance. In this talk I will discuss methods to characterize emotion, focusing on quantifying the presence of multiple shades of affect and avoiding the need for hard-labeled assignments.
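One simple way to picture the idea of avoiding hard-labeled assignments is to represent each utterance as a distribution over affect classes rather than a single winning label. The sketch below is illustrative only (the class names and annotator votes are hypothetical, not from the talk): multiple human annotations of one utterance are turned into a soft label whose mass can spread across several classes, capturing a "complex" emotion.

```python
from collections import Counter

def soft_label(labels):
    """Represent an utterance's emotion as a distribution over affect
    classes rather than a single hard-labeled assignment."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: n / total for cls, n in counts.items()}

# Hypothetical annotator judgments for one utterance; a complex emotion
# shows up as probability mass on more than one class.
annotations = ["happy", "happy", "neutral", "sad"]
print(soft_label(annotations))
# → {'happy': 0.5, 'neutral': 0.25, 'sad': 0.25}
```

A hard-label scheme would collapse this utterance to "happy" and discard the minority shades; the distribution keeps them quantifiable.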

I will also discuss our ongoing speech-based assistive technology research, highlighting our work estimating speech quality for individuals with aphasia. I will describe our interactive tablet-based software, which is based on picture description tasks, and illustrate how we have used this platform to collect a new dataset of aphasic and healthy speech. We have demonstrated that we can use these data to automatically estimate speech quality at levels comparable to an average human evaluator. Finally, I will touch on our work classifying mood for individuals with bipolar disorder using naturally collected cell phone data.
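A claim like "comparable to an average human evaluator" is typically checked by correlating automatic quality estimates with human ratings. The sketch below is a minimal illustration under assumed data, not the actual evaluation from this work: the scores, the 1-to-5 scale, and the use of Pearson correlation are all stand-in assumptions.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-utterance speech-quality scores (1-5 scale).
human_mean = [4.2, 2.1, 3.5, 1.8, 4.8]   # mean rating across human evaluators
machine    = [4.0, 2.4, 3.3, 2.0, 4.6]   # automatic estimate

print(round(pearson(human_mean, machine), 3))
```

A machine estimator whose correlation with the mean human rating matches a held-out human's correlation with that same mean is, in this sense, performing at the level of an average evaluator.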