Predicting Affective States from Acoustic Voice Cues Collected with Smartphones
Author(s) / Creator(s)
Koch, Timo
Schoedel, Ramona
Abstract / Description
The expression and recognition of emotions (i.e., short-lived and directed representations of affective states) through the acoustic properties of speech is a unique feature of human communication (Weninger et al., 2013). Researchers have identified acoustic features that are predictive of affective states, and emotion-detecting algorithms have been developed (Schuller, 2018). However, most studies used speech data produced by actors who were instructed to act out a given emotion, or speech samples labelled by raters who were instructed to assign affective labels to recorded utterances (e.g., from TV shows). Both enacted and labelled speech come with multiple downsides, since these approaches capture expressed affect rather than the actual experience of affective states conveyed through the voice. In this work, we want to investigate whether we can predict in-situ self-reported affective states from objective voice parameters collected with smartphones in everyday life. Further, we want to explore which acoustic features are most predictive of the experience of affective states. Finally, we want to analyze how the affective quality of instructed spoken language (e.g., a sentence with negative affective valence) translates into objective markers in the acoustic signal, which in turn could alter the predictions of our models.
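For illustration only (not part of the preregistered analysis plan): a minimal Python sketch of the kind of workflow the abstract describes, assuming utterance-level acoustic descriptors are extracted with librosa and self-reported valence is modeled with a simple ridge regression from scikit-learn. The file names, feature set, and ratings below are hypothetical placeholders.

import numpy as np
import librosa
from sklearn.linear_model import Ridge

def acoustic_features(path):
    # Load the recording and compute a few utterance-level descriptors.
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)   # fundamental-frequency (pitch) track
    rms = librosa.feature.rms(y=y)[0]               # frame-wise energy (loudness proxy)
    zcr = librosa.feature.zero_crossing_rate(y)[0]  # frame-wise zero-crossing rate
    return np.array([
        np.nanmean(f0), np.nanstd(f0),              # pitch level and variability
        rms.mean(), rms.std(),                      # energy level and variability
        zcr.mean(),                                 # rough noisiness/spectral proxy
    ])

# Hypothetical data: one smartphone recording and one in-situ valence self-report
# per sampling occasion (placeholder file names and ratings).
recordings = ["occasion_001.wav", "occasion_002.wav", "occasion_003.wav"]
valence = np.array([2.0, 5.0, 6.0])  # e.g., ratings on a 1-7 scale

X = np.vstack([acoustic_features(p) for p in recordings])
model = Ridge(alpha=1.0).fit(X, valence)  # real analyses would use many occasions and cross-validation
print(model.predict(X))

In practice, affective-computing studies often rely on larger standardized feature sets (e.g., eGeMAPS extracted with openSMILE); the handful of descriptors above is only meant to make the idea concrete.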
Persistent Identifier
https://doi.org/10.23668/PSYCHARCHIVES.4454
PsychArchives acquisition timestamp
2021-01-07 10:19:21 UTC
Publisher
PsychArchives
Citation
Koch, T., & Schoedel, R. (2021). Predicting Affective States from Acoustic Voice Cues Collected with Smartphones. PsychArchives. https://doi.org/10.23668/PSYCHARCHIVES.4454
Preregistration_Affective States_Voice.pdf (Adobe PDF, 122.5 KB, MD5: 641d2aea04dffca7cfbc2d59e85dd04f)
Made available on
2021-01-07T10:19:21Z
Date of first publication
2021-01-07
Publication status
other
Review status
unknown
Persistent Identifier
https://hdl.handle.net/20.500.12034/4033
https://doi.org/10.23668/psycharchives.4454
Language of content
eng
Is related to
https://doi.org/10.23668/psycharchives.2901
Dewey Decimal Classification number(s)
150
DRO type
preregistration
Visible tag(s)
Smartphone Sensing Panel Study