Apr 27 2012


Roberto Tedesco

Many psychological and social studies have highlighted two distinct channels humans use to exchange information: an explicit, linguistic channel and an implicit, paralinguistic channel. The latter carries information about the emotional state of the speaker, providing clues about the implicit meaning of the message. In particular, the paralinguistic channel can improve applications requiring human-machine interaction (for example, speech recognition systems or Conversational Agents), and can also support the analysis of human-human interaction (think, for example, of clinical or forensic applications).

PrEmA is a tool that recognizes and classifies both the emotions and the speech style of a speaker, relying on prosodic features. In particular, speech-style recognition is, to our knowledge, novel, and could be used to infer interesting clues about the state of the conversation. We selected two sets of prosodic features and trained two classifiers based on Linear Discriminant Analysis.
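To illustrate the kind of classifier involved, the sketch below trains a minimal two-class linear discriminant on toy prosodic features. The feature set (mean pitch in Hz, mean energy), the emotion labels, and all the numbers are invented for illustration; PrEmA's actual feature sets and multi-class setup are not shown here.

```python
# Minimal sketch of a two-class Fisher linear discriminant on two
# hypothetical prosodic features: (mean pitch in Hz, mean energy).
# All data and labels below are illustrative, not taken from PrEmA.

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def within_scatter(vectors, m):
    # 2x2 within-class scatter matrix: sum of outer products of deviations
    s = [[0.0, 0.0], [0.0, 0.0]]
    for v in vectors:
        d = [v[0] - m[0], v[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def lda_train(class_a, class_b):
    ma, mb = mean(class_a), mean(class_b)
    sa, sb = within_scatter(class_a, ma), within_scatter(class_b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in range(2)] for i in range(2)]
    # invert the 2x2 pooled within-class scatter matrix
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[ sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det,  sw[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    # Fisher direction: w = Sw^-1 (m_a - m_b)
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    # decision threshold: midpoint of the projected class means
    thr = (w[0] * (ma[0] + mb[0]) + w[1] * (ma[1] + mb[1])) / 2
    return w, thr

def lda_predict(w, thr, x):
    return "angry" if w[0] * x[0] + w[1] * x[1] > thr else "sad"

# Toy training data: "angry" = higher pitch/energy, "sad" = lower
angry = [(260, 0.90), (280, 0.80), (250, 0.95), (270, 0.85)]
sad   = [(180, 0.30), (170, 0.35), (190, 0.25), (175, 0.40)]

w, thr = lda_train(angry, sad)
for x in angry + sad:
    print(x, "->", lda_predict(w, thr, x))
```

The classifier projects each feature vector onto the Fisher direction and thresholds at the midpoint of the projected class means; a real system would of course use many more prosodic features and more than two classes.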

Designed and developed by:
L. Colombo, C. Rinaldi, and L. Sbattella

Recognition of emotions


The source code will be released soon, as open-source software.