Questionnaire #2: Emotion Recognition

New trends in human-computer interfaces, which have evolved from the conventional mouse and keyboard to automatic speech recognition systems and special interfaces designed for disabled people, still do not take full advantage of these valuable communicative abilities, often resulting in a less-than-natural interaction.

Facial expressions are privileged relative to other nonverbal “channels” of communication, such as vocal inflections and body movements, yet they also appear to be the most subject to conscious control. The ability to recognize emotion from facial expressions appears to be at least partially innate: newborns prefer to look at faces rather than at other complex stimuli, and thus may be programmed to focus on the information faces carry. For audio emotion recognition, the most widely used speech cues to date are global-level prosodic features, such as statistics of the pitch and the intensity.
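Those global-level prosodic statistics can be sketched in a few lines. The example below is a toy illustration, not a production feature extractor: it assumes a crude zero-crossing pitch estimate and RMS intensity per frame (real systems use more robust pitch trackers, e.g. autocorrelation-based methods), then summarizes each contour by its mean and standard deviation.

```python
import math

def frame_features(signal, sample_rate, frame_len=400):
    """Split the signal into frames; estimate pitch and intensity per frame."""
    pitches, intensities = [], []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        # Intensity: root-mean-square energy of the frame.
        rms = math.sqrt(sum(x * x for x in frame) / frame_len)
        # Crude pitch estimate: zero-crossing count / 2 = cycles per frame.
        crossings = sum(1 for a, b in zip(frame, frame[1:])
                        if (a < 0) != (b < 0))
        pitch_hz = (crossings / 2) * (sample_rate / frame_len)
        pitches.append(pitch_hz)
        intensities.append(rms)
    return pitches, intensities

def stats(values):
    """Global-level statistics of a prosodic contour: mean and std."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return {"mean": mean, "std": math.sqrt(var)}

# Synthetic 200 Hz tone as a stand-in for one second of speech at 8 kHz.
sr = 8000
tone = [math.sin(2 * math.pi * 200 * t / sr) for t in range(sr)]
pitches, intensities = frame_features(tone, sr)
pitch_stats, intensity_stats = stats(pitches), stats(intensities)
print(pitch_stats, intensity_stats)
```

For the pure tone, the pitch statistics come out near 200 Hz and the intensity near the RMS of a unit sine (about 0.71); for real speech, these summary statistics (often extended with minima, maxima, and ranges) form the feature vector fed to an emotion classifier.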

As Lisa Feldman Barrett, Peter Salovey, and John D. Mayer point out, several aspects should be taken into account in facial emotion recognition, such as gender, socioeconomic status, personality, and age. All of these factors make facial emotion recognition difficult.





One Response to “Questionnaire #2: Emotion Recognition”

  1. Lumi Says:

    Thank you
