International Workshop on
Emotion Representations and Modelling for Companion Systems 2016

Dataset


In order to study natural, user-centred interactions, to develop user-centred emotion representations, and to model adequate affective system behaviour, appropriate multi-modal data comprising more than just audio and video material must be available.

As a highlight, this year's workshop offers a “hands-on” session for which a dataset comprising 10 different modalities will be made available to participants prior to the workshop. Participants are encouraged to analyse the dataset in terms of emotion recognition, interaction studies, conversational analyses, etc. The dataset provided is a snapshot of a new multi-modal corpus (Tornow et al., 2016). Researchers are invited to address a specific research question using this dataset and submit their results to the workshop. All results will be presented during the workshop and discussed in a subsequent panel session.

To get access to the dataset, please register for the “hands-on” session by sending an email with your name(s) and affiliation(s) to: igfcorpus[at]ovgu[dot]de

Details on the dataset

The aim of the present corpus is to provide multi-modal data of subjects expressing selected dispositional states (interest, cognitive underload and cognitive overload) as well as HMI-related emotional reactions (fear, frustration, joy).
A major problem when recording a dataset of natural mental states is often the subjects' lack of involvement. We resolved this issue by embedding the data recording in a health and fitness scenario: the system is introduced to the subjects as a training system for human gait that is to be optimized with their help. The system itself explains its usage, the exercises, and the subjects' tasks.
Subjects were presented with an automatic gait analysis system composed of two parts: a gait training course and a technical interface for planning and evaluating the gait training. The training course was physically constructed, and all subjects completed several runs through the changing course. The data from the gait training is not part of the corpus.
The interaction with the technical system took place in a separate area and forms the basis of this corpus. The target group consists of subjects aged 50 and above without any known gait problems. The experiment is divided into five modules, each oriented towards a specific topic.
During the Wizard-of-Oz experiments, the subjects interact with the system via voice commands; the system answers through a text-to-speech system and a graphical interface providing additional information. Furthermore, the subjects are recurrently asked to rate their own emotional state along the valence, arousal, and dominance dimensions using the Self-Assessment Manikins.
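As a rough illustration, each such self-report can be thought of as a small record on the three SAM dimensions. The Python sketch below shows one possible representation; the field names and the 9-point scale are illustrative assumptions, not the corpus's actual file format:

from dataclasses import dataclass

@dataclass
class SamRating:
    """One SAM self-report; field names and the 9-point range are
    illustrative assumptions, not the corpus file format."""
    timestamp_s: float  # time of the rating within the session, in seconds
    valence: int        # 1 (negative) .. 9 (positive)
    arousal: int        # 1 (calm) .. 9 (excited)
    dominance: int      # 1 (controlled) .. 9 (in control)

    def __post_init__(self) -> None:
        for name in ("valence", "arousal", "dominance"):
            value = getattr(self, name)
            if not 1 <= value <= 9:
                raise ValueError(f"{name} must be in 1..9, got {value}")

# Example: a calm, mildly positive self-report given 312 s into a session.
rating = SamRating(timestamp_s=312.0, valence=6, arousal=3, dominance=5)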

Available modalities and their parameters

Modality              Parameter                    Alignment  Data per Session
--------------------  ---------------------------  ---------  ----------------
Video
  Pike 3xRGB          1388x1039 px, 25 Hz          22.6 μs    8 000 MiB (3x)
  Webcam              1920x1080 px, 30 Hz          22.6 μs    1 000 MiB
  Screen              1920x1080 px, 30 Hz          22.6 μs    500 MiB
  Kinect2             1920x1080 px, 30 Hz          70 ms      7 000 MiB
Body Posture
  Posture             25 body points, 30 Hz        70 ms      300 MiB
  Face                5 body points, 30 Hz         70 ms      100 MiB
  Kinect2 3D data     512x424 px, 30 Hz            70 ms      800 MiB
  Kinect2 IR data     512x424 px, 30 Hz            70 ms      800 MiB
Audio
  Proband             4x mono, 16 bit, 44.1 kHz    22.6 μs    500 MiB (4x)
  Wizard              1x stereo, 32 bit, 44.1 kHz  22.6 μs    1 000 MiB
  Kinect2             1x stereo, 32 bit, 44.1 kHz  70 ms      400 MiB
Biophysiology
  EMG                 16 bit, 256 kHz              4 ms       10 MiB
  ECG                 16 bit, 256 kHz              4 ms       10 MiB
Annotation
  Event marker / TTS  Time code and event          22.6 μs    0.1 MiB
  SAM rating          Rating                       22.6 μs    0.01 MiB
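Summing the last column, a single session amounts to roughly 37 GiB of raw data. Note also that the streams differ in rate and alignment accuracy (22.6 μs for the studio cameras and microphones versus 70 ms for the Kinect2 streams and 4 ms for the biosignals), so any cross-modal analysis first has to bring them onto a common timeline. The following is a minimal sketch of one way to do this, assuming each stream is available as lists of timestamps and samples; the function and variable names are illustrative assumptions, not part of the corpus tooling:

import bisect

def align_to_reference(ref_times, src_times, src_values, max_skew_s):
    """For each reference timestamp, pick the nearest sample of another
    stream; reject matches farther apart than max_skew_s (e.g. 0.07 s,
    the alignment bound of the Kinect2 streams)."""
    aligned = []
    for t in ref_times:
        i = bisect.bisect_left(src_times, t)
        # The nearest neighbour is either just before or just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(src_times)]
        best = min(candidates, key=lambda j: abs(src_times[j] - t))
        if abs(src_times[best] - t) <= max_skew_s:
            aligned.append(src_values[best])
        else:
            aligned.append(None)  # no sample close enough in time
    return aligned

# Example: align 30 Hz posture frames to the 25 Hz Pike video timeline.
video_times = [k / 25.0 for k in range(250)]    # 10 s of video frames
posture_times = [k / 30.0 for k in range(300)]  # 10 s of posture frames
posture_vals = list(range(300))                 # dummy posture samples
aligned = align_to_reference(video_times, posture_times, posture_vals, 0.07)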