

These datasets consist primarily of images or videos for tasks such as object detection, facial recognition, and multi-label classification. In computer vision, face images have been used extensively to develop facial recognition systems, face detection, and many other projects that use images of faces.

Object detection and recognition datasets include a set of 849 images taken in 75 different scenes. Video datasets support action localization and spotting as well as large-scale action classification. The Berkeley Multimodal Human Action Database (MHAD) contains recordings of a single person performing 12 actions, captured with 8 PhaseSpace motion-capture units, 2 stereo cameras, 4 quad cameras, 6 accelerometers, and 4 microphones.

Videos from 20 different TV shows are used for predicting social actions: handshake, high five, hug, kiss, and none. Face image collections support gender classification, face detection, face recognition, and age estimation; one of them gathers IMDB and Wikipedia face images with gender and age labels, and another adds glasses detection to those tasks.

Several poses are represented as well. The glasses-oriented dataset contains 112 persons (66 males and 46 females) wearing glasses under different illumination conditions, and applies a set of synthetic filters (blur, occlusion, noise, and posterization) at different levels of difficulty, yielding 42,592 images in total (2,662 original images × 16 synthetic variants). A further dataset covers up to 100 subjects, with mostly neutral expressions.

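To make the synthetic-filter idea concrete, the kinds of degradations listed (blur, occlusion, noise, posterization) can be reproduced with standard imaging libraries. The following is an illustrative sketch using Pillow and NumPy; the parameters, filter settings, and filenames are assumptions for demonstration, not the dataset's actual generation pipeline.

```python
# Illustrative sketch: generating degraded variants of a face image with the
# kinds of synthetic filters described above (blur, occlusion, noise,
# posterization). Parameters and filenames are placeholders, not the
# dataset's actual generation pipeline.
import numpy as np
from PIL import Image, ImageFilter, ImageOps

def synthetic_variants(path, blur_radius=4, noise_sigma=25, posterize_bits=3):
    img = Image.open(path).convert("RGB")

    # Gaussian blur at a fixed radius (difficulty could be varied via the radius).
    blurred = img.filter(ImageFilter.GaussianBlur(radius=blur_radius))

    # Additive Gaussian noise, clipped back to valid 8-bit pixel values.
    arr = np.array(img, dtype=np.float32)
    arr = arr + np.random.normal(0.0, noise_sigma, arr.shape)
    noisy = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

    # Posterization: keep only the top `posterize_bits` bits of each channel.
    posterized = ImageOps.posterize(img, bits=posterize_bits)

    # Occlusion: black out a central box covering roughly a quarter of the image.
    occluded = img.copy()
    w, h = occluded.size
    occluded.paste((0, 0, 0), (w // 4, h // 4, 3 * w // 4, 3 * h // 4))

    return {"blur": blurred, "noise": noisy, "posterize": posterized, "occlusion": occluded}

if __name__ == "__main__":
    for name, variant in synthetic_variants("face.jpg").items():  # placeholder path
        variant.save(f"face_{name}.jpg")
```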

Facial expression recognition and classification datasets differ widely in the expressions and annotations they provide:

- Expressions: neutral face, smile, frontal accentuated laugh, and frontal random gesture.
- From the National Institute of Standards and Technology: expressions of anger, happiness, sadness, surprise, disgust, and puffy, with 3D images extracted.
- A neutral face and 6 expressions: anger, happiness, sadness, surprise, disgust, and fear (at 4 levels).
- From the Institute of Automation, Chinese Academy of Sciences: expressions of anger, disgust, fear, happiness, sadness, and surprise, with annotated visible-spectrum and near-infrared video captured at 25 frames per second.
- A neutral face and 5 expressions: anger, happiness, sadness, eyes closed, and eyebrows raised.
- Expressions: anger, smile, laugh, surprise, and closed eyes.
- Randomly sampled color values from face images.
- 34 action units and 6 expressions labeled, plus 24 labeled facial landmarks.
- Images of faces with eye positions marked.
- Images of public figures scrubbed from image searching.
- A large database of images with labels for expressions, including semantic ratings data on emotion labels.
- 213 images of 7 facial expressions (6 basic facial expressions + 1 neutral) posed by 10 Japanese female models.
- Faces of 15 individuals in 11 different expressions, with coordinates of features given.
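Many of the expression datasets listed above are, in practice, still images paired with a categorical expression label. As a minimal sketch of consuming such data, assuming a hypothetical one-folder-per-expression directory layout (each real dataset ships in its own format), torchvision's generic ImageFolder loader can expose the images as a classification dataset:

```python
# Minimal sketch: loading expression-labelled face images for classification.
# Assumes a hypothetical layout like data/anger/img001.png, data/happiness/...,
# one sub-folder per expression; real datasets each ship in their own format.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # many expression sets are greyscale
    transforms.Resize((96, 96)),                  # illustrative target size
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("data", transform=transform)  # placeholder root directory
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

print(dataset.classes)               # expression names inferred from folder names
images, labels = next(iter(loader))  # one mini-batch of images and integer labels
print(images.shape, labels[:8])
```
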
The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) provides 7,356 video and audio recordings of 24 professional actors, covering 8 emotions each at two intensities; files are labelled with expression, and perceptual validation ratings were provided by 319 raters. It supports classification, face recognition, and voice recognition. Another large collection contains 11,338 images of 1,199 individuals photographed in different positions and at different times.

Two in-the-wild affect recognition databases provide manually annotated color video at various resolutions:

- 298 videos of 200 individuals, with ~1,250,000 manually annotated images (average resolution 640×360), annotated in terms of dimensional affect (valence and arousal). The detected faces, facial landmarks, and valence-arousal annotations are supplied, supporting affect recognition (valence-arousal estimation).
- 558 videos of 458 individuals, with ~2,800,000 manually annotated images (average resolution 1030×630), annotated in terms of (i) categorical affect (7 basic expressions: neutral, happiness, sadness, surprise, fear, disgust, anger), (ii) dimensional affect (valence and arousal), and (iii) action units (AUs 1, 2, 4, 6, 12, 15, 20, 25). The detected faces, detected and aligned faces, and annotations are supplied, supporting affect recognition (valence-arousal estimation, basic expression classification, and action unit detection).
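The two in-the-wild affect databases above pair each annotated frame with continuous valence and arousal values. A minimal sketch of how such per-frame annotations might be represented and read follows, assuming a hypothetical one-CSV-per-video layout with a "valence,arousal" line per frame; the actual distributions define their own annotation formats.

```python
# Minimal sketch: reading per-frame valence/arousal annotations for one video.
# The assumed layout is one CSV per video with a "valence,arousal" line per
# frame and no header; the actual datasets define their own annotation files.
import csv
from dataclasses import dataclass
from typing import List

@dataclass
class FrameAffect:
    frame: int
    valence: float  # dimensional affect, typically in [-1, 1]
    arousal: float  # dimensional affect, typically in [-1, 1]

def load_affect(path: str) -> List[FrameAffect]:
    annotations = []
    with open(path, newline="") as f:
        for i, row in enumerate(csv.reader(f)):
            annotations.append(
                FrameAffect(frame=i, valence=float(row[0]), arousal=float(row[1]))
            )
    return annotations

if __name__ == "__main__":
    frames = load_affect("video_001.csv")  # placeholder annotation file
    print(len(frames), frames[0])
```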
