Data Sets

The FilmGrain Dataset is a set of image pairs with and without film grain (grainy / grain-free) at five different film grain intensity levels. It is intended to be used for training a deep learning model for film grain removal and synthesis, film grain detection, film grain intensity prediction, etc. A detailed description of the benchmark can be found on our Data Description page. The license conditions are mentioned on the Download page.
ACKNOWLEDGEMENTS
This...
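As a rough illustration of how the grainy / grain-free pairs could be consumed for a grain-removal training loop, here is a minimal sketch. The directory layout, folder names, and PNG file format are assumptions for illustration only, not the dataset's documented structure.

```python
# Minimal sketch, assuming grainy / grain-free pairs are stored as identically
# named PNG files in per-level folders. Folder names and file format are
# hypothetical, not the dataset's documented layout.
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF

class FilmGrainPairs(Dataset):
    """Yields (grainy, grain_free) tensor pairs for a given intensity level."""

    def __init__(self, root: str, level: int = 1):
        root = Path(root)
        self.grainy = sorted((root / f"grainy_level_{level}").glob("*.png"))
        self.clean = sorted((root / "grain_free").glob("*.png"))
        assert len(self.grainy) == len(self.clean), "grainy / grain-free pair mismatch"

    def __len__(self) -> int:
        return len(self.grainy)

    def __getitem__(self, idx):
        grainy = TF.to_tensor(Image.open(self.grainy[idx]).convert("RGB"))
        clean = TF.to_tensor(Image.open(self.clean[idx]).convert("RGB"))
        return grainy, clean  # (input, target) for grain removal / synthesis
```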
Creation of the sequences: 10 sequences were created using the Unity game engine to load and play several games. We captured the frame buffers using Unity Recorder and custom post-processing shaders in both the built-in render pipeline and the High-Definition Render Pipeline. We captured the color buffer into an RGBA16 Signed Float texture, and the depth and optical flow into a second, similar texture (with the linear depth info encoded into the R...
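To make the buffer layout concrete, the sketch below splits an exported auxiliary frame back into depth and optical flow. The EXR export, the file name, and the exact channel mapping (depth in R, flow in the next two channels) are assumptions inferred from the description above, not a documented specification.

```python
# Sketch only: split an exported auxiliary frame into depth and optical flow.
# The EXR file name and the channel assignment (depth in R, flow in G/B) are
# assumptions, not the dataset's documented spec.
import numpy as np
import imageio.v3 as iio

aux = iio.imread("frame_0001_aux.exr").astype(np.float32)  # H x W x 4, half-float source

depth = aux[..., 0]    # linear depth, assumed to be encoded in the R channel
flow = aux[..., 1:3]   # 2D optical flow, assumed to occupy the next two channels

print(depth.shape, flow.shape, float(depth.min()), float(depth.max()))
```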
The Long Range (LoRa) protocol for low-power wide-area networks (LPWANs) is a strong candidate to enable the massive roll-out of the Internet of Things (IoT) because of its low cost, impressive sensitivity (-137 dBm), and massive scalability potential. As tens of thousands of tiny LoRa devices are deployed over large geographic areas, a key component of LoRa's success will be the development of reliable and robust authentication mechanisms. We publicly share waveform data from...
The LaFin: Large-scale Flickr interestingness dataset (hereafter “the Dataset”) is a collection of Flickr image IDs corresponding to about 123k Flickr images, equally balanced between interesting and non-interesting images, and their corresponding metadata. In addition to the images, their binary labels, and associated metadata, some precomputed features are provided: CNN features, semantic features derived from image captioning, and Word2Vec representations of Flickr tags. It is intended to be used for analyzing socially-driven image interestingness and...
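As a hedged illustration of how the precomputed features and binary labels might be combined for a simple classification experiment, here is a short sketch; the file names and column names are hypothetical and do not reflect the Dataset's actual distribution format.

```python
# Hypothetical sketch: join precomputed features with binary interestingness
# labels and fit a simple classifier. File and column names are assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

labels = pd.read_csv("lafin_labels.csv")       # assumed columns: flickr_id, interesting (0/1)
features = np.load("lafin_cnn_features.npy")   # assumed: one feature row per image, same order

X_train, X_test, y_train, y_test = train_test_split(
    features, labels["interesting"].to_numpy(), test_size=0.2, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```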
In contrast to existing datasets with very few video resources and limited accessibility due to copyright constraints, LIRIS-ACCEDE consists of videos with a large content diversity annotated along affective dimensions. All excerpts are shared under Creative Commons licenses and can thus be freely distributed without copyright issues. The dataset (the video clips, annotations, features and protocols) is publicly available.
Recent VR/AR applications require some way to evaluate the Quality of Experience (QoE), which can be described in terms of comfort, acceptability, realism, and ease of use. In order to assess all these different dimensions, it is necessary to take into account the user’s senses and in particular, for A/V content, vision. Understanding how users watch a 360° image, how they scan the content, and where and when they look is thus necessary to...
The Interestingness Dataset is a collection of movie excerpts and key-frames, together with their corresponding ground-truth files classifying the samples as interesting or non-interesting. It is intended to be used for assessing the quality of methods for predicting the interestingness of multimedia content. The data was produced by the organizers of the MediaEval 2016 and MediaEval 2017 Predicting Interestingness Tasks and was used in the context of this benchmark. A detailed description of the benchmark...
The VSD benchmark is a collection of ground-truth files based on the extraction of violent events in movies and web videos, together with high-level audio and video concepts. It is intended to be used for assessing the quality of methods for the detection of violent scenes and/or the recognition of some high-level, violence-related concepts in movies and web videos. The data was produced by Technicolor for the 2012 subset and by Fudan University and the Ho Chi Minh University...
Automatic extraction of face tracks is a key component of systems that analyze people in audio-visual content such as TV programs and movies. Due to the lack of annotated content of this type, popular algorithms for extracting face tracks have not been fully assessed in the literature. To help fill this gap, we introduce a new dataset based on the full audio-visual person annotation of a feature film. Thanks to this dataset, state-of-the-art tracking metrics...
The automatic recognition of human emotions is of great interest in the context of multimedia applications and brain-computer interfaces. While users’ emotions can be assessed based on questionnaires, the results may be biased because the answers could be influenced by social expectations. More objective measures of emotions can be obtained by studying the users' physiological responses. The present database has been constructed in particular to evaluate the usefulness of electroencephalography (EEG) for emotion recognition in the context...