ConfLab: A Rich Multimodal Multisensor Dataset of Free-Standing Social Interactions In-the-Wild

Posted on 20.06.2022 - 07:59 authored by Chirag Raman

ConfLab is a multimodal multisensor dataset of in-the-wild free-standing social conversations. It records a real-life professional networking event at the international conference ACM Multimedia 2019. Involving 48 conference attendees, the dataset captures a diverse mix of status, acquaintance, and networking motivations. Our capture setup improves upon the data fidelity of prior in-the-wild datasets, providing: 8 overhead-perspective videos (1920 x 1080, 60 fps) and custom personal wearable sensors with onboard recording of body motion (full 9-axis IMU), privacy-preserving low-frequency audio (1250 Hz), and Bluetooth-based proximity. Additionally, we developed custom solutions for distributed hardware synchronization at acquisition, and for time-efficient continuous annotation of body keypoints and actions at high sampling rates.
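Because the modalities are recorded at different rates (60 fps video vs. IMU samples), a common first processing step is aligning streams on a shared timeline. The sketch below illustrates one way to do this with nearest-timestamp matching in pandas; the column names, rates, and values are purely hypothetical and do not reflect the actual ConfLab file schema.

```python
import pandas as pd

# Hypothetical timelines: a 60 fps video-frame index and a 50 Hz IMU
# stream, both timestamped in seconds on a shared synchronized clock.
video = pd.DataFrame({"t": [i / 60 for i in range(6)], "frame": range(6)})
imu = pd.DataFrame({"t": [i / 50 for i in range(5)],
                    "accel_x": [0.1 * i for i in range(5)]})

# merge_asof pairs each IMU sample with the nearest video frame in time.
# Both inputs must be sorted by the key column ("t" here).
aligned = pd.merge_asof(imu, video, on="t", direction="nearest")
print(aligned)
```

This relies on the streams already sharing a clock, which is what the dataset's distributed hardware synchronization at acquisition is designed to provide.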


---------------------

General information: 

The dataset contains:

  1. EULA (DOI: 10.4121/20016194, required for access): because the dataset contains pseudonymized data, an End-User License Agreement must be completed to request access. Once completed, please return it to SPCLabDatasets-insy@tudelft.nl. Private links to download the data will be sent to you once your credentials are reviewed and approved. Note for reviewers: please follow the same procedure described above. The TU Delft Human Research Ethics Committee or a member of the admin staff will handle your access requests during the review period to preserve the single-blind standard.
  2. Datasheet (DOI: 10.4121/20017559): a datasheet summarizing the ConfLab dataset.
  3. Samples (DOI: 10.4121/20017682): sample data from the ConfLab dataset.
  4. Raw-Data (DOI: 10.4121/20017748): raw video and wearable sensor data of the ConfLab dataset. 
  5. Processed-Data (DOI: 10.4121/20017805): video and wearable sensor data of the ConfLab dataset, processed for usability and used as the basis for the annotations.
  6. Annotations (DOI: 10.4121/20017664): annotations of pose, speaking status, and F-formations. 

Please scroll down the page to see and access all the components of the ConfLab dataset. For more information, please see the respective README files.


---------------------

Baseline tasks

Baseline tasks include: keypoint pose estimation, speaking status estimation, and F-formation (conversation group) estimation. 


Code related to the baseline tasks can be found here:  

https://github.com/TUDelft-SPC-Lab/conflab

---------------------

Annotation tool

The annotation tool developed and used for annotating keypoints and speaking status in the ConfLab dataset is provided here: https://github.com/josedvq/covfee


More information can be found here: Quiros, Jose Vargas, et al. "Covfee: an extensible web framework for continuous-time annotation of human behavior." Understanding Social Behavior in Dyadic and Small Group Interactions. PMLR, 2022.

---------------------

Wearable sensor (Midge) hardware

The wearable sensor (Midge) was developed and used to collect data for the ConfLab dataset. More details can be found at: https://github.com/TUDelft-SPC-Lab/spcl_midge_hardware


Please contact SPCLabDatasets-insy@tudelft.nl if you have any inquiries.  




CITE THIS COLLECTION

Raman, Chirag; Vargas Quiros, Jose; Tan, Stephanie; Islam, Ashraful; Gedik, Ekin; Hung, Hayley (2022): ConfLab: A Rich Multimodal Multisensor Dataset of Free-Standing Social Interactions In-the-Wild. 4TU.ResearchData. Collection. https://doi.org/10.4121/c.6034313.v2

FUNDING

NWO Vidi grant 639.022.606.

Aspasia grant associated with Vidi grant 639.022.606.

Publisher

4TU.ResearchData

Organizations

TU Delft, Faculty of Electrical Engineering, Mathematics and Computer Science, Intelligent Systems
