%0 Generic
%A Raman, Chirag
%A Vargas Quiros, Jose
%A Tan, Stephanie
%A Islam, Ashraful
%A Gedik, Ekin
%A Hung, Hayley
%D 2022
%T Raw Data for ConfLab: A Data Collection Concept, Dataset, and Benchmark for Machine Analysis of Free-Standing Social Interactions in the Wild
%U https://data.4tu.nl/articles/dataset/Raw-Data_for_ConfLab_A_Rich_Multimodal_Multisensor_Dataset_of_Free-Standing_Social_Interactions_In-the-Wild/20017748/2
%R 10.4121/20017748.v2
%K data-raw
%K conflab
%K cameras
%K wearables
%K multimodal
%K ConfLab
%X <p>This file contains raw data for cameras and wearables of the ConfLab dataset. </p>
<p><br></p>
<p><strong>./cameras </strong></p>
<p>contains the overhead video recordings from 9 cameras (cam2-cam10) as MP4 files. </p>
<p>    These cameras cover the whole interaction floor, with camera 2 capturing the</p>
<p>    bottom of the scene layout and camera 10 capturing the top. </p>
<p>    Note that cam5 ran out of battery before the other cameras, so its recordings </p>
<p>    are cut short. However, cam4 and cam6 overlap significantly with cam5, so any </p>
<p>    information needed can be reconstructed from them.</p>
<p><br></p>
<p>    Note that the annotations are made and provided in 2-minute segments. </p>
<p>    The annotated portions of the video comprise the last 3 min 38 s of the x2xxx.MP4 </p>
<p>    files and the first 12 min of the x3xxx.MP4 files for cameras 2, 4, 6, 8, and 10, </p>
<p>    with "x" being a placeholder character in the MP4 file names. To split the </p>
<p>    videos into 2-minute segments as we did, use the provided "video-splitting.sh" </p>
<p>    script. </p>
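<p>The provided "video-splitting.sh" script is the canonical way to produce these segments. As an illustration only, the 2-minute boundaries and the corresponding ffmpeg stream-copy commands could be generated along the following lines (the helper function and output file names here are hypothetical, not part of the dataset tooling):</p>

```python
# Sketch: build one ffmpeg stream-copy command per 2-minute segment.
# Hypothetical helper; the dataset ships "video-splitting.sh" for this.

SEGMENT_SECONDS = 120  # annotations are provided in 2-minute segments

def segment_commands(video_path: str, duration_s: int) -> list[str]:
    """Return ffmpeg commands cutting the video into 2-minute segments."""
    cmds = []
    for i, start in enumerate(range(0, duration_s, SEGMENT_SECONDS)):
        length = min(SEGMENT_SECONDS, duration_s - start)
        cmds.append(
            f"ffmpeg -ss {start} -i {video_path} -t {length} "
            f"-c copy seg_{i:03d}.MP4"
        )
    return cmds

# The first 12 annotated minutes of a hypothetical x3xxx.MP4 file:
commands = segment_commands("x3xxx.MP4", 12 * 60)
```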
<p><br></p>
<p><strong>./camera-calibration</strong> contains the camera intrinsic files obtained from </p>
<p>    https://github.com/idiap/multicamera-calibration. Camera extrinsic parameters can </p>
<p>    be calculated from the existing intrinsic parameters by following the instructions </p>
<p>    in the multicamera-calibration repo. Reference coordinates in the image are </p>
<p>    provided by the crosses marked on the floor, which are visible in the video </p>
<p>    recordings. The crosses are 1 m (100 cm) apart.  </p>
<p><br></p>
<p><strong>./wearables</strong></p>
<p>subdirectory includes the IMU, proximity, and audio data from each </p>
<p>    participant at the ConfLab event (48 in total). The directory numbered </p>
<p>    by participant ID contains the following data:</p>
<p>        1. raw audio file</p>
<p>        2. proximity (Bluetooth RSSI) pings file (raw and csv) and a visualization </p>
<p>        3. Tri-axial accelerometer data (raw and csv) and a visualization </p>
<p>        4. Tri-axial gyroscope data (raw and csv) and a visualization </p>
<p>        5. Tri-axial magnetometer data (raw and csv) and a visualization</p>
<p>        6. Game rotation vector (raw and csv), recorded in quaternions. </p>
<p><br></p>
<p>    All files are timestamped.</p>
<p>    The sampling frequencies are:</p>
<p>        - audio: 1250 Hz</p>
<p>        - all other sensors: approximately 50 Hz. The sample rate is not fixed, </p>
<p>        so the per-sample timestamps should be used instead. </p>
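<p>Because the wearable sample rate varies around 50 Hz, a fixed-rate signal has to be obtained by interpolating against the recorded timestamps. A minimal, hypothetical sketch of such resampling (the illustrative values stand in for the real per-participant CSV data):</p>

```python
# Sketch: resample irregularly-timestamped sensor samples onto a uniform
# 50 Hz grid via linear interpolation. Timestamps/values are illustrative.
from bisect import bisect_right

def resample_uniform(ts, values, rate_hz=50.0):
    """Linearly interpolate (ts, values) onto a uniform grid at rate_hz.
    ts are seconds, strictly increasing."""
    step = 1.0 / rate_hz
    out_t, out_v = [], []
    t = ts[0]
    while t <= ts[-1]:
        i = bisect_right(ts, t)
        if i == len(ts):
            out_v.append(values[-1])  # at or past the last sample
        else:
            t0, t1 = ts[i - 1], ts[i]
            v0, v1 = values[i - 1], values[i]
            frac = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
            out_v.append(v0 + frac * (v1 - v0))
        out_t.append(t)
        t += step
    return out_t, out_v

# Irregular ~50 Hz samples (illustrative) onto an exact 50 Hz grid:
grid_t, grid_v = resample_uniform([0.0, 0.019, 0.041, 0.060],
                                  [0.0, 1.9, 4.1, 6.0])
```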
<p><br></p>
<p>    For rotation, the game rotation vector's output frequency is limited by the </p>
<p>    actual sampling frequency of the magnetometer. For more information, please refer to </p>
<p>    https://invensense.tdk.com/wp-content/uploads/2016/06/DS-000189-ICM-20948-v1.3.pdf</p>
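<p>As an aside for working with the game rotation vector: a unit quaternion can be applied directly to rotate a 3-D vector. A self-contained sketch, assuming a w-first component order (w, x, y, z), which should be verified against the CSV column names:</p>

```python
# Sketch: rotate a 3-D vector by a unit quaternion q = (w, x, y, z).
# Uses v' = v + w*t + (u x t) with u = (x, y, z) and t = 2*(u x v),
# written out component-wise to stay dependency-free.

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q, both given as tuples."""
    w, ux, uy, uz = q
    # t = 2 * (u x v)
    tx = 2.0 * (uy * v[2] - uz * v[1])
    ty = 2.0 * (uz * v[0] - ux * v[2])
    tz = 2.0 * (ux * v[1] - uy * v[0])
    # v' = v + w*t + (u x t)
    return (
        v[0] + w * tx + (uy * tz - uz * ty),
        v[1] + w * ty + (uz * tx - ux * tz),
        v[2] + w * tz + (ux * ty - uy * tx),
    )

# Example: a 90-degree rotation about the z-axis maps x onto y.
s = 0.5 ** 0.5  # cos(45 deg) = sin(45 deg)
rotated = quat_rotate((s, 0.0, 0.0, s), (1.0, 0.0, 0.0))
```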
<p><br></p>
<p>    Audio files in this folder are in raw binary form. The following command can be </p>
<p>    used to convert them to WAV files (1250 Hz):</p>
<p><br></p>
<p>        ffmpeg -f s16le -ar 1250 -ac 1 -i /path/to/audio/file /path/to/output.wav</p>
<p><br></p>
<p><strong>Synchronization of camera and wearable data</strong></p>
<p>    Raw videos contain timecode information which matches the timestamps of the data in</p>
<p>    the "wearables" folder. The starting timecode of a video can be read as:</p>
<p>        ffprobe -hide_banner -show_streams -i /path/to/video</p>
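<p>The timecode appears in the ffprobe output as an HH:MM:SS:FF tag (hours, minutes, seconds, frames). To align the video start with the wearable timestamps, it can be converted to seconds given the video frame rate; a small, hypothetical helper (drop-frame timecodes, written with a semicolon, need extra handling and are not covered here):</p>

```python
# Sketch: convert a non-drop-frame "HH:MM:SS:FF" timecode (as reported
# in ffprobe's stream tags) to seconds, given the video frame rate.

def timecode_to_seconds(tc: str, fps: float) -> float:
    """Convert an "HH:MM:SS:FF" timecode string to seconds."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return hh * 3600 + mm * 60 + ss + ff / fps

# Hypothetical starting timecode of a camera recording at 60 fps:
start_s = timecode_to_seconds("00:01:00:30", 60.0)
```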
<p><br></p>
<p><strong>./audio</strong></p>
<p>./sync: contains WAV files for each subject</p>
<p>./sync_files: auxiliary CSV files used to sync the audio; these can be used to improve the synchronization. </p>
<p>The code used for syncing the audio can be found here:   </p>
<p>https://github.com/TUDelft-SPC-Lab/conflab/tree/master/preprocessing/audio </p>
%I 4TU.ResearchData