Data and Results for the Benchmark of Gap Acceptance Models
doi:10.4121/21334548.v3
The doi above is for this specific version of this dataset, which is currently the latest. Newer versions may be published in the future.
For a link that will always point to the latest version, please use
doi: 10.4121/21334548
Datacite citation style:
Julian Schumann (2022): Data and Results for the Benchmark of Gap Acceptance Models. Version 3. 4TU.ResearchData. dataset. https://doi.org/10.4121/21334548.v3
Other citation styles (APA, Harvard, MLA, Vancouver, Chicago, IEEE) available at Datacite
Dataset
This dataset provides the missing parts of the framework for benchmarking gap acceptance models (https://github.com/julianschumann/Framework-for-benchmarking-gap-acceptance). These are:
- The raw data of the CoR dataset
- The results at every stage of the framework for the implemented modules; the results shown in the corresponding paper are based on these.
Information on how to include this data in the corresponding code base can be found in the GitHub project's README.
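As a rough illustration only (the README linked above is authoritative), the two archives can be unpacked and their contents inspected with standard Python tooling. The target directory name below is a placeholder, not the framework's actual layout:

```python
import zipfile
from pathlib import Path

import numpy as np

# Placeholder target directory; the framework's README specifies the actual layout.
target = Path("Framework-for-benchmarking-gap-acceptance")

# Unpack both archives into the framework directory.
for archive in ("CoR_data_raw.zip", "Framework_results.zip"):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)

# The archives contain *.csv and *.npy files; a .npy file can be loaded with numpy
# (allow_pickle=True in case object arrays were stored).
example = next(target.rglob("*.npy"), None)
if example is not None:
    data = np.load(example, allow_pickle=True)
    print(example, type(data))
```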
history
- 2022-10-17 first online
- 2022-12-05 published, posted
publisher
4TU.ResearchData
format
Two .zip files, including *.csv and *.npy files
organizations
TU Delft, Faculty of Mechanical, Maritime and Materials Engineering (3mE), Department of Cognitive Robotics
DATA
files (2)
- CoR_data_raw.zip (109,245,983 bytes) MD5: 63c50f2b6ad79b10f0150ce3762a3fc0
- Framework_results.zip (4,858,262,212 bytes) MD5: fd6a461215fdaa42e1ad83f6fe425524
4,967,508,195 bytes unzipped in total
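If integrity checks are desired after downloading, the MD5 sums listed above can be verified locally. This is a generic sketch, not part of the framework, and assumes the name-to-checksum pairing shown in the file list:

```python
import hashlib
from pathlib import Path

# MD5 checksums as listed on this page.
EXPECTED_MD5 = {
    "CoR_data_raw.zip": "63c50f2b6ad79b10f0150ce3762a3fc0",
    "Framework_results.zip": "fd6a461215fdaa42e1ad83f6fe425524",
}

def md5sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading it in chunks to limit memory use."""
    digest = hashlib.md5()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for name, expected in EXPECTED_MD5.items():
    path = Path(name)
    if path.exists():
        status = "OK" if md5sum(path) == expected else "MISMATCH"
        print(f"{name}: {status}")
    else:
        print(f"{name}: not downloaded")
```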