Mined Object and Relational Data for Sets of Locations

DataCite citation style:
Balint, J. Timothy (2019): Mined Object and Relational Data for Sets of Locations. Version 1. 4TU.ResearchData. dataset. https://doi.org/10.4121/uuid:1fbfd4a0-1b7f-4dec-8097-617fea87cde5
Dataset
Mined Location Object and Relation data

Overview

This dataset contains the objects and relationships mined from two kinds of sources: annotated images (SUNRGBD) and annotated virtual environments (SUNCG). It is split into pairwise distance/angle relationships (PAIRWISE) and higher-level semantic relationships. For the distance/angle relationships, the file name (Fisher12 or Kermani) indicates how they were parsed: Fisher12 uses Gaussian Mixture Models, while Kermani uses K-Means clustering, to determine how many distinct relationships there are (a minimal sketch of this step is given after the citations below).

Note that there are a few changes between Kermani et al.'s original implementation and how this dataset was mined. Specifically:
1) We changed the probabilities for symmetry. For scenes that have only a few examples, 0.005 is too low to be salient for anything.
2) We require a location type to have more than one example, and to have more than one salient object. This is not explicitly stated in Kermani et al., because most scene generation methods only consider rooms that have many examples (on the order of 100 at least). We make having at least one location a requirement and mine the examples that have fewer objects. This cuts out a few locations that have very few rooms in general.
3) We preprocess the nodes for the minimum spanning tree to consider only objects whose count is above the threshold. This makes our connections more salient in general and cleans up a bit of noise (see the second sketch after the citations below).

Citations

If you find this dataset useful, please cite the original datasets that the information came from:
  • NYU: N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, "Indoor segmentation and support inference from RGBD images," Computer Vision – ECCV 2012, pp. 746–760, 2012.
  • SUN RGB-D (note that I do not include the datasets within SUN RGB-D, but you should): S. Song, S. P. Lichtenberg, and J. Xiao, "SUN RGB-D: A RGB-D scene understanding benchmark suite," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 567–576.
  • SUNCG: S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser, "Semantic Scene Completion from a Single Depth Image," in IEEE Conference on Computer Vision and Pattern Recognition, 2017.

As well as the methods they were obtained from:
  • Fisher12-PairWise: M. Fisher, D. Ritchie, M. Savva, T. Funkhouser, and P. Hanrahan, "Example-based synthesis of 3D object arrangements," ACM Transactions on Graphics (TOG), vol. 31, no. 6, p. 135, 2012.
  • Kermani: Z. S. Kermani, Z. Liao, P. Tan, and H. Zhang, "Learning 3D Scene Synthesis from Annotated RGB-D Images," Computer Graphics Forum, vol. 35, no. 5, pp. 197–206, 2016.
  • SceneSuggest (this paper contains the equations used in SceneSuggest): M. Savva, A. X. Chang, and P. Hanrahan, "Semantically-enriched 3D models for common-sense knowledge," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015, pp. 24–31.
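As referenced above, the Fisher12 and Kermani pairwise files differ in how the number of relationship types is determined. Below is a minimal sketch of that difference, assuming pairwise (distance, angle) features have already been extracted into an array; the model-selection criteria shown (BIC for the Gaussian Mixture Model, silhouette score for K-Means) are illustrative assumptions, not necessarily the criteria used in the original papers or in this dataset's pipeline.

```python
# Illustrative sketch only: clustering pairwise (distance, angle) features to
# choose a number of relationship types, in the spirit of Fisher12 (GMM) and
# Kermani (K-Means). The features and selection rules here are assumptions,
# not taken from this dataset's actual mining pipeline.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Hypothetical features: one row per object pair, columns = (distance, angle).
features = rng.random((200, 2))

def fisher12_style(features, max_k=8):
    """Fit GMMs with 1..max_k components; keep the one with the lowest BIC."""
    models = [GaussianMixture(n_components=k, random_state=0).fit(features)
              for k in range(1, max_k + 1)]
    return min(models, key=lambda m: m.bic(features))

def kermani_style(features, max_k=8):
    """Fit K-Means for 2..max_k clusters; keep the best silhouette score."""
    best, best_score = None, -1.0
    for k in range(2, max_k + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
        score = silhouette_score(features, km.labels_)
        if score > best_score:
            best, best_score = km, score
    return best

gmm = fisher12_style(features)
km = kermani_style(features)
print(gmm.n_components, "GMM relationship types;", km.n_clusters, "K-Means types")
```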
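Change 3 above (filtering nodes before building the minimum spanning tree) can be sketched the same way. The object list, edge weights, and threshold below are all hypothetical; only the filter-before-MST ordering reflects the description above.

```python
# Illustrative sketch only: drop object categories whose count falls below a
# threshold *before* computing the minimum spanning tree, so rare, noisy
# objects never enter the tree. The example graph and weights are hypothetical.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

objects = ["bed", "nightstand", "lamp", "plant"]
counts = np.array([40, 35, 20, 1])   # how often each object appears
THRESHOLD = 2

# Hypothetical symmetric edge weights between object categories
# (e.g., derived from average pairwise distance).
weights = np.array([
    [0.0, 0.5, 1.2, 3.0],
    [0.5, 0.0, 0.8, 2.5],
    [1.2, 0.8, 0.0, 2.0],
    [3.0, 2.5, 2.0, 0.0],
])

# Keep only nodes above the count threshold, then build the MST on the
# reduced graph.
keep = counts >= THRESHOLD
reduced = weights[np.ix_(keep, keep)]
mst = minimum_spanning_tree(csr_matrix(reduced)).toarray()

kept_names = [o for o, k in zip(objects, keep) if k]
rows, cols = np.nonzero(mst)
for i, j in zip(rows, cols):
    print(f"{kept_names[i]} -- {kept_names[j]} (weight {mst[i, j]:.1f})")
```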
history
  • 2019-02-13 first online, published, posted
publisher
4TU.Centre for Research Data
format
media types: application/pdf, application/zip, text/csv
funding
  • Netherlands Organization for Scientific Research, 314-99-104
organizations
TU Delft, Faculty of Electrical Engineering, Mathematics and Computer Science, Department of Intelligent Systems
contributors
  • Bidarra, R. (Rafael)

DATA

files (2)