Mined Object and Relational Data for Sets of Locations
Dataset posted on 13.02.2019 by J. Timothy Balint
Mined Location Object and Relation Data

Overview

This dataset contains the objects and relationships mined from several datasets of annotated images (SUN RGB-D) and annotated virtual environments (SUNCG). It is split into pairwise distance/angle relationships (PAIRWISE) and higher-level semantic relationships. For the distance/angle relationships, the file name (Fisher12 or Kermani) indicates how they were parsed: Fisher12 uses Gaussian Mixture Models, while Kermani uses K-Means clustering, to determine the number of distinct relationships.

Note that there are a few differences between Kermani et al.'s original implementation and how this dataset was mined. Specifically:

1) We changed the symmetry probabilities. For scenes that have only a few examples, 0.005 is too low to be salient for anything.

2) We require a location type to have more than one example, and to have more than one salient object. This is not explicitly stated in Kermani et al., because most scene-generation methods only consider rooms with many examples (on the order of 100 at least). We make having at least one location a requirement and mine on the examples that have fewer objects. This removes a few location types that have very few rooms overall.

3) We preprocess the nodes for the minimum spanning tree to consider only objects whose count is above the threshold. This makes our connections more salient in general and cleans up some noise.

Citations

If you find this dataset useful, please cite the original datasets that the information came from:

NYU: N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, "Indoor segmentation and support inference from RGBD images," Computer Vision–ECCV 2012, pp. 746–760, 2012.

SUN RGB-D (note that I do not include the datasets contained in SUN RGB-D, but you should cite them): S. Song, S. P. Lichtenberg, and J.
Xiao, "SUN RGB-D: A RGB-D scene understanding benchmark suite," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 567–576.

SUNCG: S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser, "Semantic Scene Completion from a Single Depth Image," IEEE Conference on Computer Vision and Pattern Recognition, 2017.

As well as the methods that the relationships were obtained from:

Fisher12-PairWise: M. Fisher, D. Ritchie, M. Savva, T. Funkhouser, and P. Hanrahan, "Example-based synthesis of 3D object arrangements," ACM Transactions on Graphics (TOG), vol. 31, no. 6, p. 135, 2012.

Kermani: Z. S. Kermani, Z. Liao, P. Tan, and H. Zhang, "Learning 3D Scene Synthesis from Annotated RGB-D Images," Computer Graphics Forum, vol. 35, no. 5, pp. 197–206, 2016.

SceneSuggest (this paper contains the equations used in SceneSuggest): M. Savva, A. X. Chang, and P. Hanrahan, "Semantically-enriched 3D models for common-sense knowledge," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015, pp. 24–31.
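To give a flavor of the Kermani-style parsing described above, the sketch below clusters one-dimensional pairwise distances between two object types with a minimal K-Means implementation. This is an illustration only, not the actual mining code: the data values, the function name `kmeans_1d`, and the fixed choice of k are all hypothetical, and the real pipeline also selects the number of clusters and handles angles.

```python
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Toy 1-D K-Means (hypothetical helper): returns sorted cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assign each distance value to its nearest center.
        groups = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        # Recompute each center as the mean of its group
        # (keep the old center if a group is empty).
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

# Hypothetical pairwise distances (meters) between two object types:
# a tight "next to" cluster and a looser "across the room" cluster.
distances = [0.4, 0.5, 0.45, 0.5, 3.0, 3.2, 2.9, 3.1]
centers = kmeans_1d(distances, k=2)
print(centers)
```

A Fisher12-style variant would instead fit Gaussian Mixture Models of varying component counts and pick the count by a model-selection criterion, which also yields a variance for each relationship rather than just a center.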