Push-the-Boundary

DataCite citation style:
Du, Shenglan; Ibrahimli, Nail; Stoter, Jantien; Kooij, Julian; Nan, Liangliang (2023): Push-the-Boundary. Version 1. 4TU.ResearchData. Collection. https://doi.org/10.4121/f78cce55-dffd-4681-b458-1830c8b14525.v1
Collection
versions
  • version 2 - 2024-02-12 (latest)
  • version 1 - 2023-12-07

Feedforward fully convolutional neural networks currently dominate semantic segmentation of 3D point clouds. Despite their great success, they suffer from a loss of local information at low-level layers, posing significant challenges to accurate scene segmentation and precise object boundary delineation. Prior works address this issue either through post-processing or by jointly learning object boundaries to implicitly improve the networks' feature encoding. These approaches often require additional modules that are difficult to integrate into the original architecture. To improve segmentation near object boundaries, we propose a boundary-aware feature propagation mechanism, realized within a multitask learning framework that explicitly guides the predicted boundaries toward their true locations. With one shared encoder, our network outputs, in three parallel streams, (i) boundary localization, (ii) predicted directions pointing to the object's interior, and (iii) semantic segmentation. The predicted boundaries and directions are fused to propagate the learned features and refine the segmentation. Extensive experiments on the S3DIS and SensatUrban datasets against various baseline methods demonstrate that our approach yields consistent improvements by reducing boundary errors.
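The three parallel output streams described above can be pictured as lightweight prediction heads on top of one shared encoder. The following minimal PyTorch sketch is purely illustrative; the class name, feature dimensions, and single-linear-layer heads are assumptions for exposition, not the authors' actual implementation:

    import torch.nn as nn
    import torch.nn.functional as F

    class ThreeStreamHeads(nn.Module):
        # Hypothetical sketch of the multitask output structure:
        # one shared per-point feature vector, three parallel predictions.
        def __init__(self, feat_dim, num_classes):
            super().__init__()
            self.boundary_head = nn.Linear(feat_dim, 2)    # (i) boundary vs. non-boundary logits
            self.direction_head = nn.Linear(feat_dim, 3)   # (ii) vector toward the object interior
            self.semantic_head = nn.Linear(feat_dim, num_classes)  # (iii) per-point class logits

        def forward(self, point_feats):
            # point_feats: (N, feat_dim) features from the shared encoder
            boundary_logits = self.boundary_head(point_feats)
            directions = F.normalize(self.direction_head(point_feats), dim=-1)
            semantic_logits = self.semantic_head(point_feats)
            return boundary_logits, directions, semantic_logits

In the paper's scheme, the boundary and direction outputs are then fused to propagate features between points before the final semantic prediction; that fusion step is beyond this sketch.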

history
  • 2023-12-07 first online, published, posted
publisher
4TU.ResearchData
funding
  • Delft AI Initiative

DATASETS