Code underlying the publication: A Hybrid Spatial-temporal Deep Learning Architecture for Lane Detection

DOI:10.4121/ba5805cb-a909-4185-97b0-296739df7def.v1
The DOI displayed above is for this specific version of this dataset, which is currently the latest. Newer versions may be published in the future. For a link that will always point to the latest version, please use
DOI: 10.4121/ba5805cb-a909-4185-97b0-296739df7def.
Datacite citation style:
Dong, Yongqi; Patil, Sandeep; van Arem, Bart; Farah, Haneen (2025): Code underlying the publication: A Hybrid Spatial-temporal Deep Learning Architecture for Lane Detection. Version 1. 4TU.ResearchData. dataset. https://doi.org/10.4121/ba5805cb-a909-4185-97b0-296739df7def.v1
Other citation styles (APA, Harvard, MLA, Vancouver, Chicago, IEEE) are available at Datacite.

Dataset

Code for the paper

Dong, Y., Patil, S., van Arem, B., & Farah, H. (2023). A Hybrid Spatial-temporal Deep Learning Architecture for Lane Detection. Computer-Aided Civil and Infrastructure Engineering, 38(1), 67–86.


Accurate and reliable lane detection is vital for the safe performance of lane-keeping assistance and lane departure warning systems. However, under certain challenging circumstances, it is difficult to achieve satisfactory performance when detecting lanes from a single image, as is mostly done in the current literature. Since lane markings are continuous lines, lanes that are difficult to detect accurately in the current single image can potentially be better deduced if information from previous frames is incorporated. This study proposes a novel hybrid spatial-temporal (ST) sequence-to-one deep learning architecture. This architecture makes full use of the ST information in multiple continuous image frames to detect the lane markings in the very last frame. Specifically, the hybrid model integrates the following aspects: (a) a single-image feature extraction module based on a spatial convolutional neural network; (b) an ST feature integration module built on an ST recurrent neural network; (c) an encoder-decoder structure, which makes this image segmentation problem work in an end-to-end supervised learning format. Extensive experiments reveal that the proposed model architecture can effectively handle challenging driving scenes and outperforms available state-of-the-art methods.
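The sequence-to-one idea described above (per-frame CNN encoder, spatial-temporal recurrence across frames, decoder producing a lane mask for the last frame only) can be sketched as follows. This is a hypothetical minimal PyTorch illustration, not the authors' released code: the layer sizes, the simple convolutional recurrence standing in for the paper's ST recurrent module, and the class name `HybridSTLaneNet` are all assumptions made for clarity.

```python
import torch
import torch.nn as nn


class HybridSTLaneNet(nn.Module):
    """Hypothetical sketch: CNN encoder -> convolutional recurrence -> decoder."""

    def __init__(self, hidden=16):
        super().__init__()
        # (a) single-image feature extraction (downsamples by 4x)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, hidden, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, stride=2, padding=1), nn.ReLU(),
        )
        # (b) simple convolutional recurrence over time, standing in for
        # the ST recurrent neural network used in the paper
        self.rec = nn.Conv2d(2 * hidden, hidden, 3, padding=1)
        # (c) decoder upsampling back to input resolution (segmentation head)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(hidden, 1, 4, stride=2, padding=1),
        )

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) -- a short sequence of images
        b, t, c, h, w = frames.shape
        state = None
        for i in range(t):
            feat = self.encoder(frames[:, i])
            if state is None:
                state = torch.zeros_like(feat)
            # fuse current-frame features with the accumulated temporal state
            state = torch.tanh(self.rec(torch.cat([feat, state], dim=1)))
        # predict a lane-probability mask for the last frame only
        return torch.sigmoid(self.decoder(state))


model = HybridSTLaneNet()
x = torch.randn(2, 5, 3, 64, 128)   # 2 sequences of 5 frames each
mask = model(x)
print(mask.shape)                   # (2, 1, 64, 128): one mask per sequence
```

Because only the final state feeds the decoder, the network is sequence-to-one: earlier frames influence the prediction solely through the recurrent state, which matches the abstract's motivation of deducing hard-to-see lanes from preceding frames.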


History

  • 2025-02-20 first online, published, posted

Publisher

4TU.ResearchData

Format

py; txt; jpg; avi

Funding

  • Safe and efficient operation of AutoMated and human drivEN vehicles in mixed traffic (grant code 17187), funded by the Applied and Engineering Sciences domain (TTW) of the Dutch Research Council (NWO)

Organizations

Delft University of Technology, Faculty of Civil Engineering and Geosciences, Department of Transport and Planning

DATA

Files (3)