Supplementary data for the paper ‘Stopping by looking: A driver-pedestrian interaction study in a coupled simulator using head-mounted displays with eye-tracking’

doi: 10.4121/20005565.v1
The doi above is for this specific version of this dataset, which is currently the latest. Newer versions may be published in the future. For a link that will always point to the latest version, please use:
doi: 10.4121/20005565
DataCite citation style:
Mok, Chun Sang; Bazilinskyy, Pavlo; de Winter, Joost (2022): Supplementary data for the paper ‘Stopping by looking: A driver-pedestrian interaction study in a coupled simulator using head-mounted displays with eye-tracking’. Version 1. 4TU.ResearchData. dataset. https://doi.org/10.4121/20005565.v1
Other citation styles (APA, Harvard, MLA, Vancouver, Chicago, IEEE) are available via DataCite.
Dataset

 Automated vehicles (AVs) can perform low-level control tasks but are not always capable of proper decision-making. This paper presents a concept of eye-based maneuver control for AV-pedestrian interaction. Previously, it was unknown whether the AV should conduct a stopping maneuver when the driver looks at the pedestrian or looks away from the pedestrian. A two-agent experiment was conducted using two head-mounted displays with integrated eye-tracking. Seventeen pairs of participants (pedestrian and driver) each interacted in a road crossing scenario. The pedestrians’ task was to hold a button when they felt safe to cross the road, and the drivers’ task was to direct their gaze according to instructions. Participants completed three 16-trial blocks: (1) Baseline, in which the AV was pre-programmed to yield or not yield, (2) Look to Yield (LTY) in which the AV yielded when the driver looked at the pedestrian, and (3) Look Away to Yield (LATY) in which the AV yielded when the driver did not look at the pedestrian. The driver’s eye movements in the LTY and LATY conditions were visualized using a virtual light beam. A performance score was computed based on whether the pedestrian held the button when the AV yielded and released the button when the AV did not yield. Furthermore, the pedestrians’ and drivers’ acceptance of the mappings was measured through a questionnaire. The results showed that the LTY and LATY mappings yielded better crossing performance than Baseline. Furthermore, the LTY condition was best accepted by drivers and pedestrians. Eye-tracking analyses indicated that the LTY and LATY mappings attracted the pedestrian’s attention, but pedestrians adequately distributed their attention between the AV and a second vehicle approaching from the other direction. In conclusion, LTY control may be a promising means of AV control at intersections before full automation is technologically feasible. 
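
The performance score described above lends itself to a simple per-trial check. A minimal sketch of such a computation is shown below; the function name, input format, and exact scoring scheme are assumptions for illustration and are not taken from the dataset or the paper, which may score trials differently (e.g. on a time-weighted basis).

```python
# Hedged sketch: per-trial crossing-performance score.
# Assumption: for each trial we know whether the AV yielded and whether the
# pedestrian held the crossing button; the paper's exact formula may differ.

def crossing_score(av_yielded: list[bool], button_held: list[bool]) -> float:
    """Fraction of trials in which the pedestrian's button state matched the
    AV's behaviour: button held when the AV yielded, released when it did not."""
    assert len(av_yielded) == len(button_held)
    correct = sum(y == h for y, h in zip(av_yielded, button_held))
    return correct / len(av_yielded)

# Example: a 16-trial block with one mismatched trial -> score of 15/16.
print(crossing_score([True] * 8 + [False] * 8,
                     [True] * 8 + [False] * 7 + [True]))
```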

history
  • 2022-06-23 first online, published, posted
publisher
4TU.ResearchData
format
.txt; .pdf; .xlsx; .m; .mat
funding
  • This research is supported by grant 016.Vidi.178.047 (“How should automated vehicles communicate with other road users?”), which is financed by the Netherlands Organisation for Scientific Research (NWO).
organizations
Department of Cognitive Robotics, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology

DATA

files (2)
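
The formats listed above include .mat and .xlsx files. A minimal sketch of inspecting such files in Python follows; the filenames are placeholders, not the actual names of the two files in this dataset, so substitute the names of the files you download.

```python
# Hedged sketch: inspecting the dataset files in Python.
# The filenames below are placeholders for the actual downloaded files.
from scipy.io import loadmat
import pandas as pd

mat = loadmat("experiment_data.mat")                    # MATLAB variables as a dict
print([k for k in mat if not k.startswith("__")])       # list variable names, skip metadata keys

questionnaire = pd.read_excel("questionnaire.xlsx")     # tabular data, if stored as .xlsx
print(questionnaire.head())
```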