TY - DATA
T1 - Supplementary data for the paper 'Putting ChatGPT Vision (GPT-4V) to the test: Risk perception in traffic images'
PY - 2024/04/16
AU - Driessen, Tom
AU - Dodou, Dimitra
AU - Bazilinskyy, Pavlo
AU - de Winter, Joost
UR - 
DO - 10.4121/dfbe6de4-d559-49cd-a7c6-9bebe5d43d50.v2
KW - ChatGPT
KW - GPT-4V
KW - vision-language models
KW - risk assessment
KW - traffic
N2 - Vision-language models are of interest in various domains, including automated driving, where computer vision techniques can accurately detect road users but where the vehicle sometimes fails to understand context. This study examined the effectiveness of GPT-4V in predicting the level of ‘risk’ in traffic images as assessed by humans. We used 210 static images taken from a moving vehicle, each previously rated by approximately 650 people. Based on psychometric construct theory and using insights from the self-consistency prompting method, we formulated three hypotheses: 1) repeating the prompt under effectively identical conditions increases validity, 2) varying the prompt text and extracting a total score increases validity compared to using a single prompt, and 3) in a multiple regression analysis, incorporating object detection features alongside the GPT-4V-based risk rating significantly improves the model’s validity. Validity was quantified by the correlation coefficient with human risk scores across the 210 images. The results confirmed the three hypotheses. The final validity coefficient was r = 0.83, indicating that population-level human risk can be predicted using AI with a high degree of accuracy. The findings suggest that GPT-4V must be prompted in a way equivalent to how humans fill out a multi-item questionnaire.
ER -