Subjective and objective data from the master thesis "Exploring the Effect of Automation Failure on the Human's Trustworthiness in Human-Agent Teamwork"

doi: 10.4121/21701216.v1
The DOI above is for this specific version of the dataset, which is currently the latest. Newer versions may be published in the future. For a link that always points to the latest version, please use
doi: 10.4121/21701216
DataCite citation style:
Nikki Bouman (2023): Subjective and objective data from the master thesis "Exploring the Effect of Automation Failure on the Human's Trustworthiness in Human-Agent Teamwork". Version 1. 4TU.ResearchData. dataset. https://doi.org/10.4121/21701216.v1
Other citation styles (APA, Harvard, MLA, Vancouver, Chicago, IEEE) available at DataCite
Dataset

Contains the answers to the Qualtrics questionnaire and the results of the analysed objective measurements. The questions are given in the top row. The variables are coded as follows (a loading sketch follows the list below):

- scenario: 1 means no automation failure, 2 means automation failure (during the "post" game)

- trust: 1-5 Likert scale

- liking: 1-5 Likert scale

- propensity: 1-5 Likert scale

- trustworthiness: -100 to 100 sliding scale
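
To make the coding above concrete, a minimal loading sketch in Python/pandas is given below; the file name ("data.xlsx") and the column headers used here (e.g. "scenario", "post-trust", "post-trustworthiness") are assumptions and should be checked against the actual .xlsx.

    # Minimal loading sketch; the file name and column names are assumptions.
    import pandas as pd

    df = pd.read_excel("data.xlsx")
    print(df.columns.tolist())  # inspect the real column names first

    # scenario coding: 1 = no automation failure, 2 = automation failure
    df["condition"] = df["scenario"].map({1: "no failure", 2: "failure"})

    # example: mean of a 1-5 Likert item and the -100 to 100 trustworthiness
    # slider per condition (column names assumed)
    print(df.groupby("condition")[["post-trust", "post-trustworthiness"]].mean())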


The participant first answers questions about their gender, age, and gaming experience, after which they are assigned the scenario number. They then play the tutorial, followed by the first game. This game is the same for every scenario: boxes have to be moved to the drop zone in the correct order. There are three types of boxes: light, medium, and heavy. Light boxes can only be carried alone; medium boxes can be carried alone or together, but carrying one alone slows the participant down; heavy boxes can only be carried together. An automated agent (the robot) walks around trying to help. Whenever the participant needs help, they can press the help button and the robot comes to them; when the robot needs help, a red exclamation mark appears next to its head.


During the second game, the robot fails in scenario 2; in scenario 1, the robot behaves the same as in the first game.


Measurements are taken after the first game ("mid-") and after the second game ("post-"). Automation failure occurs only during the second game and consists of the robot breaking boxes (letting go of a box, whether carrying it together with the human or alone), picking up the wrong box, dropping a box at the wrong location, and asking for help at the wrong box. The objective measures are listed below, with a summary sketch after the list:

- broken boxes (how many boxes were broken), per robot or per participant

- ask for help (how many times an agent asked for help), per robot or per participant

- respond to help (how many times an agent responded to a call for help), per robot or per participant

- carried together (how many times the participant and robot carried a box together)

- carried alone (how many times the participant carried a box alone)

- response (how long it took, in seconds, for the participant to respond to the robot's call for help)
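
A similarly hedged sketch of how these objective measures could be summarised per scenario follows; again, the exact column names are assumptions and may differ in the spreadsheet (for example, separate robot and participant columns).

    # Hedged summary sketch; column names are assumptions, check the spreadsheet.
    import pandas as pd

    df = pd.read_excel("data.xlsx")

    objective = ["broken boxes", "ask for help", "respond to help",
                 "carried together", "carried alone", "response"]
    present = [c for c in objective if c in df.columns]  # keep only existing columns

    # mean of each objective measure per scenario (1 = no failure, 2 = failure)
    print(df.groupby("scenario")[present].mean())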



history
  • 2023-01-03 first online, published, posted
publisher
4TU.ResearchData
format
.xlsx
organizations
TU Delft, Faculty of Electrical Engineering, Mathematics and Computer Science, Department of Intelligent Systems
