Identifying and Evaluating Components for Human Trust in AI-Automated Service Encounters
Information
Authors: Joakim Eklund, Fred Isaksson
Expected completion: 2019-06
Supervisor: Maria Mattsson Mähl
Supervisor's company/institution: AlphaCE
Subject reader: Kristiaan Pelckmans
Other: -
Presentations
Presentation by Joakim Eklund
Presentation time: 2019-06-10 16:15
Presentation by Fred Isaksson
Presentation time: 2019-06-10 17:15
Opponents: Karin Eklann, Linn Ansved
Abstract
The rise of AI-powered services has gone unnoticed by no one, and it is plausible to assume that we are only scratching the surface of what their spread will mean, both for practitioners and researchers. The growing expectation that AI will be part of our everyday lives invites speculation about the complex relationships we could one day have with non-human social intelligence. However, a number of pressing questions must be answered before this vision can become a reality. One of them: which factors will foster, and which will counteract, human trust in socially oriented AI-automated processes? What aspects of a human-machine interaction generate the trustworthiness needed for us to reveal personal information and to be influenced to the point where our image of AI is transformed from tool to teammate in all aspects of life?
This master's thesis tackles the seemingly ambiguous concept of trust in automation by identifying and evaluating components that affect trust in a confined and contextualised setting. In practice, we design, build and test an AI-automated chatbot, Ava, which poses socially oriented questions and gives feedback on vocational guidance and career advice. Through a comparative study of different system versions, drawing on both quantitative and qualitative data, we contribute to the framework for identifying, defining, measuring and evaluating human trust in AI-automated service encounters. More specifically, we account for how the integrity, benevolence and ability of a system, varied through alterations of transparency, bias and system reliability respectively, affect trust.