Journal article

The importance of the assurance that “humans are still in the decision loop” for public trust in artificial intelligence: Evidence from an online experiment

Aoki, N. (2021). The importance of the assurance that “humans are still in the decision loop” for public trust in artificial intelligence: Evidence from an online experiment. Computers in Human Behavior, 114. https://doi.org/10.1016/j.chb.2020.106572

Abstract

This study investigates the public’s initial trust in an artificial intelligence (AI) decision aid used in the delivery of public services. Amid societal anxiety surrounding AI, the study posited that the information communicated to the public about the use of AI influences the public’s initial trust in it. More specifically, the study hypothesized that an assurance that “humans are still in the decision loop” (HDL) makes a difference to the public’s initial trust (H1), and that this effect might also depend on the stated purposes for using AI (H2). This article reports the results of an online experiment testing these hypotheses in the context of Japan’s long-term nursing care sector, based on the responses of care users and their families (N = 1542). The study did not find strong evidence to support H2. However, it found some support for H1: the proportion of respondents who trusted a care plan prepared with AI assistance more than a care plan prepared without AI was 8.95 percentage points higher with the HDL assurance than without it. This highlights the importance of the HDL assurance and reveals respondents’ reservations about a complete AI takeover in care planning.