Artificial Intelligence (AI) chatbots are increasingly replacing human administrators, a development that seems inevitable. But does the public trust them? In this paper, I investigate the public’s initial trust in chatbots upon hearing that the government is about to introduce “AI” chatbots to respond to their inquiries. First, drawing on ergonomics studies, I propose sources of human trust in chatbots used in the public sector (see the table below).
I conducted an experimental study to test whether the public’s initial trust in chatbots alone, and their trust in chatbots relative to human administrators, depend on (i) the area of enquiry (because people’s confidence in the ability of chatbots to perform competently differs across areas) and (ii) the purposes that governments communicate to the public for introducing chatbots. The study involved an online survey administered in early 2019 to 8,000 individuals, aged 18-79, living in Japan. Based on the findings, the study draws several policy implications, including the following:
Expect Different Degrees of the Public’s Initial Trust in Chatbot Responses Across Areas of Enquiry
The study found that the public’s initial trust in AI chatbots’ responses would be lower in the area of parental support than in the area of waste sorting. This could be because providing a trustworthy response on parental support is much more demanding than on waste sorting: in both areas, trustworthy responses must give enquirers the information they want, but responses on parental support require more situational judgement and must be framed in a socially proper and more empathetic manner than responses on waste sorting. Studies have generally found that people tend not to use machines they do not trust; the differences in the public’s initial trust across areas of enquiry may therefore point to differences in the chatbot usage that government offices can expect if they introduce the technology.
Communicate Purposes for a Chatbot That Directly Benefit Citizens
AI is often said to help humans by reducing their workload. Japanese municipalities, too, justify the use of chatbots by saying that they reduce the burden on municipal staff, compensate for labor shortages, and free up staff time for other tasks. I did not find strong evidence that communicating these purposes, which sound as though the chatbot would directly benefit municipal staff, has a positive impact on the public’s initial trust in the machine. Instead, the study found that other purposes communicated by Japanese municipalities, ones indicating that the chatbot would directly benefit citizens, would slightly enhance the public’s initial trust. These purposes are achieving uniform response quality and timeliness in responding.
This paper is available at https://www.sciencedirect.com/science/article/pii/S0740624X1930406X?dgcid=author.