JOURNAL ARTICLES

Explainable AI for government: Does the type of explanation matter to the accuracy, fairness, and trustworthiness of an algorithmic decision as perceived by those who are affected?

Aoki, N., Tatsumi, T., Naruse, G., & Maeda, K. (2024). Explainable AI for government: Does the type of explanation matter to the accuracy, fairness, and trustworthiness of an algorithmic decision as perceived by those who are affected? Government Information Quarterly, 41(4), 101965. https://doi.org/10.1016/j.giq.2024.101965

Abstract

Amidst concerns over biased and misguided government decisions arrived at through algorithmic treatment, it is important for members of society to be able to perceive that public authorities are making fair, accurate, and trustworthy decisions. Inspired in part by equity and procedural justice theories and by theories of attitudes towards technologies, we posited that the perception of these attributes of a decision is influenced by the type of explanation offered, which can be input-based, group-based, case-based, or counterfactual. We tested our hypotheses with two studies, each of which involved a pre-registered online survey experiment conducted in December 2022. In both studies, the subjects (N = 1200) were officers in high positions at stock companies registered in Japan, who were presented with a scenario consisting of an algorithmic decision made by a public authority: a ministry’s decision to reject a grant application from their company (Study 1) and a tax authority’s decision to select their company for an on-site tax inspection (Study 2). The studies revealed that offering the subjects some type of explanation had a positive effect on their attitude towards a decision, to various extents, although the detailed results were not robust across the two studies. These findings call for a nuanced inquiry, both in research and practice, into how best to design explanations of algorithmic decisions from societal and human-centric perspectives in different decision-making contexts.