Research Article

The SAGE Framework for Explaining Context in Explainable Artificial Intelligence

Article: 2318670 | Received 31 Mar 2023, Accepted 01 Feb 2024, Published online: 22 Feb 2024

References

  • Adadi, A., and M. Berrada. 2018. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 6:52138–60. doi:10.1109/ACCESS.2018.2870052.
  • Arrieta, A. B., N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, R. Benjamins, et al. 2020. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58:82–115. doi:10.1016/j.inffus.2019.12.012.
  • Arya, V., R. K. Bellamy, P.-Y. Chen, A. Dhurandhar, M. Hind, S. C. Hoffman, S. Houde, Q. V. Liao, R. Luss, A. Mojsilović, et al. 2019. One explanation does not fit all: A toolkit and taxonomy of ai explainability techniques. arXiv Preprint arXiv 1909:03012.
  • Abdul, A., J. Vermeulen, D. Wang, B. Y. Lim, and M. Kankanhalli. 2018. Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. In Proceedings of the 2018 CHI Conference on human factors in computing systems, Montreal, Canada, 1–18.
  • Atakishiyev, S., H. Babiker, N. Farruque, R. Goebel, M. Y. Kim, M. Hossein Motallebi, J. Rabelo, T. Syed, and O. R. Zaïane. 2020. A multi-component framework for the analysis and design of explainable artificial intelligence. arXiv Preprint arXiv 2005:01908.
  • Bahalul Haque, A. K. M., A. K. M. Najmul Islam, and P. Mikalef. 2023. Explainable artificial intelligence (xai) from a user perspective: A synthesis of prior literature and problematizing avenues for future research. Technological Forecasting and Social Change 186:122120. doi:10.1016/j.techfore.2022.122120.
  • Biran, O., and C. Cotton. 2017. Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI), Melbourne, Australia, 8, 8–13.
  • Bobadilla, J., F. Ortega, A. Hernando, and A. Gutiérrez. 2013. Recommender systems survey. Knowledge-Based Systems 46:109–32. doi:10.1016/j.knosys.2013.03.012.
  • Brown, P. J. 1995. The stick-e document: A framework for creating context-aware applications. Electronic Publishing-Chichester- 8:259–72.
  • Buchanan, B. 2019. Artificial Intelligence in Finance. London, UK: The Alan Turing Institute. doi:10.5281/zenodo.2612537.
  • Bunt, A., M. Lount, and C. Lauzon. 2012. Are explanations always important? A study of deployed, low-cost intelligent interactive systems. In Proceedings of the 2012 ACM international conference on intelligent user interfaces, Lisboa, Portugal, 169–78.
  • Byrne, R. M. 1991. The construction of explanations, AI and Cognitive Science '90: University of Ulster, Jordanstown, 337–51. London: Springer.
  • Cai, H., C. Gan, T. Wang, Z. Zhang, and S. Han. 2019. Once-for-all: Train one network and specialize it for efficient deployment. arXiv Preprint arXiv 1908:09791.
  • Carenini, G., and J. D. Moore. 1993. Generating explanations in context. In Proceedings of the 1st international conference on intelligent user interfaces, Orlando, Florida, USA, 175–82.
  • Centre for Data Ethics and Innovation. 2020. Review into bias in algorithmic decision-making. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/957259/Review_into_bias_in_algorithmic_decision-making.pdf.
  • Chami, R., T. F. Cosimano, and C. Fullenkamp. 2002. Managing ethical risk: How investing in ethics adds value. Journal of Banking & Finance 26 (9):1697–718. doi:10.1016/S0378-4266(02)00188-7.
  • Chesterman, S. 2021. Through a glass, darkly: Artificial intelligence and the problem of opacity. The American Journal of Comparative Law 69 (2):271–94. doi:10.1093/ajcl/avab012.
  • Bouch, D. C., and J. P. Thompson. 2008. Severity scoring systems in the critically ill. Continuing Education in Anaesthesia Critical Care & Pain 8 (5):181–85. doi:10.1093/bjaceaccp/mkn033.
  • Chromik, M., M. Eiband, F. Buchner, A. Krüger, and A. Butz. 2021. I think I get your point, AI! The illusion of explanatory depth in explainable AI. In 26th International conference on intelligent user interfaces, College Station, Texas, USA, 307–17.
  • Clancey, W. J. 1986. From guidon to neomycin and heracles in twenty short lessons. AI Magazine 7 (3):40–40.
  • Daly, E. M., F. Lecue, and V. Bicer. 2013. Westland Row why so slow? Fusing social media and linked data sources for understanding real-time traffic conditions. In Proceedings of the 2013 international conference on intelligent user interfaces, Santa Monica, California, USA, 203–12.
  • Dey, A. K. 2001. Understanding and using context. Personal and Ubiquitous Computing 5 (1):4–7. doi:10.1007/s007790170019.
  • Doshi-Velez, F., and B. Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv Preprint arXiv 1702:08608.
  • Dourish, P. 2004. What we talk about when we talk about context. Personal and Ubiquitous Computing 8 (1):19–30. doi:10.1007/s00779-003-0253-8.
  • Edwards, L., and M. Veale. 2017. Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke Law & Technology Review 16:18.
  • Eriksson, T., A. Bigi, and M. Bonera. 2020. Think with me, or think for me? On the future role of artificial intelligence in marketing strategy formulation. The TQM Journal 32 (4):795–814. doi:10.1108/TQM-12-2019-0303.
  • Freitas, A. A. 2014. Comprehensible classification models: A position paper. ACM SIGKDD Explorations Newsletter 15 (1):1–10. doi:10.1145/2594473.2594475.
  • Fursin, G. 2020. Enabling reproducible ML and systems research: The good, the bad, and the ugly. August. https://doi.org/10.5281/zenodo.4005773.
  • Gilpin, L. H., D. Bau, B. Z. Yuan, A. Bajwa, M. Specter, and L. Kagal. 2018. Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th international conference on data science and advanced analytics (DSAA), Turin, Italy, 80–89. IEEE.
  • Guidotti, R., A. Monreale, and D. Pedreschi. 2019. The AI black box explanation problem. ERCIM News 116:12–13.
  • Guidotti, R., A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi. 2018. A survey of methods for explaining black box models. ACM Computing Surveys (CSUR) 51 (5):1–42. doi:10.1145/3236009.
  • Hall, P., N. Gill, and N. Schmidt. 2019. Proposed guidelines for the responsible use of explainable machine learning. arXiv Preprint arXiv 1906:03533.
  • Hall, W., and J. Pesenti. 2017. Growing the artificial intelligence industry in the UK. UK: Department for Science, Innovation and Technology, UK Government. https://assets.publishing.service.gov.uk/media/5a824465e5274a2e87dc2079/Growing_the_artificial_intelligence_industry_in_the_UK.pdf.
  • Haynes, S. R., M. A. Cohen, and F. E. Ritter. 2009. Designs for explaining intelligent agents. International Journal of Human-Computer Studies 67 (1):90–110. doi:10.1016/j.ijhcs.2008.09.008.
  • Hoffman, R. R., G. Klein, and S. T. Mueller. 2018. Explaining explanation for “explainable AI”. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62, 197–201. Los Angeles, CA: SAGE Publications.
  • Hoffman, R. R., S. T. Mueller, G. Klein, and J. Litman. 2018. Metrics for explainable AI: Challenges and prospects. arXiv Preprint arXiv 1812:04608.
  • Holliday, D., S. Wilson, and S. Stumpf. 2016. User trust in intelligent systems: A journey over time. In Proceedings of the 21st international conference on intelligent user interfaces, California, USA, 164–68.
  • Hotten, R. 2015. Volkswagen: The scandal explained. BBC News, December 10.
  • Hull, R., P. Neaves, and J. Bedford-Roberts. 1997. Towards situated computing. In Digest of Papers. First International Symposium on Wearable Computers, Cambridge, Massachusetts, USA, 146–53. IEEE.
  • Huysmans, J., K. Dejaeger, C. Mues, J. Vanthienen, and B. Baesens. 2011. An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models. Decision Support Systems 51 (1):141–54. doi:10.1016/j.dss.2010.12.003.
  • Ibáñez Molinero, R., and J. Antonio García-Madruga. 2011. Knowledge and question asking. Psicothema 23 (1):26–30.
  • Murdoch, W. J., C. Singh, K. Kumbier, R. Abbasi-Asl, and B. Yu. 2019. Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences 116 (44):22071–80. doi:10.1073/pnas.1900654116.
  • Riegelsberger, J., M. A. Sasse, and J. D. McCarthy. 2005. The mechanics of trust: A framework for research and design. International Journal of Human-Computer Studies 62 (3):381–422. doi:10.1016/j.ijhcs.2005.01.001.
  • Jobin, A., M. Ienca, and E. Vayena. 2019. The global landscape of ai ethics guidelines. Nature Machine Intelligence 1 (9):389–99. doi:10.1038/s42256-019-0088-2.
  • Kaushal, R., K. G. Shojania, and D. W. Bates. 2003. Effects of computerized physician order entry and clinical decision support systems on medication safety: A systematic review. Archives of Internal Medicine 163 (12):1409–16. doi:10.1001/archinte.163.12.1409.
  • Kizilcec, R. F. 2016. How much information? Effects of transparency on trust in an algorithmic interface. In Proceedings of the 2016 CHI conference on human factors in computing systems, San Jose, California, USA, 2390–95.
  • Klein, G., B. Shneiderman, R. R. Hoffman, and K. M. Ford. 2017. Why expertise matters: A response to the challenges. IEEE Intelligent Systems 32 (6):67–73. doi:10.1109/MIS.2017.4531230.
  • Kononenko, I. 2001. Machine learning for medical diagnosis: History, state of the art and perspective. Artificial Intelligence in Medicine 23 (1):89–109. doi:10.1016/S0933-3657(01)00077-X.
  • Kulesza, T., S. Stumpf, M. Burnett, S. Yang, I. Kwan, and W.-K. Wong. 2013. Too much, too little, or just right? Ways explanations impact end users’ mental models. In 2013 IEEE symposium on visual languages and human centric computing, San Jose, California, USA, 3–10, IEEE.
  • Lamy, J.-B., K. Sedki, and R. Tsopra. 2020. Explainable decision support through the learning and visualization of preferences from a formal ontology of antibiotic treatments. Journal of Biomedical Informatics 104:103407. doi:10.1016/j.jbi.2020.103407.
  • Langer, M., D. Oster, T. Speith, H. Hermanns, L. Kästner, E. Schmidt, A. Sesing, and K. Baum. 2021. What do we want from explainable artificial intelligence (XAI)?–A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence 296:103473. doi:10.1016/j.artint.2021.103473.
  • Lee, W. J., H. Wu, H. Yun, H. Kim, M. B. Jun, and J. W. Sutherland. 2019. Predictive maintenance of machine tool systems using artificial intelligence techniques applied to machine condition data. Procedia Cirp 80:506–11. doi:10.1016/j.procir.2018.12.019.
  • Li, B.-H., B.-C. Hou, W.-T. Yu, X.-B. Lu, and C.-W. Yang. 2017. Applications of artificial intelligence in intelligent manufacturing: A review. Frontiers of Information Technology & Electronic Engineering 18 (1):86–96. doi:10.1631/FITEE.1601885.
  • Lipton, Z. C. 2018. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 16 (3):31–57. doi:10.1145/3236386.3241340.
  • Lombrozo, T. 2006. The structure and function of explanations. Trends in Cognitive Sciences 10 (10):464–70. doi:10.1016/j.tics.2006.08.004.
  • Lombrozo, T. 2007. Simplicity and probability in causal explanation. Cognitive Psychology 55 (3):232–57. doi:10.1016/j.cogpsych.2006.09.006.
  • Lombrozo, T., and S. Carey. 2006. Functional explanation and the function of explanation. Cognition 99 (2):167–204. doi:10.1016/j.cognition.2004.12.009.
  • Matus, K. J., and M. Veale. 2022. Certification systems for machine learning: Lessons from sustainability. Regulation & Governance 16 (1):177–96. doi:10.1111/rego.12417.
  • Meske, C., E. Bunde, J. Schneider, and M. Gersch. 2022. Explainable artificial intelligence: Objectives, stakeholders, and future research opportunities. Information Systems Management 39 (1):53–63. doi:10.1080/10580530.2020.1849465.
  • Mikalef, P., K. Conboy, J. Eriksson Lundström, and A. Popovič. 2022. Thinking responsibly about responsible AI and ‘the dark side’ of AI. European Journal of Information Systems 31 (3):257–68. doi:10.1080/0960085X.2022.2026621.
  • Miller, R. A. 1994. Medical diagnostic decision support systems–past, present, and future: A threaded bibliography and brief commentary. Journal of the American Medical Informatics Association 1 (1):8–27. doi:10.1136/jamia.1994.95236141.
  • Miller, T. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267:1–38. doi:10.1016/j.artint.2018.07.007.
  • Miller, T., P. Howe, and L. Sonenberg. 2017. Explainable AI: Beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences. arXiv Preprint arXiv 1712:00547.
  • Mill, E., W. Garn, and N. Ryman-Tubb. 2022. Managing sustainability tensions in artificial intelligence: Insights from paradox theory. Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, Oxford, England, UK, 491–98.
  • Mill, E., W. Garn, N. Ryman-Tubb, and C. Turner. 2023. Opportunities in real time fraud detection: An explainable artificial intelligence (XAI) research agenda. International Journal of Advanced Computer Science & Applications 14 (5). doi:10.14569/IJACSA.2023.01405121.
  • Mittelstadt, B., C. Russell, and S. Wachter. 2019. Explaining explanations in AI. In Proceedings of the conference on fairness, accountability, and transparency, Atlanta, Georgia, USA, 279–88.
  • Mohseni, S., N. Zarei, and E. D. Ragan. 2021. A multidisciplinary survey and framework for design and evaluation of explainable AI systems. ACM Transactions on Interactive Intelligent Systems (TiiS) 11 (3–4):1–45. doi:10.1145/3387166.
  • Moore, J. D., and W. R. Swartout. 1989. A reactive approach to explanation, 11th International Joint Conference on Artificial Intelligence, Detroit, Michigan, USA. 1504–10.
  • Nielsen, A. B., H.-C. Thorsen-Meyer, K. Belling, A. P. Nielsen, C. E. Thomas, P. J. Chmura, M. Lademann, P. L. Moseley, M. Heimann, L. Dybdahl, et al. 2019. Survival prediction in intensive-care units based on aggregation of long-term disease history and acute physiology: A retrospective study of the Danish National Patient Registry and electronic patient records. Lancet Digital Health 1 (2):e78–e89. doi:10.1016/S2589-7500(19)30024-X.
  • Nikolaou, V., S. Massaro, M. Fakhimi, and W. Garn. 2022. Using machine learning to detect theranostic biomarkers predicting respiratory treatment response. Life 12 (6):775. doi:10.3390/life12060775.
  • Nikolaou, V., S. Massaro, M. Fakhimi, L. Stergioulas, and W. Garn. 2021. Covid-19 diagnosis from chest x-rays: Developing a simple, fast, and accurate neural network. Health Information Science and Systems 9 (1):1–11. doi:10.1007/s13755-021-00166-4.
  • Nozaki, N., E. Konno, M. Sato, M. Sakairi, T. Shibuya, Y. Kanazawa, and S. Georgescu. 2017. Application of artificial intelligence technology in product design. Fujitsu Scientific & Technical Journal 53 (4):43–51.
  • Panda, S. 2018. Impact of AI in manufacturing industries. International Research Journal of Engineering & Technology (IRJET) 5 (11):1765–67.
  • Pazzani, M. J. 2000. Knowledge discovery from data? IEEE Intelligent Systems and Their Applications 15 (2):10–12. doi:10.1109/5254.850821.
  • Pu, P., and L. Chen. 2007. Trust-inspiring explanation interfaces for recommender systems. Knowledge-Based Systems 20 (6):542–56. doi:10.1016/j.knosys.2007.04.004.
  • Poulin, B., R. Eisner, D. Szafron, P. Lu, R. Greiner, D. S. Wishart, A. Fyshe, B. Pearcy, C. MacDonell, and J. Anvik. 2006. Visual explanation of evidence with additive classifiers. In Proceedings of the national conference on artificial intelligence, 21, 1822. Menlo Park, CA: AAAI Press; Cambridge, MA: MIT Press.
  • Preece, A., D. Harborne, D. Braines, R. Tomsett, and S. Chakraborty. 2018. Stakeholders in explainable ai. arXiv Preprint arXiv 1810:00184.
  • Prokopenko, O., L. Shmorgun, V. Kushniruk, M. Prokopenko, M. Slatvinska, and L. Huliaieva. 2020. Business process efficiency in a digital economy. International Journal of Management (IJM) 11 (3):122–32.
  • Putnam, H. 1978. Meaning and the moral sciences (routledge revivals). London: Routledge.
  • Ras, G., M. van Gerven, and P. Haselager. 2018. Explanation methods in deep learning: Users, values, concerns and challenges. In Explainable and interpretable models in computer vision and machine learning. The Springer Series on Challenges in Machine Learning, ed. H. Escalante, I. Guyon, and S. Escalera, 19–36. Cham, Switzerland: Springer. doi:10.1007/978-3-319-98131-4_2.
  • Redden, J., J. Brand, I. Sander, H. Warne, and Data Justice Lab. 2022. Automating Public Services - learning from cancelled systems. Dunfermline, Fife, Scotland: Carnegie UK. https://carnegieuktrust.org.uk/publications/automating-public-services-learning-from-cancelled-systems/.
  • Ribeiro, M. T., S. Singh, and C. Guestrin. 2016. “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, San Francisco, California, USA, 1135–44.
  • Ribera, M., and A. Lapedriza. 2019. Can we do better explanations? A proposal of user-centered explainable AI. Joint Proceedings of the ACM IUI 2019 Workshops, Los Angeles, USA, 2327:38.
  • Ruben, D.-H. 2012. Explaining explanation. Routledge. doi:10.4324/9781315634739.
  • Rudin, C. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1 (5):206–15. doi:10.1038/s42256-019-0048-x.
  • Rüping, S. 2006. Learning interpretable models. PhD thesis, University of Dortmund.
  • Ryan, N. S., J. Pascoe, and D. R. Morse. 1998. Enhanced reality fieldwork: The context aware archaeological assistant. In Archaeology in the Age of the Internet - CAA97. Computer Applications and Quantitative Methods in Archaeology. Proceedings of the 25th Anniversary Conference, ed. S. Dingwall, S. Exon, V. Gaffney, S. Laflin, and M. van Leusen, 269–74. Oxford, UK: Archaeopress.
  • Ryman-Tubb, N. F., P. Krause, and W. Garn. 2018. How artificial intelligence and machine learning research impacts payment card fraud detection: A survey and industry benchmark. Engineering Applications of Artificial Intelligence 76:130–57. doi:10.1016/j.engappai.2018.07.008.
  • Amershi, S., M. Chickering, S. M. Drucker, B. Lee, P. Simard, and J. Suh. 2015. ModelTracker: Redesigning performance analysis tools for machine learning. In Proceedings of the 33rd annual ACM conference on human factors in computing systems, Seoul, Republic of Korea, 337–46.
  • Samek, W., G. Montavon, A. Vedaldi, L. Kai Hansen, and K.-R. Müller. 2019. Explainable AI: Interpreting, explaining and visualizing deep learning, vol. 11700, Lecture Notes in Computer Science. Switzerland: Springer Nature.
  • Sanneman, L., and J. A. Shah. 2020. The situation awareness framework for explainable AI (SAFE-AI) and human factors considerations for XAI systems. International Journal of Human–Computer Interaction 38:1772–88. doi:10.1080/10447318.2022.2081282.
  • Schilit, B. N., and M. M. Theimer. 1994. Disseminating active map information to mobile hosts. IEEE Network 8 (5):22–32. doi:10.1109/65.313011.
  • Schwartz, R., J. Dodge, N. A. Smith, and O. Etzioni. 2020. Green AI. Communications of the ACM 63 (12):54–63. doi:10.1145/3381831.
  • Sinha, R., and K. Swearingen. 2002. The role of transparency in recommender systems. In CHI '02 Extended Abstracts on Human Factors in Computing Systems, Minneapolis, Minnesota, USA, 830–31.
  • Sokol, K., and P. Flach. 2020. Explainability fact sheets: A framework for systematic assessment of explainable approaches. In Proceedings of the 2020 conference on fairness, accountability, and transparency, Barcelona, Spain, 56–67.
  • Sørmo, F., J. Cassens, and A. Aamodt. 2005. Explanation in case-based reasoning–perspectives and goals. Artificial Intelligence Review 24 (2):109–43. doi:10.1007/s10462-005-4607-7.
  • Spinner, T., U. Schlegel, H. Schäfer, and M. El-Assady. 2019. explAIner: A visual analytics framework for interactive and explainable machine learning. IEEE Transactions on Visualization and Computer Graphics 26 (1):1064–74. doi:10.1109/TVCG.2019.2934629.
  • Strubell, E., A. Ganesh, and A. McCallum. 2019. Energy and policy considerations for deep learning in NLP. arXiv Preprint arXiv 1906:02243.
  • Swartout, W. R. 1983. XPLAIN: A system for creating and explaining expert consulting programs. Artificial Intelligence 21 (3):285–325. doi:10.1016/S0004-3702(83)80014-9.
  • Tomsett, R., D. Braines, D. Harborne, A. Preece, and S. Chakraborty. 2018. Interpretable to whom? A role-based model for analyzing interpretable machine learning systems. arXiv Preprint arXiv 1806:07552.
  • Tulio Ribeiro, M., S. Singh, and C. Guestrin. 2016. Model-agnostic interpretability of machine learning. arXiv Preprint arXiv 1606:05386.
  • Vaccaro, M. A. 2019. Algorithms in human decision-making: A case study with the compas risk assessment software. PhD thesis, Harvard College.
  • Van den Berg, R., E. Awh, and W. Ji Ma. 2014. Factorial comparison of working memory models. Psychological Review 121 (1):124. doi:10.1037/a0035234.
  • van Wynsberghe, A. 2021. Sustainable AI: Ai for sustainability and the sustainability of ai. AI and Ethics 1 (3):213–18. doi:10.1007/s43681-021-00043-6.
  • Wachter, S., B. Mittelstadt, and L. Floridi. 2017. Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law 7 (2):76–99. doi:10.1093/idpl/ipx005.
  • Wang, W., and I. Benbasat. 2007. Recommendation agents for electronic commerce: Effects of explanation facilities on trusting beliefs. Journal of Management Information Systems 23 (4):217–46. doi:10.2753/MIS0742-1222230410.
  • Wang, D., E. Churchill, P. Maes, X. Fan, B. Shneiderman, Y. Shi, and Q. Wang. 2020. From human-human collaboration to human-AI collaboration: Designing AI systems that can work together with people. In Extended abstracts of the 2020 CHI conference on human factors in computing systems, Honolulu, Hawaii, USA, 1–6.
  • Watson, D. S., and L. Floridi. 2021. The explanation game: A formal framework for interpretable machine learning. Synthese 198:9211–42. doi:10.1007/s11229-020-02629-9.
  • Weller, A. 2019. Transparency: Motivations and challenges. In Explainable AI: Interpreting, explaining and visualizing deep learning, ed. W. Samek, G. Montavon, A. Vedaldi, L. K. Hansen, and K.-R. Müller, 23–40. Berlin, Heidelberg: Springer-Verlag.
  • Wick, M. R. 1992. Expert system explanation in retrospect: A case study in the evolution of expert system explanation. Journal of Systems and Software 19 (2):159–69. doi:10.1016/0164-1212(92)90068-U.
  • Wick, M. R., P. Dutta, T. Wineinger, and J. Conner. 1995. Reconstructive explanation: A case study in integral calculus. Expert Systems with Applications 8 (4):463–73. doi:10.1016/0957-4174(94)E0036-T.
  • Wick, M. R., and W. B. Thompson. 1992. Reconstructive expert system explanation. Artificial Intelligence 54 (1–2):33–70. doi:10.1016/0004-3702(92)90087-E.
  • Wixom, B. H., and P. A. Todd. 2005. A theoretical integration of user satisfaction and technology acceptance. Information Systems Research 16 (1):85–102. doi:10.1287/isre.1050.0042.
  • Wolff Anthony, L. F., B. Kanding, and R. Selvan. 2020. Carbontracker: Tracking and predicting the carbon footprint of training deep learning models. arXiv Preprint arXiv 2007:03051.
  • Wuest, T., A. Kusiak, T. Dai, and S. R. Tayur. 2020. Impact of COVID-19 on manufacturing and supply networks–the case for AI-inspired digital transformation. Available at SSRN 3593540.
  • Zhang, Y., K. Song, Y. Sun, S. Tan, and M. Udell. 2019. “Why should you trust my explanation?” Understanding uncertainty in LIME explanations. arXiv Preprint arXiv 1904:12991.
  • Zintgraf, L. M., T. S. Cohen, T. Adel, and M. Welling. 2017. Visualizing deep neural network decisions: Prediction difference analysis. arXiv Preprint arXiv 1702:04595.