Research Article

Engineering Responsible and Explainable Models in Human-Agent Collectives

Article: 2282834 | Received 20 Nov 2022, Accepted 28 Oct 2023, Published online: 05 Dec 2023
