Candidate Solutions for Defining Explainability Requirements of AI Systems (CANCELLED)
[Context and Motivation] Many recent studies highlight explainability as an important requirement that supports building transparent, trustworthy, and responsible AI systems. As a result, researchers have developed an increasing number of solutions to assist in defining explainability requirements. [Question] We conducted a literature study to analyze what kinds of candidate solutions have been proposed for defining explainability requirements of AI systems. The focus of this literature review is especially on the field of requirements engineering (RE). [Results] The proposed solutions for defining explainability requirements, such as approaches, frameworks, and models, are comprehensive. They can be used not only for RE activities but also for testing and evaluating AI systems. In addition to the comprehensive solutions, we identified 30 practices that support the development of explainable AI systems. The literature study also revealed that most of the proposed solutions have not been evaluated in real projects and that there is a need for empirical studies. [Contribution] For researchers, the study provides an overview of the candidate solutions and their characteristics. For practitioners, the paper summarizes potential practices that can help them define and evaluate the explainability requirements of AI systems.
Thu 11 Apr (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
14:00 - 15:30 | Explainability with and in RE (R9) | Research Track | Vorhangsaal Conference room MA-E0.46 | Chair(s): Michael Unterkalmsteiner (Blekinge Institute of Technology)
14:00 (40m) Talk | Candidate Solutions for Defining Explainability Requirements of AI Systems (CANCELLED) | Research Track | P: Nagadivya Balasubramaniam (Aalto University), A: Marjo Kauppinen (Aalto University), A: Hong-Linh Truong (Aalto University), A: Sari Kujala (Aalto University), D: Rebekka Wohlrab (Chalmers University of Technology)
14:40 (40m) Talk | What Impact do my Preferences Have? A Framework for Explanation-Based Elicitation of Quality Objectives for Robotic Mission Planning (Technical design) | Research Track | P: Rebekka Wohlrab (Chalmers University of Technology), A: Michael Vierhauser (University of Innsbruck), A: Erik Nilsson (Chalmers | University of Gothenburg), D: Nagadivya Balasubramaniam (Aalto University)
This talk is cancelled due to sickness.