Responsible & Explainable AI
An introductory tutorial @ ECAI 2023
This tutorial presents ongoing research in the field of Responsible Artificial Intelligence (RAI) by introducing core concepts and means of operationalising AI ethics. Through both lectures and problem-based learning exercises, attendees will gain experience in implementing and testing systems for policy compliance.
Here, we introduce the fundamental aspects of RAI by providing a holistic multidisciplinary view. The course is structured to introduce attendees to the impact that intelligent and autonomous systems have on societies and individuals, as well as to ongoing state-of-the-art discussions of the ethical, legal, and social aspects of AI. This introduction is followed by a critical discussion of where accountability and responsibility lie for the ethical, legal, and social impacts of these systems, considering decision points throughout the development and deployment pipeline. With this knowledge in mind, attendees are introduced to socio-technical approaches to the governance, monitoring, and control of intelligent systems as tools for incorporating constraints into intelligent system design. Finally, participants apply these skills to a simulated responsible-design problem.
The tutorial is divided into the following thematic modules:
- Introduction to Responsible AI (lecture – 45 min): Establishes the motivation behind the field of Responsible AI by using real-world use cases related to autonomous operation, algorithmic biases, generation of disinformation, and attempts to escape accountability.
- Responsible AI in practice (exercise – 45 min): Participants will be split into groups and engage in a value deliberation exercise for a particular problem statement. Given a set of possible solutions to the question, participants vote on which values each solution promotes or undermines, rank the solutions accordingly, and then discuss the valuation. Finally, participants reconsider their votes after deliberation and evaluate the new ranking, if it has changed.
- Responsible AI and ethical dilemmas (interactive lecture – 60 min): A continuation of what Responsible AI looks like in practice, and what exists in terms of standards, governance, and other tooling for the responsible development of AI. Participants will take part in deliberating over a number of ethical dilemmas drawn from real-world-inspired use cases in which a black-box AI system produced discriminatory decisions.
- Explainable AI (lecture – 20 min): Establishes the motivation behind Explainable AI (XAI), what it means for XAI to be human-centred, and how it contributes to the development of Responsible AI.
Materials & References
- Value Deliberation Toolbox – Delft Design for Values Institute
- Dignum, V. (2019). Responsible artificial intelligence: how to develop and use AI in a responsible way (Vol. 2156). Cham: Springer.
Professor of Responsible Artificial Intelligence at Umeå University, Sweden
Virginia is Professor at Umeå University and Director of WASP-HS, the Wallenberg Program on Humanities and Society for AI, Autonomous Systems and Software, the largest Swedish national research program on fundamental multidisciplinary research into the societal and human impact of AI. She is a member of the Royal Swedish Academy of Engineering Sciences (IVA) and a Fellow of the European Artificial Intelligence Association (EURAI). She is a member of the Global Partnership on AI (GPAI), the World Economic Forum’s Global Artificial Intelligence Council, the Executive Committee of the IEEE Initiative on Ethically Aligned Design, ALLAI (the Dutch AI Alliance), and the EU’s High-Level Expert Group on Artificial Intelligence; she is also the leader of UNICEF’s guidance for AI and children and a member of the UNESCO expert group on the implementation of the AI recommendations. She is the author of “Responsible Artificial Intelligence: developing and using AI in a responsible way”.
Research Fellow of Responsible Artificial Intelligence at Umeå University, Sweden
Andreas is the leader of the Wallenberg AI, Autonomous Systems and Software Program research cluster on Ethical, Legal and Societal Aspects of AI. His research investigates the techniques and tools needed for the responsible design, implementation, and deployment of intelligent systems. Outside academic research, Dr. Theodorou is the CEO of the research spin-off VeRAI AB. Andreas is also an active member of various AI governance initiatives, such as the IEEE SA’s P7001 series, ISO JTC1/SC42, and BSI ART/1.
PhD Student of Explainable Artificial Intelligence at Umeå University, Sweden
Leila is a WASP-HS-affiliated PhD student at the Department of Computing Science and a member of the Responsible Artificial Intelligence research group at Umeå University. Her research focuses on explainable AI and human-centric approaches to realising explainable AI in industry. She is an organising committee member of the upcoming workshop on the Ethics of Game Artificial Intelligence (EGAI) at ECAI 2023.