Simone Stumpf
How to Empower Stakeholders to Assess AI Systems
Abstract:
Assessing AI to make sure it is ethical and responsible has, until now, been left solely in the hands of
AI experts. There is no agreement on what counts as ‘good enough’ to be acceptable to stakeholders,
including those who ultimately use the AI or are affected by its outputs. PHAWM, one of three RAI UK-funded
keystone projects, is driving change in AI testing and evaluation through the novel concept of
participatory AI auditing, where a diverse set of stakeholders without a technical background in AI, such
as domain experts, regulators, decision subjects and end-users, are empowered to undertake audits of
predictive and generative AI, either individually or collectively. This talk will describe the landscape
of AI assessment, particularly from a stakeholder perspective.
Dr. Simone Stumpf is a Professor of Responsible and Interactive AI in the School of Computing Science at the University of Glasgow. She has a long-standing research focus on user interactions with AI systems. Her research includes self-management systems for people living with long-term conditions, developing teachable object recognisers for people who are blind or have low vision, and investigating AI fairness. Her work has contributed to shaping Explainable AI (XAI) through the Explanatory Debugging approach to interactive machine learning, providing design principles for better human-computer interaction and investigating the effects of greater transparency. The prime aim of her work is to empower all users to use AI systems effectively.