Dr. Simone Stumpf on Designing "Human-in-the-loop" AI

By Bill Reeve, Staff Writer | Product Inclusion

Dr. Simone Stumpf is a computer scientist and senior lecturer at City University of London in the Centre for Human-Computer Interaction Design. Her current research focuses on user interactions with machine learning and algorithmic decision-making systems. She spoke on designing intelligibility of smart systems at Google's recent People + AI Research (PAIR) Symposium in Zurich, Switzerland.

Simone received her BSc in Computer Science and Cognitive Science, and her PhD in Computer Science from University College London. She previously worked as a Research Fellow and Manager at Oregon State University and University College London. Her current projects include designing user interfaces for smart heating systems, and smart home self-care systems for people with dementia or Parkinson’s disease.

She is also interested in personal information management and end-user development to empower all users to use intelligent machines effectively. Here, she answers the first installment of the new PAIR Questionnaire (modeled after the classic Proust Questionnaire), in which we ask design practitioners around the world for their views on how to design human-centered AI systems. All opinions expressed are those of Dr. Stumpf.


Can design thinking open up new applications for AI? How might we ensure inclusion so that everyone can benefit from breakthroughs in AI?

It's an interesting notion to think about who designs and builds AI applications. For other technology, especially in healthcare, there has been a push for co-design approaches in which potential users have an active role in shaping systems. Similarly, there has been a push for end-user development of software as well as hardware. I don't think we have thought enough about new design approaches and techniques for AI applications. Will we (computer scientists, AI researchers, and UX designers) be able to hand over the design and implementation of AI systems to end-users or domain experts eventually? Currently we seem to be stuck at personalization and customization, but I believe that all people should be empowered to build individual solutions that fulfill their own specific needs.


AI is built by people. How might we make it easier for engineers to build and understand machine learning systems?

Intelligibility of AI systems is on a spectrum. At one end of the spectrum we want to make these systems understandable to end-users or domain experts who do not have a background in machine learning systems; at the other, we want to also help engineers better understand the behaviour of what they are building. The answer for both types of users is appropriate explanations that can help them use and debug these systems.

Debugging machine learning systems has not been studied sufficiently, especially how to make it easy. Human-in-the-loop learning and interactive machine learning, in my opinion, both hold great promise for supporting all kinds of users in doing this effectively, provided those users understand sufficiently how the system works.
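The interactive machine learning idea Dr. Stumpf describes can be made concrete with a small sketch: a classifier whose internals a non-expert can inspect and correct directly. The `KeywordClassifier` below is purely illustrative (not from any library or from Dr. Stumpf's work); it shows the two ingredients she mentions, an explanation the user can understand and a feedback channel that lets the user debug the model.

```python
# Illustrative sketch of a human-in-the-loop learning cycle.
# All names here (KeywordClassifier, explain, correct) are hypothetical.

class KeywordClassifier:
    """Scores messages by summing per-keyword weights the user can inspect."""

    def __init__(self):
        self.weights = {}  # keyword -> weight; fully visible to the user

    def predict(self, message):
        score = sum(self.weights.get(w, 0.0) for w in message.lower().split())
        return "spam" if score > 0 else "ok"

    def explain(self, message):
        # Intelligibility: show which keywords drove the decision and how much.
        return {w: self.weights.get(w, 0.0) for w in message.lower().split()}

    def correct(self, message, true_label):
        # Human-in-the-loop: a single user correction nudges keyword weights,
        # so the user debugs the model by example rather than by retraining.
        delta = 1.0 if true_label == "spam" else -1.0
        for w in message.lower().split():
            self.weights[w] = self.weights.get(w, 0.0) + delta


clf = KeywordClassifier()
clf.correct("win free prize now", "spam")    # user flags a missed spam message
clf.correct("lunch meeting tomorrow", "ok")  # user confirms a normal message

print(clf.predict("free prize inside"))  # the correction generalizes: "spam"
print(clf.explain("free prize inside"))  # the user can see which words mattered
```

A real system would use a proper learning algorithm, but the interaction pattern is the point: predictions come with inspectable explanations, and each explanation doubles as a handle the end-user can push back on.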


How can AI aid and augment professionals in their jobs? How might AI systems amplify the expertise of doctors, technicians, designers, farmers, musicians, and more?

I think the number of solutions is virtually endless! AI is a tool that can be useful to so many people in so many domains. It's important to realize, though, that AI needs to support and augment human capability, not supplant it.

Possibly the greatest challenge is to integrate AI in fields which are considered "creative." There is some interesting research in creative writing, for example, but these systems still seem very far from being truly useful.

Photo credit: Duncan Phillips Photography

Contact Us

Stay in touch. We want to hear from you. Email us at Acceleratewithgoogle@google.com

For more on Google's Diversity and Inclusion programs, please visit google.com/diversity

Follow us on Google+: @acceleratewithgoogle

Follow us on Twitter: @accelwithgoogle