Insights on inclusive, human-centered AI: Meet PAIR co-founder Jess Holbrook

By Devki Trivedi, Product Inclusion Fellow, Google | Product Inclusion

Jess Holbrook is a UX Researcher and UX Manager at Google, based in Seattle. He is also one of the founders and leads of Google’s People + AI Research initiative (PAIR). Prior to joining Google, Jess was a UX Researcher at Amazon and Microsoft. He received his Ph.D. in psychology from the University of Oregon and his undergraduate degree in psychology from the University of Washington.

As a short-term research assistant and fellow on Google’s Product Inclusion team, I had the pleasure of speaking with Jess about the significance of inclusive design in human-centered artificial intelligence. Here’s our exchange.


How does your background shape your current role at Google?

My background is in psychology, so I cared about people long before I cared about tech. In grad school I focused on social psychology and wrote my dissertation on how people make very fast decisions about other people's behaviors. During that time in school, I interned at a local start-up, where I got into human-computer interaction, design, etc. I was really interested in the intersection of psychology, technology, and design. So UX research became a perfect fit.


Why do you think product inclusion is important?

There are two big reasons. One is that building inclusively makes a lot of really good business sense. With inclusive design, you’ll expand your market, your products will work better for more people, and you’ll probably sell more of your product. You’ll also get feedback from a more diverse group, which tends to improve your product in ways that benefit all your customers.

Also, I just think that products tend to suck if they’re not inclusive! If you don’t practice inclusive design proactively, you’re kind of relying on dumb luck or being really good at being reactive. Neither of those is a really great bet to make. It’s like throwing darts at a dartboard and being surprised when one hits, or just reacting when something bad happens. If I’m running a business, why would I take risks like those? Those just sound like really bad business strategies.

Second, there are ethical reasons, depending on your ethical framing of the world. For example, think about the concept of equality of opportunity. There are many moral and ethical systems, which vary on the details, but across society, many of the most prominent ones refer to some notion of equality of opportunity: that everyone should have a fair shot in life.

Inclusive design and equality of opportunity are ways to get us closer to a more prosperous society, because you’re essentially unlocking a bunch of human potential and goodness that would otherwise stay bottled up.

Also, on the flip side, technology, and Artificial Intelligence (AI) in particular, can do harm, even if unintentionally. It's been interesting for me to see all the inclusion, safety, and bias conversations center around AI recently, because I think AI is kind of a canary in the coal mine. This is a conversation about technology and social institutions in general, but it’s really interesting how AI has created this focus for everyone to start to talk about these issues.

Imagine I took all the computing, storage, and tech in the world and put it in a big pile. The amount of that which is AI is tiny, yet it’s the focus of this conversation. It’s fascinating that it has brought to the fore a lot of conversation for UX professionals, engineers, business leaders, etc. around inclusive design, as well as technologies that promote well-being rather than only business metrics like engagement. It’s great that these topics in AI have actually sharpened a lot of these larger conversations and brought them into public view.


How do you think both design and engineering have evolved recently to become more inclusive?

One, from the UX research perspective, we are trying to diversify our research approaches to move away from just talking to people who live on the coasts of America, which is typically how UX research has been done. You get a convenience sample, typically of people who live in the Bay Area, maybe Seattle, maybe New York, because that’s where many design firms and tech companies are headquartered, and you get their reactions to things. If the feedback is good, you say it’s good and move forward. Sometimes all you need to do is go an hour away from a big city and you can get a completely different response. It’s important to invest in international research efforts as well.

At Google, we get feedback both from our global-scale mechanisms, such as Google Consumer Surveys, as well as more local ways of recruiting. UX Researchers at Google are expanding and breaking new ground in research. For one, we’re engaging with at-risk populations, meaning populations facing chronic risk, and trying to understand things from their perspective by testing products through the lens of how they could potentially harm these people.

One of the strengths and challenges of tech is that it has a lot of optimistic dreamers, which is really good and drives things forward, but they can sometimes ignore the harsh daily realities that a lot of people face. We need to think about the potential good and bad in these situations.

There are also a lot of small things happening. For example, up until maybe three years ago, if you saw a phone mockup with an image of a hand holding a phone, the hands were all white, no matter what. It’s not that people intentionally wanted to use white hands; it was what they already had and, I’m guessing, seemed “normal” to them. Now there are a lot of publicly available resources for more diverse assets online, like Facebook’s Diverse Device Hands or Diverse UI, or guidelines like Google’s tips on designing for global accessibility.

Furthermore, we’ve gone through the awareness wave in a lot of ways at Google around bias and inclusion, and bias busting has become part of the conversation. I personally really like the changes from the Google Walkout for Real Change around increasing transparency in the workplace. I think we’ve hit the awareness wave, and now it’s all about the action wave. People always want the action wave sooner, but people don’t change their minds in a snap, and they don’t act instantly after they change their minds, so what’s happening now is kind of the natural pace of things (though we’re definitely always looking for ways to speed things up).


What do you think still needs to be done?

What still needs to be done is to build the tools to help this change happen, which is where I put a lot of my time. We can tell people they should be doing certain things, but if we don’t help them do it, that’s not really fair to them. At PAIR, we’ve developed and launched a few tools to help make ML and AI more accessible to everyone, like TensorFlow.js, Facets, and the What-If Tool, but of course there is a lot more to do. One of the bars I try to set for our teams is to imagine you’re a product developer who wasn’t building specifically for inclusion at all. In the end, you’d still pick our tool because it’s both the easiest and the fastest development method for you.

There’s empathy and the human side of things as well, where we are still nowhere near where we need to be in terms of proactive inclusive design within product teams. Typically, users are “known” through summarized analytics in a graph, or maybe by going to a user study once. But there’s a big difference between knowing somebody and caring about them. You want to think about the user and hope they’re doing well, and not just that your product is helping them, but that they’re genuinely doing well. That’s my view of what real inclusion looks like. It’s not just following rules, it’s going beyond the rules. It’s the connection, the intimacy, the caring about other people. And that’s a long way off, but that’s the big goal.


What is Google’s approach to inclusive, human-centered AI?

Google is at its best when it operates transparently, when it admits its mistakes and shows people what it learned from those mistakes and the changes made as a result. I think we are also people who have put code where our mouths are: we’ve released TensorFlow and a lot of research papers on topics like bias, we’ve done a lot of outreach to get more people into AI, and we’re publishing our principles and responsible AI practices. Additionally, as I’ve mentioned, PAIR has been putting out tools and articles focusing on fairness and inclusion in AI. I think we’re at a phase where efforts are dispersed and people are trying a lot of different things. The artifacts we produce are reflections of the people who build them and what they understand at that time. However, it feels like we are on the cusp of uniting a lot of the thinking into a more coherent system.


What are your future hopes when it comes to building inclusive products?

I love commodification, the idea of making things super cheap, and think it’s an undervalued concept in the space. When things get cheaper, more people can use them and you get more experimentation, but you also give people a means to articulate their needs. The concept of jugaad is really fascinating to me and has inspired my hopes around co-creation and the idea of people being able to solve their own problems. I think a healthy model for solving AI challenges is one where more people have access to the technology and can solve their own local problems for themselves and their communities in a much more rapid and personalized way.


What advice do you have for both designers and engineers who are trying to build inclusive products?

This might be one of my more open-to-debate statements, but I think AI is revealing that we may be at a place to move beyond human-centered design and focus on what comes after. This idea has been called a few different things, but one of the terms I like for it is “relational design.” To achieve this, we need a couple of shifts. A big one is that we need a lot more co-creation and participatory design with people. This means including all kinds of stakeholders from the beginning of a process all the way through, to make sure the designs are useful, usable, and desirable to a diverse set of people and do not create unnecessary harm to groups simply because they were not considered.

Additionally, it’s important not to focus all our thinking on one person doing one thing contained within their own world, because the systems we’re building are expansive in relation to other systems. The idea is that many millions, if not billions, of actors feed into a system and affect each other, creating some kind of emergent output in some of these systems. We have to actually think of all of the actors in multi-agent systems: the ecosystem in which they engage, what is contained within that system, and what is externalized. A lot of environmental issues are understood in this context, and the design firm IDEO and the Ellen MacArthur Foundation have called this concept circular design. It means thinking about what the system ingests, from whom and how, what it does with it, and what it exports.

You can create a product that everyone seems happy with, that meets their needs and produces a high mean happiness score, while it’s actually causing some subgroups to have a horrible experience, much like pollution in the physical world. And that’s the nature of AI: it’s an ongoing experience with the technology rather than a one-time transactional thing. So I encourage people to start thinking about AI from a relational design angle.

Inspired by Jess? Apply for a job at Google to create, code, design and build for everyone.

