Building fairness, trust and safety into machine learning: Meet Googler Jamila Smith-Loud

By Bill Reeve | Staff Writer | Product Inclusion

Jamila Smith-Loud is a User Researcher on Google’s Trust and Safety team. She uses research to advocate for diverse needs and perspectives. Her work helps shape how Google puts our AI Principles for fairness and inclusion into action.

Prior to joining Google, Jamila was the Manager of Strategic Initiatives at a Los Angeles-based civil rights nonprofit, Advancement Project, where she supported the development of racial equity initiatives through research, analysis, and advocacy. Jamila also participated in the Political and Legal Anthropology Review Fellowship Program, where her research focused on the intersections of law, power, identity, and cultural change. Jamila was born and raised in Los Angeles, and is a graduate of UC Berkeley and Howard University School of Law.


How and why did you begin working at Google?

I began working at Google a year ago. At that time I was looking for a change. I wanted to do something different and challenge myself, but I couldn't imagine how I could fit into Google, which I only knew as an engineering company.

Then, at a conference, I met a woman who worked for Google Public Policy. We had a conversation that sparked my interest, about how Google engages on social issues and about her particular work on criminal justice. Around the same time, I also remember reading about Google.org, specifically its vision for philanthropy and its commitment to education and racial justice organizations. These two experiences gave me a better idea of the company’s values and culture, and a sense that this could be a place where I could fit in.

I did apply, and I joined the Trust and Safety team.


What is your role and mission at Google?

I am a researcher working on ethical machine learning. We work specifically on machine learning-related fairness issues across Google’s products, processes, and models, and we’re part of the Trust and Safety organization.

Our role is to support product teams who are thinking through the unintended consequences of design choices. The mission of our group is to think critically about issues of fairness, bias, and representation and work with engineering teams to come up with solutions.

I provide primary research that uses qualitative and exploratory methods to teams that are thinking about fairness issues. The questions we deal with range from the product-specific to broader ones, like the unintended consequences or impacts that different communities may face as a result of stereotypes, or structural inequities that certain technologies may amplify.

Our team thinks through concepts of justice, race, class, and gender, and how they show up in Google products. Our mission is to think through those big concepts and then work with engineering teams to develop solutions that increase fairness and inclusion.


How does your background influence your work?

My background as an African-American woman influences my work every day. It influences why this type of work is, and always has been, a priority for me, and on a more tactical level it allows me to bring a critical lens to my work, one that is often grounded in my own lived experience.

What is so unique about the machine learning fairness work is that it really is a cross-disciplinary exploration. The issues we tackle not only call for a technical and data science approach, but also raise questions that are social, legal, and ethical. So my background in law, advocacy, and social policy really prepared me to think about these issues holistically.

While I was in law school, I worked for research and think-tank organizations. After law school, I worked at NGOs, and most recently I worked at a civil rights nonprofit doing research and advocacy. My work there focused on advocacy for racial equity in sectors like education, health care, and housing. This background in advocacy prepared me to not only advocate for users, but also to advocate for communities that have been historically and systematically marginalized, and to further develop research practices that both examine and honor those experiences as we try to understand the potential impacts of our products.

My experience working for social justice organizations and experience doing applied research prepared me not just to conduct data-driven research, but to think differently about how to validate my research and to utilize different sources of knowledge as evidence.


Can you share some of your insights regarding fairness in machine learning?

One of the things that we are trying to operationalize, and talk about with product teams, is context. Whether it is cultural, historical, economic, or social, context is one of the most important ways to engage questions of machine learning fairness.

Sometimes, we think about solutions to machine learning fairness in mathematical terms. One of the things that we have been missing is the ability to access and analyze social implications, and how these have developed over time in relation to technological innovation.

Our team advocates for a systems-based process to examine and influence decisions regarding machine learning fairness. We are building a process for thinking about questions of fairness and equity that is systematic and data informed.
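As a rough illustration of the “mathematical terms” Jamila mentions, here is a minimal sketch of one common formal fairness check, demographic parity, which compares positive-prediction rates across groups. The metric choice, group labels, and data below are illustrative assumptions, not part of Google’s process or Jamila’s description; her point is precisely that checks like this capture only part of the picture without social and historical context.

```python
# A minimal sketch (illustrative only) of a purely "mathematical" fairness
# check: comparing positive-prediction rates across groups
# (demographic parity difference). Group labels and data are made up.

def demographic_parity_difference(predictions, groups, positive_label=1):
    """Return the gap in positive-prediction rates between groups."""
    rates = {}
    for group in set(groups):
        group_preds = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(1 for p in group_preds if p == positive_label) / len(group_preds)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Example: a model gives positive outcomes to 80% of group "a"
# but only 40% of group "b".
preds  = [1, 1, 1, 1, 0,  1, 0, 1, 0, 0]
groups = ["a"] * 5 + ["b"] * 5
print(demographic_parity_difference(preds, groups))  # ≈ 0.4
```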


What inspires you most about this work?

I am inspired and challenged by the fact that there are so many unanswered questions in this new and emerging field, while acknowledging the incredible amount of knowledge and insight that already exists in other areas and sectors about how to understand these issues.

I am also inspired by our Trust and Safety team at Google. Everyone is committed to working through these hard issues together. I am inspired by how smart and dedicated the folks I work with are.

I am also inspired by the opportunity to collaborate with nonprofits and civil rights advocates to develop collaborative, creative, and inclusive processes that expand our understanding.


What advice can you share for others interested in applying a fairness lens to their work?

The advice I would give anyone thinking about applying a fairness lens is to first acknowledge that it is really hard to recognize the implicit assumptions in design and practice that may lead to biased or unfair outcomes.

It is important to spend time talking to people outside your silo. It is easy to have our own viewpoints echoed back to us by people in our immediate groups. It is important to spend time listening to people from diverse backgrounds with diverse points of view.

It is important to realize that difficult and complex social issues may not have technical solutions. It may be necessary to think differently about who the experts are, and how knowledge related to these issues is generated. It may be that the communities most impacted will be best at analyzing these issues and the greatest source of potential solutions.

Inspired by Jamila? Apply for a job at Google to create, code, design and build for everyone.
