Building inclusion, fairness, and ethics into machine learning: Meet Googler Andrew Zaldivar

By Bill Reeve | Staff Writer | Product Inclusion

Andrew Zaldivar is a Developer Advocate for Google AI. His job is to help bring the benefits of AI to everyone. Andrew develops, evaluates, and promotes tools and techniques that can help communities build responsible AI systems, writing posts for the Google Developers blog and speaking at a variety of conferences.

Before joining Google AI, Andrew was a Senior Strategist in Google’s Trust & Safety group, where he worked on protecting the integrity of some of Google’s key products by using machine learning to scale, optimize, and automate abuse-fighting efforts. Prior to joining Google, Andrew earned his Ph.D. in Cognitive Neuroscience from the University of California, Irvine and was an Insight Data Science fellow.

Here, Andrew shares details on his role at Google, his personal and professional passions, and how he applies his academic background to his work creating and sharing tools that help teams build more inclusive products and user experiences.


How and why did you begin working at Google?

Before I joined Google, I had just completed my Ph.D. in Cognitive Neuroscience. I was, and still am, fascinated by how we acquire knowledge and understanding through our experiences and senses.

I was also interested in data science, so I applied to Google. I didn't hear back for months, and during that time I was in a postdoctoral training fellowship program at Insight Data Science. There, I applied many of the technical and analytical skills I had developed during grad school to data science. I was well prepared for my transition into the tech industry when Google finally reached out to me.

I accepted an analyst position within Google’s Trust & Safety organization, and that’s how I began my career here at Google.


Can you give an overview of your role and mission at Google?

Following my time in Trust & Safety, I transferred over to Google AI as a Developer Advocate. Specifically, I work on ethics in AI and machine intelligence research. Our focus is on developing, evaluating, and promoting tools and techniques that can help AI evolve ethically over time—reaching beneficial goals now, and in the distant future.

Ethical AI includes many things, such as fairness (see my Introduction to Fairness in ML on the Google Developers blog), transparency, legality, privacy, security, accountability, and reproducibility. The role that I play centers around responsibly democratizing AI. I spend a large part of my time identifying the best tools, techniques, and approaches that we at Google develop in the effort to build responsible AI systems.

I also reach out to communities beyond Google to guide them towards responsible AI. You can think of me as a servant to the public's interest in developing ethical AI systems. I gather the feedback, thoughts, and perspectives from communities, whether they are nonprofits, startups, industry professionals, academics or people in policy. I question how the tools and techniques that we've helped develop are working for them. I then take that community feedback to Google engineers, designers, and researchers, and we use that to guide the future of AI.

But why am I doing this? Google has a commitment to advocate for the responsible development of AI. That means we take our latest findings and turn them into guidance and frameworks for other developers to create AI systems responsibly—but that also means we have to understand the challenges larger communities face. I try to create that two-way street because we’re in this together.


How does your background influence your work?

Often when people talk about the roles that psychology and neuroscience play in AI, they discuss these subjects in the context of breakthroughs that have seismically shifted AI. Reinforcement learning, deep neural networks—these are some of the ideas that came out of experimental and theoretical neuroscience labs. There’s no question that understanding the neurobiological substrates involved in the processes that underlie cognition is a great source of inspiration for AI—a source I continue to draw from when looking for direction.

But there’s another angle here. You can sort of view AI as an extension of humanity. That’s because AI carries traces of data and input that come from human behavior and intelligence. Through our thoughts, experiences, and senses, our actions are shaping the development of AI—but those very thoughts, experiences, and senses are also encompassed in the data used in the AI we interact with day-to-day, which in turn shapes our actions. How we see things visually, how we hear things audibly, how we smell things through our olfactory system, how we conjure up experiences in our minds—everything that comes in and out of our sensory modalities is encapsulated in AI.

Acknowledging that our perceptions, actions, and the environment influence the development and use of AI has helped soften my eyes when gazing at the ethical landscape of AI.

My upbringing also heavily influences my work. Being Latino and Black, first generation, raised by a single father, a struggling bilingual, and from a near-poverty household all shaped who I am and why I do what I do. Growing up on welfare, food stamps, and rent subsidies, in a neighborhood where the idea of attending college and getting a decent education seemed unfathomable—I just sort of accepted that adverse and challenging reality. I had nothing else to go off of.

Through strong fortitude and good fortune, I made it to Google. I’m proud of where I came from. But as I reflect back, I realize that AI could put those very same low-income families from my neighborhood, where the odds are already against us, at further risk instead of empowering us all.

I may be one of those kids who got the opportunity to make something of myself despite tough surroundings, but that is the exception. There’s no guarantee that the next generation of kids coming from similar neighborhoods will have similar opportunities.

AI has the untapped potential to improve these very neighborhoods and communities all over the world—like the way access to computers and the Internet at public libraries helped me when I was growing up. But if we don’t stop to consider the ethical consequences, AI could instead worsen the lives of the people in them. That’s why I commit myself to doing all that I can to bring the benefits of AI to all—because I’ve lived through the struggle.


What inspires you most about your work?

I wouldn’t be a Developer Advocate if I weren’t inspired by all the people out there doing all that they can to harness the potential of AI for good. Any time I'm giving a presentation at a meet-up, leading a workshop at a conference, coding up a tutorial, working on a research paper, or recording online lectures, it’s an opportunity to help people build socially responsible AI systems. The hundreds of personal messages I’ve received from people around the world expressing their gratitude for the insight and wisdom they gained from my efforts are reminders that AI can be a force for good.


What advice can you share for others interested in applying a fairness lens to their work?

This is challenging, but if we can celebrate an understanding of other people’s perspectives, then I think we can make better strides in inclusivity.

My dad spent the years leading up to his retirement (due to heart problems and Alzheimer's) working for a nonprofit that serves thousands of children and adults with intellectual and other developmental disabilities. Spending his days with people who can't perform everyday functions for themselves—functions that some of us may at times take for granted—he developed a level of empathy that went beyond understanding, becoming a lot more compassionate towards people with developmental disabilities. After all, they and their loved ones are just trying to live the good life. My dad took great pride in providing these people with enjoyable experiences that they would otherwise not be able to create on their own.

Practicing an appreciation of who we are, why we think the way we think, and why we behave the way we behave could, I feel, help us address the risks and opportunities in the advancement of AI.

Inspired by Andrew? Apply for a job at Google to create, code, design, and build for everyone.

Photo credit: One Tree Studio, Inc.

Contact Us

Stay in touch. We want to hear from you. Email us at Acceleratewithgoogle@google.com

For more on Google's Diversity and Inclusion programs, please visit google.com/diversity

Follow us on Twitter: @accelwithgoogle