Bringing Project Respect to women-in-tech events in 2019
At a March 2019 Google Digital Coaches event held in Austin, Texas during SXSW, minority business owners and entrepreneurs gathered at the Google office for workshops on inclusive product and business development strategies. We were excited to demo Project Respect, a website that gives people who are currently underrepresented in the tech industry an opportunity to contribute to an open, global research effort to help train machine learning (ML) models with more inclusive data.
In line with Google’s AI Principles and recommended best practices for fairness in machine learning, Google first presented Project Respect at Pride events around the world in 2018, and at the March Google Digital Coaches event in Austin, we shared that our focus this year is to bring Project Respect to women-in-tech events nationally and globally. This was especially timely during Women’s History Month and in the context of SXSW, an event where the tech community is focused on the future of user experiences.
Help researchers build more accurate ML models
When women of all backgrounds, members of the LGBTQIA+ community, and others from groups historically underrepresented online choose to contribute their points of view to Project Respect, they help researchers build machine learning models that more accurately recognize what is and isn't toxic language in online comments.
One example is Perspective API, which the New York Times uses to identify harmful or toxic comments on its website. More inclusive training data and labels can help teach machine learning systems to better identify this type of online abuse.
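As a rough illustration of how such a model is consumed, here is a minimal sketch of calling Perspective API's public `comments:analyze` endpoint for a toxicity score. The request and response shapes follow the API's published format; the `API_KEY` placeholder and the helper function names are our own, and you would need a real key for the live call to work.

```python
# Sketch: request a TOXICITY score for a comment from Perspective API.
# API_KEY is a placeholder; replace it with your own Perspective API key.
import json
import urllib.request

ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key=API_KEY"
)

def build_request(comment_text):
    """Build the JSON body for a TOXICITY analysis request."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response_json):
    """Extract the overall toxicity probability (0.0-1.0) from a response."""
    return response_json["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def analyze(comment_text):
    """POST the comment to Perspective and return its toxicity score."""
    body = json.dumps(build_request(comment_text)).encode("utf-8")
    req = urllib.request.Request(
        ANALYZE_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return toxicity_score(json.load(resp))
```

A moderation pipeline could flag comments whose score exceeds a chosen threshold for human review; the richer and more representative the training data, the more that score reflects how diverse communities actually talk about themselves.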
For Kirstin Sillitoe, the Creative Lab team lead for Project Respect, the initiative has both deep professional and personal meaning.
“I grew up in the UK, in an era when the word ‘queer’ was being reclaimed. And because of the people who’ve gone before me and fought that fight, I can now use that word with pride!” Kirstin says.
“By contributing my own statement to Project Respect, I can make sure my data helps machine learning models understand how I talk about myself: the words I use with dignity, the words that represent me. The more underrepresented groups contribute to these datasets, the more accurately we can tackle bias and build machine learning models that truly represent everyone.”
All of the datasets created by contributors who choose to participate in Project Respect will be made available under a Creative Commons license, so that any researcher seeking to train machine learning models can access them. In line with Google's AI Principles, we seek to reduce the ways historical biases are experienced online, not only during Pride or Women's History Month, but every single day.