Patrick Hebron on how flexible AI systems will lead to inclusive AI

By Bill Reeve | Staff Writer | Product Inclusion

Patrick Hebron is a teacher, software developer, designer and author. He is a Senior Lead Designer for Artificial Intelligence and Machine Learning at Adobe as well as an Adjunct Graduate Professor at NYU’s Interactive Telecommunications Program. His research focuses on the intersection of machine learning, design tools, programming languages and operating systems. He spoke at a recent People + AI Research (PAIR) symposium at Google’s Zurich office.

Patrick is the creator of Foil, a next-generation design and programming environment that aims to extend the creative reach of its user through the assistive capacities of machine learning. Patrick has worked as a software developer and design consultant for numerous corporate and cultural institution clients including Google, Oracle, Guggenheim / BMW Labs and the Edward M. Kennedy Institute.

Patrick is probably best known as the author of the book Machine Learning for Designers and the monograph Rethinking Design Tools in the Age of Machine Learning. Here, he answers the PAIR Questionnaire, which we pose to academics and thought leaders around the world, to address the inclusive potential of AI systems (the opinions expressed are his).

Can design thinking open up new applications for AI? How might we ensure inclusion so that everyone can benefit from breakthroughs in AI?

The true promise of machine learning lies in tools that will enable people from a range of backgrounds and professional interests to see ways in which machine learning can aid their specific needs and help them develop solutions themselves. There is no one mechanism, no singular approach or interface that will be perfect for every user, or every use case.

The only universal solution is to provide flexibility. Provide users with some starting points. Foster experimentation. Help people to structure their thinking. Let them explore and encounter some complicating factors. When they do, provide a range of possible solutions. But above all, provide the infrastructure that allows them to produce their own solutions and to share them with others. Enable users to adapt the tool and to be adapted by it, to grow with it.

AI is built by people. How might we make it easier for engineers to build and understand machine learning systems?

Early-generation machine learning tools focused on aiding expert practitioners. These tools abstracted the conceptual demands of machine learning into conventional programmatic structures – despite an inherent incongruity between the deductive mentality of conventional code and the inductive mentality of machine learning systems. These early ML tools do not accentuate the skill of curriculum design: the process of curating a learner’s experience of a complex concept by distilling and bringing forward its most meaningful elements.

As machine learning algorithms and mechanisms for automated architecture and hyperparameter search become more robust, some of the technical skills currently necessary for machine learning work will become less important, while the skill of clearly articulating a learning problem will become vital.

Ideally, the next generation of machine learning tools will not only help novices become acquainted with these problem-solving mechanisms, but they will also help experts explore machine learning concepts more effectively because these tools will have been designed specifically around a machine learning mindset.

Historically, the process of finding the optimal sequence of mathematical operations to effectively learn the concepts implicit within a particular dataset has required expert knowledge of machine learning techniques and a great deal of human effort.

Automated architecture search involves the use of an auxiliary mechanism to automatically discover an effective sequence of operations for a given dataset and learning task. This auxiliary mechanism would generally take the form of a machine learning, mathematical optimization or simulated evolution system that is used to search through a large number of possible sequences and evaluate their effectiveness in relation to the given learning task.
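To make the idea concrete, here is a minimal sketch of the simplest form of such a search: randomly sampling candidate architectures and keeping the best performer. The `evaluate` function is a hypothetical stand-in for the expensive step of actually training a candidate network and measuring its validation performance; real systems (evolutionary, reinforcement-learning-based, and so on) replace both the random sampler and this scoring step with far more sophisticated machinery.

```python
import random

def evaluate(architecture):
    """Hypothetical stand-in for 'train this architecture and score it'.

    In a real system this is the expensive part: each candidate is
    trained (fully or partially) on the dataset and its validation
    accuracy is measured. Here we just invent a proxy score that
    rewards wide layers and penalizes depth.
    """
    depth_penalty = 0.02 * len(architecture)
    width_score = sum(min(w, 64) for w in architecture) / (64.0 * len(architecture))
    return width_score - depth_penalty

def random_architecture_search(n_trials=100, seed=0):
    """Sample n_trials random architectures and return the best one found."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        depth = rng.randint(1, 5)  # number of hidden layers
        arch = tuple(rng.choice([16, 32, 64, 128]) for _ in range(depth))
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

Even this naive random search captures the essential structure: a generator of candidate architectures, an evaluation of each candidate against the learning task, and a record of the best sequence of operations found so far.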

Automated hyperparameter search uses similar mechanisms to determine the ideal configuration settings for the chosen sequence of operations. These settings include things like the neural network's 'learning rate': the extent to which the network is changed by any one example it is trained upon.
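The effect of a hyperparameter like the learning rate can be seen even in a toy setting. The sketch below (an illustration I've constructed, not drawn from any particular framework) runs plain gradient descent on f(x) = x², where each step moves the parameter by the learning rate times the gradient. A small rate converges steadily toward the minimum at zero, while a rate that is too large overshoots on every step and diverges.

```python
def gradient_descent(learning_rate, steps=50, start=10.0):
    """Minimize f(x) = x^2 by gradient descent; the gradient is 2x.

    The learning rate controls how far each step moves the parameter.
    Each update multiplies x by (1 - 2 * learning_rate), so rates
    above 1.0 flip the sign and grow the error instead of shrinking it.
    """
    x = start
    for _ in range(steps):
        x -= learning_rate * 2 * x
    return x
```

For example, `gradient_descent(0.1)` ends very close to the optimum at zero, while `gradient_descent(1.5)` diverges to an enormous value. An automated hyperparameter search is, in essence, a procedure for discovering the well-behaved region of settings like this one without a human hand-tuning it.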

Google has done a great deal of pioneering work in this area. Some examples are the Google Research Blog post, "Using Evolutionary AutoML to Discover Neural Network Architectures" and the technical paper, "Neural Architecture Search with Reinforcement Learning."

There are many other examples from Google's work on this, but I think the above two are a great place to start.

How can AI aid and augment professionals in their jobs? How might AI systems amplify the expertise of doctors, technicians, designers, farmers, musicians, and more?

When humans solve problems, we can’t help ourselves: we look to conventional wisdom, the ways in which similar problems have been solved before. When AI systems navigate complex search spaces, they aren’t necessarily locked to, or even aware of, the preceding approaches to solving a given problem.

Particularly in the area of Reinforcement Learning, where the system learns to perform a difficult task through trial-and-error in relation to a human-defined metric of success, it is possible for machine learning systems to arrive at novel approaches to complex tasks that deviate radically from all previous human-developed approaches.
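A minimal illustration of this trial-and-error loop is the multi-armed bandit, a classic reinforcement learning toy problem. In the sketch below (a hypothetical example, unrelated to any system mentioned in this piece), the agent starts with no knowledge of which arm pays best; it discovers the best arm purely by acting, observing the reward signal – the human-defined metric of success – and updating its value estimates from experience.

```python
import random

def train_bandit(payouts, episodes=5000, epsilon=0.1, seed=0):
    """Trial-and-error learning on a multi-armed bandit.

    payouts[i] is the probability that arm i pays a reward of 1.
    The agent balances exploration (random arms) against exploitation
    (the arm currently believed best) and keeps a running-mean estimate
    of each arm's value.
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(payouts)
    counts = [0] * len(payouts)
    for _ in range(episodes):
        if rng.random() < epsilon:                      # explore: try a random arm
            arm = rng.randrange(len(payouts))
        else:                                           # exploit: best known arm
            arm = max(range(len(payouts)), key=lambda i: estimates[i])
        reward = 1.0 if rng.random() < payouts[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return estimates
```

Nothing in the loop encodes any prior human strategy; the agent's eventual preference for the best arm emerges entirely from the reward signal, which is the same basic dynamic – at vastly greater scale – that lets systems like AlphaGo depart from human convention.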

One recent example of this is DeepMind's AlphaGo system, which uses a reinforcement learning strategy to develop effective techniques for winning the game of Go. Several of the techniques developed by this system redefined the conventional wisdom of human players, wisdom that had developed over the course of the game's three-thousand-year history. AlphaGo's novel "fifth line move" in game 2 of its historic match with human champion Lee Sedol was not only instrumental in winning that game, but has also led human experts to reevaluate their approach to Go.

AI systems can help us see beyond conventional wisdom. But they can't apply those insights on their own. They need us to reconcile their insights with the interests, sensibilities and ethos of the human world. This is only possible if we are willing to entertain the validity of insights that appear to contradict our conventional wisdom. AI can only aid and augment our work if we can learn to collaborate with these ahistoric thinkers.
