Like what? Using TCAV to find out what matters

By David Weinberger, Writer in Residence, PAIR | AI for Everyone

To make information on Artificial Intelligence more useful and accessible to everyone, from students to non-technical audiences, we’ve teamed up with Google’s People + AI Research (PAIR) initiative, whose mission is to make partnerships between people and AI more productive, engaging, and fair.

AI Outside In is a blog by PAIR Writer in Residence, independent tech writer David Weinberger, who offers his outsider perspective on key developments in AI research. He explains central concepts in the field of machine learning and he looks at the technology within a broader context of social issues and ideas. His opinions are his own and do not necessarily reflect those of Google. 


How would you say these two images are alike?



Left, Français: Monument aux morts de Saint-Jean-de-Bassel
(Creative Commons: CC0 1.0 Universal Public Domain Dedication)

Right, Français: Raymond Vieussens
(Wikimedia Commons, US Public Domain: https://commons.wikimedia.org/wiki/Template:PD-US)


They both seem to be old, or at least not new. They both might be of European historical interest. They both depict a human being. Both humans have a good head of hair. If you read the captions, you have learned that both are in some sense French. The images are also longer vertically than horizontally. If you read the small print, you can see that both images come from Wikimedia Commons, and both are in the public domain, although that is expressed differently for each. What you cannot tell from the images or their labels is that they’re alike in that I chose them by clicking on Wikimedia’s “Random File” button. Also: both are visible records of the past, not the future, neither is a hallucination of mine, and both were created by Earthlings. Also, neither can carry a tune.


Nothing is natural in machine learning

We usually don’t have to be told the relevant ways in which two things are alike because their context makes that clear.  For example, if your friend asks for a recommendation for a movie like “The Taming of the Shrew” and you recommend “The Silence of the Lambs,” she’s likely going to come back disappointed and confused. Explaining that they are both five-word titles that end with an animal probably means your friend is not going to ask you for any more movie recommendations.

The problem is that you looked at one particular aspect of movies —  a pattern in their titles — and your friend expected you to look at other aspects such as the author (Shakespeare), the mode (movie version of a classic play), the type of story (romcom, albeit one that is very pre-feminist),  and so forth. Your friend knows what’s interesting about movies, whereas you apparently do not. You should probably work on that.

What’s naturally clear to humans is not naturally clear to machine learning systems because nothing is natural to machine learning. A process called Testing with Concept Activation Vectors (TCAV) — pioneered by Been Kim and other computer scientists in the PAIR group — can help users of machine learning systems explore the likenesses that matter to them.


The ML only knows about the numbers

Machine learning needs this help because left on its own, it will note many, many different ways in which its data can be alike, most of which will be irrelevant to us. For example, if you feed thousands of photos of animals into an ML system, it will view each as a grid of colored pixels. In fact, the ML doesn’t even know about colors; they’re expressed as mere numbers, typically as the trios of red, green, and blue familiar to users of the RGB color notation system. The ML will analyze all these grids of numbers and might notice that for some images, one particular pixel has the same (or nearly the same) number. It may notice that there is a string of darker pixels in the lower left of some images, or that some images have similar distributions of very dark pixels and very light ones —  although, again, the ML only knows about the numbers, not about lightness or darkness. Most of these distinctions are of no interest or use to humans, but some might indicate that a set of images shows what we humans interpret as straight lines at right angles, or the color patterns typical of beach scenes, or diffuse colors that we read as fog.
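To make that concrete, here is a minimal sketch in Python (using NumPy and a made-up four-by-four “photo,” not any particular ML library) of what an image looks like from the system’s side: nothing but a grid of numbers.

```python
# A toy "image": four rows, four columns, three numbers (red, green, blue)
# per pixel. A real photo is simply a much larger array of the same kind.
import numpy as np

image = np.zeros((4, 4, 3), dtype=np.uint8)
image[:, :2] = [30, 40, 35]     # a darker region on the left half
image[:, 2:] = [210, 220, 235]  # a lighter region on the right half

print(image.shape)   # (4, 4, 3): height, width, and the RGB channels
print(image[0, 0])   # [30 40 35] -- one pixel is just three numbers
print(image.mean())  # overall brightness is just another statistic
```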

Computer scientists refer to the different ways that images (or anything else) can be alike as dimensions. One dimension might be, say, pixel #4365’s color. Every image could be arranged in that dimension according to the number associated with that pixel. Another dimension might represent the overall contrast level of each image. Typically, image recognition systems are set up to generate a particular number of dimensions, ranging from small (say, 128) to large (say, 10,000), depending on the task and data at hand. No matter the number, the question is: How can the system identify the dimensions that are most useful to humans, and then translate them from machine learning’s statistical patterns into terms and concepts that humans understand?
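As a rough illustration (the numbers below are random stand-ins, since we don’t have a real trained model at hand), each image gets summarized as a vector of, say, 128 numbers, and any one of those numbers is an axis along which the images can be ordered.

```python
# Hypothetical "dimensions": each image is summarized as 128 numbers.
# Random vectors stand in for what a real image-recognition model computes.
import numpy as np

rng = np.random.default_rng(0)
num_images, num_dimensions = 5, 128
embeddings = rng.normal(size=(num_images, num_dimensions))

# Arrange the five images along dimension 42. To us that dimension might
# turn out to mean "overall contrast"; the model itself never names it.
order = np.argsort(embeddings[:, 42])
print("images ordered along dimension 42:", order)
```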


Patterns among the edges

The truth is that the ML system doesn’t have to care about the dimensions that are useful to us in order to do its job of applying the right labels to the images users ask it to identify. An image recognition system learns how to classify images by analyzing patterns in the labeled images we train it on, whether or not those patterns are meaningful to humans, so long as the patterns enable it to more accurately match images with the labels we supplied.

For example, human perception cares a lot about the edges between things as important clues to what we count as objects — the trees distinct from the sky behind them. But a machine learning system doesn’t start off knowing how to identify the edges of objects in images. ML in effect teaches itself this by noticing that places where contiguous sets of pixels have significantly different values — what we would interpret as an edge — help it sort images more successfully. It will continue trying small variations on its previous pass, likely discovering that patterns among those “edges” help it sort images ever more accurately.
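Here is a small sketch of what an “edge” looks like in the numbers, using a tiny made-up grayscale image: wherever neighboring pixel values jump sharply, the difference is large, and that jump is the pattern the system can latch onto.

```python
# A tiny made-up grayscale image: dark pixels on the left, bright on the right.
import numpy as np

image = np.array([
    [10, 12, 11, 200, 205],
    [11, 10, 12, 202, 204],
    [12, 11, 10, 201, 203],
], dtype=float)

# Difference between each pixel and its right-hand neighbor.
horizontal_diff = np.abs(np.diff(image, axis=1))
print(horizontal_diff)
# The one column of large values is what we would call a vertical edge.
```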

Sometimes, those sorts of dimensions may match the way humans differentiate things. For example, if we’re training the machine learning system to recognize the animals in images, it may notice, just as we do, that cats have pointy ears, while pandas have round ones. But it may find other similarities that we don’t notice or much care about. For example, if the panda photos are all taken outside in daylight, while the cat images are a mix of  indoors and outdoors, the machine learning might take the brightness of the image as important information for differentiating the two sorts of animals. But we humans consider animals to be the same whether they’re indoors or outdoors; that distinction, useful to ML (and sometimes misleading), doesn’t track our human ideas.

But identifying the dimensions that are useful to us humans can be crucial. For one thing, it lets users explore the data according to the sorts of likeness they care about. For another, it can let us use machine learning to explore unexpected relationships that are significant to us. Alas, the machine learning by itself doesn’t know which of those various likenesses matter to humans; it only knows which ones helped it more accurately label the training data.


Likenesses we (humans) care about

That’s where TCAV comes in. It lets users tell the system which likenesses matter to them. So, let’s say it’d be useful to you to be able to find all the striped animals — zebras, black-striped mussels, black-and-white warblers, and striped animals you’ve never even heard of. With TCAV you would provide the already-trained ML with images of striped animals — or perhaps pick out some of the striped animals in the ML’s training set — and tell it that these are what we humans call “striped.” (You could do the same thing for furry animals, or even ones that we humans think are cute.) TCAV then analyzes the patterns in the samples you’ve submitted, and sees if they match any of the patterns the ML noted while it was training itself. If so, it now knows that that’s a particular type of likeness — striped, furry, or cute — that we care about.

It’s not that TCAV lets you teach an ML system a new concept, such as “striped.” Rather, you’ve taught it to associate a label — the word “striped” — with a likeness (a dimension) it had already uncovered and found useful for differentiating among the images. If for some reason the ML never paid attention during its training to patterns of alternating bands of color, then TCAV would tell you that it’s very sorry, but this pattern wasn’t useful when it was learning how to classify animal images. As Been Kim, the TCAV project leader, says, “it’s important for a machine learning system to tell you when it can’t do something.”
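For readers who want to peek under the hood, here is a highly simplified sketch of the idea rather than the actual TCAV code. The activations and gradients below are random placeholders for what a trained image classifier would produce, but the steps are the heart of the method: gather the model’s internal activations for your concept examples, find the direction that separates them from random examples (the Concept Activation Vector), and check how often nudging activations in that direction pushes a class’s prediction up.

```python
# A highly simplified sketch of TCAV's core idea (not the official code).
# Random arrays stand in for a trained classifier's activations and gradients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
layer_width = 128  # size of one internal layer of the trained model

# Step 1: activations produced by "striped" example images vs. random images.
striped_acts = rng.normal(loc=0.5, size=(50, layer_width))
random_acts = rng.normal(loc=0.0, size=(50, layer_width))

# Step 2: a linear classifier that separates the two sets of activations.
# The direction orthogonal to its decision boundary is the Concept
# Activation Vector (CAV): roughly, "which way is more striped" in this layer.
X = np.vstack([striped_acts, random_acts])
y = np.array([1] * 50 + [0] * 50)
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# If this classifier can't beat chance, the model never picked up a
# striped-like pattern, and TCAV should say it can't help with this concept.
print("concept separability:", clf.score(X, y))

# Step 3: for images of one class (say, zebras), ask whether moving the
# activations in the CAV direction increases the model's score for that class.
# Real TCAV uses the gradient of the class score; random stand-ins here.
zebra_gradients = rng.normal(loc=0.2, size=(100, layer_width))
sensitivity = zebra_gradients @ cav

# The TCAV score: the fraction of zebra images whose "zebra" prediction
# is pushed upward by the "striped" direction.
tcav_score = float((sensitivity > 0).mean())
print("TCAV score for 'striped' on 'zebra':", tcav_score)
```

In the real system, that score is also compared against scores computed from many random “concepts” to make sure the result isn’t a fluke.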

Machine learning is awesome at finding the ways in which things are alike. TCAV enables humans to tell machine learning what it cannot do on its own: figure out which likenesses matter.


For more on this topic, see the original research paper about TCAV.

Illustration by Tommy Shimko

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
