Chris Noessel Discusses Inclusive Assistant and Agentive AI
Christopher Noessel is an author, consultant, and public speaker. His most recent book, Designing Agentive Technology: AI That Works for People, was published by Rosenfeld Media in 2017. He previously co-authored Make It So: Interaction Design Lessons from Science Fiction (Rosenfeld Media, 2012) and About Face: The Essentials of Interaction Design, 4th Edition (Wiley, 2014). Chris also writes prolifically for online publications and is probably best known for his posts on medium.com and his blog scifiinterfaces.com, where he most recently published a series of articles about AI in sci-fi.
Chris has been an interaction designer for more than 20 years. He is the Design Practice Manager for the Global Travel and Transportation team at IBM. He teaches, speaks about, and evangelizes design internationally. Chris researches and speaks on topics ranging from interactive narrative to ethnographic user research, from interaction design to generative randomness, to designing for the future.
How might we ensure inclusion so that everyone can benefit from breakthroughs in AI? Can design thinking open up new applications for AI?
It’s not the only way, but, yes, of course. And design thinking has a leg up on alternative approaches because it’s human-focused by definition.
Better inclusion is the right idea, but it’s not easy. It introduces burn-rate problems, can slow progress to market, adds communication overhead for large groups, and risks devolving into hair-splitting over intersectionalities.
In the near term it may be just broader inclusivity in the design, engineering, and testing rooms.
In the middle term it may be fostering rich, layered systems of discussion and feedback at many points along the product life cycle: strategy, ideation, data training, development, and alpha and general-population releases.
In the long term it’s about generational investment in education and continued organizational commitment to inclusion (in the Vernā Myers sense).
AI is built by people. How might we make it easier for engineers to build and understand machine learning systems?
I’m speaking out of turn, since I don’t know engineering as a practice that well, and the coding details of ML even less. My gut response is based more on my graduate studies of learning systems: make it visual, co-design smart tools, ensure a tight feedback loop between hypothesis and result, and nurture a robust, learning community of practice.
How can AI aid and augment professionals in their jobs? How might AI systems amplify the expertise of doctors, technicians, designers, farmers, musicians, and more?
My work has primarily been on agentive AI, but I happen to have been thinking through assistant AI in preparation for a new book. I’ve found six broad categories of augmentation so far:
1. Detect
2. Answer/Understand
3. Predict/Advise
4. Rehearse/Perform
5. Track/Remember
6. Review/Reflect
Each of these amplifies expertise. Note, though, that I wouldn’t presume that the concerns of AI design should end at amplifying expertise. That seems like a state-based tactic rather than an outcome-based strategy. Assistant AI should leave humans better than it found them. (But we’re back to the next book again.)
Stay in touch. We want to hear from you. Email us at Acceleratewithgoogle@google.com
Please note that this site and email address are not affiliated with any former Google program named Accelerator.