People think in problems: How Amsterdam is developing civic AI to address citizens' service requests

By Caroline Sinders, Writer in Residence, PAIR | Inclusive innovation

AI generally helps people recognize patterns at scale and can offer predictions based on those patterns. Good use cases for civic AI arise when citizens can articulate a need around recurring issues, like a persistent sinkhole or pothole, deteriorating housing conditions, or the need for safe public parks. Recurring mold, and lighting fixtures shorting out? Perhaps there’s a deeper investigation to be done into why the mold keeps coming back. Is it related to faulty lighting or a recurring water leak? AI could help analyze 311 calls by grouping together and analyzing similar citizen complaints. The city of Amsterdam, in fact, is using AI to help sort through and triage its version of 311 calls.
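The article doesn’t describe any particular grouping technique, but one simple way to cluster similar complaints is unsupervised clustering over text features. Here is a minimal sketch, assuming scikit-learn; the example complaints and the number of clusters are illustrative, not Amsterdam’s data.

```python
# Minimal sketch: grouping similar service complaints with unsupervised clustering.
# The complaints and the cluster count are illustrative assumptions.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

complaints = [
    "mold keeps coming back in my bathroom ceiling",
    "water leaking through the wall after every rain",
    "the park playground has no working lights at night",
    "mould on the kitchen wall again this winter",
    "dark footpath in the park, the lights are broken",
]

# Turn free-text complaints into TF-IDF vectors, then cluster them.
vectors = TfidfVectorizer().fit_transform(complaints)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# Complaints in the same cluster can be reviewed together as one recurring issue.
for complaint, cluster in zip(complaints, kmeans.labels_):
    print(cluster, complaint)
```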

The Chief Technology Office Innovation team within Amsterdam’s city government has a goal to “experiment with new technologies and link it with classic challenges of government,” Tamas Erkelens, program manager for Data Innovation at the City of Amsterdam, explained to me recently. Together with the city’s public services and information department, the innovation team designed a program called Signal, which uses algorithms to make it easier for residents to file service requests about a variety of issues, such as litter, sinkholes, or any other urban problem.


Signaling user needs

Signal is integrated with Amsterdam’s version of 311, the free service-request phone line in the United States. In Amsterdam’s version, residents can file service requests about issues in public spaces through calls, social media, or a web form. Signal was designed to sort residents’ requests and triage complaints; more traditional systems might simply categorize problems into predetermined buckets. But what happens if residents don’t understand the categorization of their problems? Does a sinkhole fall under “safety,” or maybe “traffic”? Can they miscategorize their problems? Or not categorize them at all, making it harder to understand a problem’s urgency and, potentially, putting citizens in dangerous situations? How can designers of AI systems balance the ways that people, and machines, might understand categories differently?

“People think in problems and not in governmental categories,” Erkelens says. Twenty-five percent of the reports in Amsterdam were put in the category "other." So Erkelens’ team started prompting users to describe the problem or take a photo. To improve how residents’ complaints are categorized, the team created a machine-learning model capable of detecting more than 60 categories from residents’ free-text inputs in web-based forms. (Here is a demo of that model, and a related Python notebook.)
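The article doesn’t detail how the model works, but a common approach to this kind of free-text categorization is a supervised multi-class text classifier. Here is a minimal sketch, assuming scikit-learn; the category names and example reports are illustrative placeholders, not the Signal team’s actual data or categories.

```python
# Minimal sketch: routing free-text service reports to categories with a text classifier.
# The reports and category labels below are illustrative, not Amsterdam's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: free-text reports paired with service categories.
reports = [
    "there is a mattress dumped on the corner of my street",
    "the streetlight outside number 12 has been flickering for a week",
    "a deep hole has opened up in the bike lane",
    "overflowing rubbish bins near the park entrance",
]
categories = ["litter", "street lighting", "road maintenance", "litter"]

# TF-IDF features feeding a multi-class linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(reports, categories)

# Route a new report to the most likely category.
print(model.predict(["someone left bags of trash next to the canal"])[0])
```

In a production system like Signal, the same idea would be trained on many thousands of labeled reports across 60+ categories, so that a resident never has to pick a governmental category themselves.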

Erkelens’ team has put processes into place to innovate responsibly. “This algorithm is now audited by a third party, in order to make sure the technology contributes to digital rights of the residents of Amsterdam,” Erkelens says. For example, the audit considers outcomes for people in less privileged areas, or for residents who don’t speak Dutch. If the ML model isn’t trained with data representing a variety of experiences or languages, Signal could perform better for some groups than for others. In addition, the city of Amsterdam has launched a coalition with the cities of Barcelona and New York to protect human rights online, at both the global and local levels. The Signal team abides by these commitments.

Signal has been running since August 2018, and the team expects it to help route up to 300,000 resident requests per year, improving the quality and speed of issue handling in the service-request process. A newer version of Signal is currently in development, and the team is working on improving the performance of the image-recognition algorithm that identifies scenes of, say, litter or broken traffic lights that Amsterdam residents photograph and send in with their service requests.
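The article doesn’t say how the image-recognition component is built, but a typical way to approach this kind of scene recognition is to fine-tune a pretrained image classifier on a city’s own labeled photos. Here is a minimal sketch, assuming PyTorch and torchvision; the class names and the omitted data pipeline are illustrative assumptions, not Signal’s actual setup.

```python
# Minimal sketch: adapting a pretrained image classifier to service-request photos.
# The scene categories and training setup are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

# Illustrative scene categories a city might want to recognize.
classes = ["litter", "broken_traffic_light", "pothole", "other"]

# Start from an ImageNet-pretrained ResNet and replace its final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(classes))

# Freeze the backbone so only the new classification head is trained at first.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# A training loop over a DataLoader of (photo_tensor, label_index) batches would
# go here; each step is the usual forward/backward/update:
#   logits = model(images); loss = criterion(logits, labels)
#   optimizer.zero_grad(); loss.backward(); optimizer.step()
```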

Above: A screenshot from the Signal app.


Data is human

Data, like that found in the texts and images that Amsterdam’s citizens submit to Signal, is the backbone of artificial intelligence. Data isn’t something cold or merely quantitative; it's inherently human. This data, especially civic data, is personal, even intimate, because it represents stories about people and their lives. Information about a neighborhood and its problems goes beyond anonymous census data. Civic data presents stories about the people who live in the communities and cities where the data was gathered.

It’s important to safeguard against bad actors weaponizing the data being collected or used. Every data point from a software application, whether a measure of usage or a post on a social network or platform, is made by humans. There is value in applying AI to urban challenges, but that value has to come with explicit protections and safeguards for privacy and citizen safety. The Signal team’s commitments to citizens’ rights as they design with citizens in mind offer a few early proactive examples that others might learn from.

The opinions in this post are those of the author. Caroline Sinders was recently a Writer in Residence at PAIR (People + AI Research) at Google. Caroline is also a research fellow with the digital program at the Harvard Kennedy School, as well as an artist and design researcher. Her work has been shown and discussed at MoMA PS1, the Victoria and Albert Museum, the Channels Biennale, Eyeo Festival, SXSW, IxDA, Re:publica as well as others. 

Original illustrations by Mahima Pushkarna, a UX Designer on Google's People + AI Research team
