Recent years have witnessed the widespread deployment of machine-learning-based decision support systems in areas such as face recognition, recommender systems, autonomous driving, and medical treatment. Many of these systems are constructed as black boxes whose internal logic is hidden from users. Overreliance on such uninterpretable black-box models poses great risks and raises ethical issues for society. Consider the story recounted by Freitas, in which a black-box classifier was applied in a military context: the military trained a classifier to distinguish enemy tanks from friendly tanks. The classifier achieved high accuracy on the test set, but performed very poorly when used in the field. It was later discovered that photos of friendly tanks had been taken on sunny days, while photos of enemy tanks had been taken on overcast days (see Figure 1 for an example). As Pasquale argues in his book, black-box models are turning our cities into a “black box society” governed by “secret algorithms protected by industrial secrecy, legal protections, obfuscation, so that intentional or unintentional discrimination becomes invisible and mitigation becomes impossible.”
From a technical perspective, developing interpretable ML models and intelligent systems is a proactive way to prevent future “smart cities” from becoming “black box cities”. In this area, the members of BIGSCity have produced many innovative works:
Alex A. Freitas. 2014. Comprehensible classification models: A position paper. ACM SIGKDD Explorations Newsletter 15, 1 (2014), 1–10.
Frank Pasquale. 2015. The Black Box Society: The Secret Algorithms that Control Money and Information. Harvard University Press.
Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2018. A survey of methods for explaining black box models. ACM Computing Surveys 51, 5 (2018), Article 93.