Cities have emerged as test beds for digital innovation. Data-collecting devices, such as sensors and cameras, have enabled fine-grained monitoring of public services including urban transit, energy distribution, and waste management, yielding tremendous potential for improvements in efficiency and sustainability. At the same time, there is rising public awareness that, without clear guidelines or sufficient safeguards, data collection and use in both public and private spaces can negatively affect a broad spectrum of human rights and freedoms. To move forward productively with intelligent community projects and design them to meet their full potential in serving the public interest, a consideration of rights and risks is essential.
The right most commonly considered in intelligent community projects is the right to privacy. Indeed, in the digital age, the right to privacy has come to be described as a “guarantor” or a “precondition” for the enjoyment of other human rights and freedoms. The complexity of data flows, however, can make it challenging for individuals to discern, much less self-manage, the range of risks and rights they engage when consenting to the use of their personal data. Inadequate privacy protection can chill the exercise of other rights, such as freedom of expression or assembly in public spaces. As cities enter public-private partnerships (PPPs) that seek to leverage data collection and advanced analytics such as artificial intelligence (AI) to improve or augment public services, greater reliance on digital systems will require new processes for identifying and mitigating the risks those systems pose to human rights and freedoms.