The advent of social apps, smartphones, and ubiquitous computing has brought the biggest transformation to our day-to-day life since the industrial revolution. The incredible pace at which new and disruptive services continue to emerge challenges our perception of privacy. An important part of this challenge lies in devising agile methods and frameworks that keep pace with this rapidly evolving cyber reality, so that we can develop privacy-preserving systems that align with users' evolving privacy expectations.
Tackling these issues requires a multidisciplinary approach that brings together computer scientists, formal methods experts, and privacy researchers. Contextual integrity (CI) addresses this challenge by offering a model for conceptualizing privacy that is able to bridge scientific and technical approaches, on the one hand, with ethical, legal, and policy approaches, on the other. CI’s bedrock claim is that protecting privacy means protecting appropriate information flows. It further stipulates that appropriate information flows are flows that comport with contextual informational norms (or rules), specified by the actors (senders, recipients, and subjects), attributes (the type of information at hand), and transmission principles (the constraints imposed on the flow).
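To make the structure of a CI norm concrete, here is a minimal sketch of how the five parameters might be represented as a record; all field names and example values are illustrative assumptions, not the project's actual encoding.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    """Hypothetical encoding of a contextual-integrity norm."""
    sender: str      # actor sending the information
    recipient: str   # actor receiving the information
    subject: str     # actor the information is about
    attribute: str   # type of information at hand (e.g. "location")
    principle: str   # transmission principle constraining the flow
    allowed: bool    # whether the flow is deemed appropriate

# An illustrative norm: a doctor may share a patient's medical
# record with an insurer only with the patient's explicit consent.
norm = Norm("doctor", "insurer", "patient",
            "medical record", "with explicit consent", True)
print(norm.attribute)  # -> medical record
```

A flow is then judged appropriate if it matches a norm marked as allowed; any parameter mismatch (a different recipient, a different transmission principle) makes it a different flow governed by different norms.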
In our work we are guided by the theory of contextual integrity in exploring ways of capturing societal privacy norms that can be verified for consistency and enforced by the system.
Our team comprises researchers across disciplines and institutions; this is a collaborative project between Princeton University and New York University.
Designing programmable privacy logic frameworks that correspond to social, ethical, and legal norms has been a fundamentally hard problem. Contextual integrity (CI) (Nissenbaum, 2010) offers a model for conceptualizing privacy that is able to bridge technical design with ethical, legal, and policy approaches. While CI is capable of capturing the various components of contextual privacy in theory, it is challenging to discover and formally express these norms in operational terms. In the following, we propose a crowdsourcing method for the automated discovery of contextual norms. To evaluate the effectiveness and scalability of our approach, we conducted an extensive survey on Amazon’s Mechanical Turk (AMT) with more than 450 participants and 1400 questions. The paper has three main takeaways: First, we demonstrate the ability to generate survey questions corresponding to privacy norms within any context. Second, we show that crowdsourcing enables the discovery of norms from these questions with strong majoritarian consensus among users. Finally, we demonstrate how the norms thus discovered can be encoded into a formal logic to automatically verify their consistency.
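One simple form of the consistency check described above can be sketched as follows: represent each crowdsourced norm as a flow (sender, recipient, subject, attribute, transmission principle) paired with an allow/deny verdict, and flag flows assigned contradictory verdicts. This toy example uses plain tuples and is only an assumption about the shape of the problem; the project itself encodes norms in a formal logic rather than this ad hoc structure.

```python
from itertools import combinations

# Each norm: ((sender, recipient, subject, attribute, principle), allowed).
# The concrete flows below are made up for illustration.
norms = [
    (("doctor", "insurer", "patient", "medical record", "consent"), True),
    (("doctor", "insurer", "patient", "medical record", "consent"), False),
    (("app", "advertiser", "user", "location", "notice"), False),
]

def conflicts(norms):
    """Return every flow that is both allowed and denied."""
    return [a[0] for a, b in combinations(norms, 2)
            if a[0] == b[0] and a[1] != b[1]]

# The first two norms assign opposite verdicts to the same flow,
# so that flow is reported as inconsistent.
print(conflicts(norms))
```

A real formal-logic encoding would also catch indirect inconsistencies (e.g. contradictions that arise only once implication between norms is considered), which a pairwise tuple comparison like this cannot.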