A theory of digital ID premised on Helen Nissenbaum’s theory of privacy
Digital identity systems are rapidly rolling out across countries and localities around the world. Digital ID credentials are quickly replacing legacy ones, with experts predicting that nearly half of all identity credentials in circulation will be smart credentials by 2023. These systems are also increasingly integrating biometric data, including fingerprints and facial recognition technologies, with some estimating that 3.6 billion people worldwide will carry digital IDs with embedded biometrics by 2021.
These trends are in part motivated by the United Nations 2030 Agenda for Sustainable Development. Viewed as a critical tool for furthering inclusion and access, providing “legal identity for all” by 2030 is one of the core SDGs.¹ Although some form of legal identity is no doubt essential for participation in, and access to, a wide array of services in modern society, it is worth stopping to critically examine these trends. While most of the criticism levied to date has focused on conventional privacy and security concerns (particularly where biometrics are employed), there is a deeper shortcoming facing many of these digital ID schemes.
Many digital ID systems are designed and built as monolithic systems that fail to account for context. Examples include national ID systems that require the use of a single, high-assurance (perhaps even biometric-enabled) digital identity credential to access a wide array of public and private sector services.² Not only do these systems raise the full gamut of ordinary privacy and surveillance concerns, but they also lose sight of the harms that an a-contextual approach has already caused in our digital lives to date. I'd like to propose an alternative: a theory of digital ID premised on Helen Nissenbaum's theory of privacy as contextual integrity.³
According to Nissenbaum's theory of privacy, "contextual integrity ties adequate protection for privacy to norms of specific contexts, demanding that information gathering and dissemination be appropriate to that context and obey the governing norms of distribution within it."⁴ A simple example of norms of appropriateness is that while I would likely find it acceptable for my doctor to ask about my weight, I likely would not deem it appropriate for my employer to do so. As for norms of distribution, an example is that while I might deem it appropriate for my doctor to share my prescription information with my pharmacist or another doctor or specialist treating me (on the condition that it remains confidential), I probably would not be OK with my doctor sharing the same information with my employer (at least not without my consent). The real world is replete with such easy-to-intuit examples.
My hypothesis is that we have created a privacy disaster in our digital lives because we have treated the "online" or "digital" space as a single monolithic context. For starters, this is an oversimplification of reality. For example, we can all recognize that a medical chatbot is a wholly different context than an individual's Twitter feed. Nevertheless, we use mostly the same tools (privacy notices, terms and conditions, security protocols) for both. This reductionist collapse of context has also resulted in dangerously inadequate reliance on the same deficient "notice and choice" consent-based paradigm for legitimizing data processing (a decidedly one-size-fits-none approach). Worse yet, we treat the "online" or "digital" as a single context devoid of any norms at all, tolerating behaviors and practices we would never tolerate in the "real world."
My fear is that we will recreate this monolithic approach when it comes to implementing digital ID schemes, whether national or private-sector derived. In the same way that our approach to online consent is too simplistic, digital identity solutions that treat the "digital" as one single context are doomed to fail. They are destined to create an identity solution that is simultaneously under- and over-inclusive, either under- or over-identifying us depending on the context. Take, for example, a government-backed, biometric-enabled national ID solution that provides a high level of assurance about an individual. This approach may be necessary, useful, or even desirable in some contexts (e.g. border security or certain financial services) but would risk over-identifying the individual in other, lower-assurance contexts (e.g. subscribing to a publication or opening a gym membership).
Some emerging ID frameworks already nod to this theory, albeit implicitly. Take, for example, the emerging Draft Provisions on the Cross-Border Recognition of Identity Management and Trust Services from the U.N. Commission on International Trade Law (UNCITRAL). The Draft defines "identity" as "a set of attributes that allows the subject to be sufficiently distinguished/uniquely describes the subject within a given context" and defines "identification" as "the process of collecting, verifying, and validating sufficient attributes to define and confirm the identity of a subject within a specific context."⁵ Another example is NIST's Digital Identity Guidelines, which encourage a risk-based approach to identity standards based heavily on context.⁶ Unfortunately, some still refer to the "digital ID context" as if it were singular.⁷
It is time for the digital ID community to embrace a meaningful theory of contextual identity, lest we consign ourselves to the same fate all over again.
— — — — — — — — — — — — — –
¹ See SDG Target 16.9 (“By 2030, provide legal identity for all, including birth registration.”).
² The credential's use in commercial settings is in part what the Supreme Court of India struck down in its ruling on the Aadhaar identity scheme (while upholding the constitutionality of Aadhaar in general).
³ Helen Nissenbaum, PRIVACY IN CONTEXT: TECHNOLOGY, POLICY AND THE INTEGRITY OF SOCIAL LIFE (2010).
⁴ Helen Nissenbaum, Privacy as Contextual Integrity, WASH. L. REV., Vol. 79, No. 1, pp. 119-157 (2004).
⁵ Draft Provisions on the Cross-border Recognition of Identity Management and Trust Services, United Nations Commission on International Trade Law, Working Group IV (Electronic Commerce), A/CN.9/WG.IV/WP.160 (16 September 2019), available at https://uncitral.un.org/en/working_groups/4/electronic_commerce.
⁶ See NIST Special Publication 800-63-3 (“In analyzing risks, the agency . . . should consider the context and the nature of the persons or entities affected to decide the relative significance of these harms […] The analysis of harms to agency programs or other public interests depends strongly on the context.”).
⁷ This is particularly concerning in conversations about building an “identity layer for the Web” (where “the Web” is presumed to be one context and where “the Web” is presumed to be separate from the offline/real world, which is no longer the case).
All Rights Reserved for Elizabeth M. Renieris