
3. Digital ID and Society
“Digital ID” is an umbrella term used to describe the wide range of ways a person's identity, and information about them, can be verified, tracked, and monitored using digital technologies.
In governments around the world, these systems vary enormously and could refer to anything from a single-sign-on function across government services to facial recognition systems to compulsory digital identity cards.
This lack of precision means the term can easily be misunderstood and different kinds of digital ID can be confused or conflated. For instance:
a person whose life is made easier by a single NHS patient record may not also support the roll-out of high street facial recognition vans by law enforcement agencies;
someone who wants to more easily open a bank account may not want to make use of biometric technologies to check their tax code; and
people who want children to be safer online may feel it is an invasion of privacy for their children’s ages to be guessed by age-estimation tools.
Digital ID is a broad church of technologies, products, and services, and without precise terminology very different applications and ideas can easily be bundled together.
This Government’s policies, which are set out in more detail in Section Two (above), are focussed largely on driving efficiency. The Times has reported that Department for Science, Innovation and Technology Secretary of State Peter Kyle
wants to avoid arguments about civil liberties by focusing on more practical steps, which he argues will make it ‘much easier’ for people to interact with government and commercial services. [18]
Disregarding the rights and broader social implications of these “more practical steps” not only risks failing to meet the Public Sector Equality Duty, it also risks creating new problems that could increase social divisions, create new forms of injustice, and decrease the efficacy of Government.
As such, the Government’s digital ID policies need to be understood in the broader socio-technical landscape — specifically in the context of other digital technologies, ambitions for digital government, and the parallel expansion of digital ID in policing and immigration.
3.1 Digital ID and information storage
Unlike non-digital forms of identity, digital identity does not just offer proof of who we are; some systems also collect and store additional information about us, meaning that some digital ID systems become central repositories of information about a person’s activities and behaviours. Research by the digital rights organisation Engine Room found that:
Gaining a legal identity can empower people in a variety of ways, but the collection of a lot of personal data about a large group can also act as a surveillance mechanism… ignoring the infrastructure and mass data gathering behind these systems, or the personal nature of the data gathered, could put already vulnerable populations at risk of harm. [19]
Some digital identity systems also use biometric methods of identification and verification. [20] The “capture, retention and use of large and complex datasets, human samples and biometric identifiers” is a complex field that needs careful stewardship, and the Home Office’s Biometrics and Forensics Ethics Group acknowledges that guardrails must be put in place to ensure that biometrics do not “selectively disadvantage any group in society, particularly those most vulnerable”. [21]
Facial recognition technology (FRT) is one form of digital biometric verification that is associated with digital ID systems. Groundbreaking research by Joy Buolamwini and Timnit Gebru found that facial recognition technologies “perform better on lighter [skinned] subjects as a whole than on darker [skinned] subjects” and were more likely to successfully recognise men than women. [22] As well as having potentially high error rates, FRT is also a more invasive, surveillance-oriented form of identity recognition than many other forms of digital ID, such as digital identity cards and verification services.
3.2 Accidental Techno-Authoritarianism?
Digital identity is a government technology that can — sometimes inadvertently — exacerbate authoritarian outcomes. This Government’s focus on delivering both law and order and higher productivity will require its technologies to be supported by a range of robust accountability mechanisms, including transparency and good governance.
In the private sector, it can be sufficient for a digital tool to deliver nothing more than convenience, but the unique role of Government in society means the purpose of its digital tools is more complex than those of retailers and train companies. While a consumer may be content to make trade-offs to get the cheapest prices at a supermarket or the fastest route through a check-out, a voter will expect a Government committed to delivering “security, fairness and opportunity for all” [23] to deliver efficiency as well as safety, prosperity, and justice — not instead of them.
As civic technologist Alex Blandford says:
We have built a situation where law enforcement gets most of the benefits of a database state, and citizens get few of the benefits of government being a bit smarter and needing [to receive] fewer prods to give you what you are owed. [24]
The techno-authoritarianism of “the database state” is sometimes ideological, sometimes the result of optimistic managerialism — the idea that linking a little more data will make everything better — and sometimes a combination of the two. Digital government expert David Eaves notes that democratic governance is essential to avoiding this drift towards accidental authoritarianism so that “digital technologies are harnessed by the state to focus on the creation of public good and support individual liberty”. [25]
It is also the case that the underlying technologies used to run and manage these systems are not neutral, and many have been built on discriminatory ideological legacies that have become obscured over time and through technological diffusion. As political scientist Virginia Eubanks says,
We don’t look at the way that the newest tools — algorithms, machine learning, artificial intelligence — are built on the deep social programming of things that went before, like the poorhouse, scientific charity, and eugenics. [26]
This means that intended efficiencies in one part of the system can have a range of unintended consequences that need to be identified and mitigated as, and ideally before, they arise. Digital exclusion, in particular, can mean that the digitisation of essential services can result in the sometimes inadvertent intensification of existing social and economic inequalities. Those more likely to experience digital exclusion include:
older people
disabled people
low-income families
people of colour
people who are unemployed
refugees, asylum seekers, and migrants
people who are homeless or at risk of homelessness
people who use English as a second language
people who experience or are at risk of experiencing social isolation
people with fewer education qualifications
people who live in rural areas
people who live in social housing
and people who are navigating issues such as addiction and domestic abuse. [27]
Delivering universal public benefit in such a complex landscape is a challenge, but as Martha Lane Fox, the former UK Digital Champion whose recommendations led to the creation of the Government Digital Service, has said, good digital services “reach the furthest first and leave no one behind”. [28]
3.3 Digital ID and Policing
As the polling results in Sections Four and Five show, digital ID policies have a complex relationship with policing and law and order, and the expansionist tendencies of police forces’ use of data and technology are a factor in shaping the overall trust landscape.
FRT is now commonly used at UK borders and is increasingly used as a tool for law enforcement. Live Facial Recognition (LFR) is currently used by the Metropolitan Police across London “as a real-time aid to help officers locate people on a ‘watchlist’ who are sought by the police”. [29] This works by placing LFR vans in “Zones of Recognition”, such as high streets and busy public areas, to “monitor facial images … [which] are searched against a Watchlist of images of people who are wanted, or based on intelligence are suspected of posing a risk of harm to themselves or others.”
The Met’s own policy document notes that:
Areas subject of particular debate and scrutiny relate to the intrusion into civil liberties and the instances of false-reporting relating to the accuracy of LFR, the potential for wide-scale monitoring through the use of LFR, and the possibility for automated decision making as a result. [sic] [30]
Unlike some other forms of digital ID, this is proactive, compulsory, and happens to people in public spaces, often without their consent.
A 2022 audit of FRT in policing in England and Wales found a series of critical failures with regard to privacy, discrimination, accountability, and oversight, and investigations by advocacy group Liberty found that UK police forces had conducted facial recognition searches against the database of British passport holders. [31] In 2022, the Racial Justice Network reported that several forces had run identity checks against the Home Office immigration database, and its findings show that the use of biometric monitoring tools such as fingerprint scanning and facial recognition reflects racist and discriminatory policing practices:
Black people are 4 times more likely to be stopped and scanned than a white person. Asian people are 2 times more likely to be stopped and scanned than a white person. Men are 12 times more likely to be stopped and scanned than those identified as 'Women' or 'Unknown' by officers. [32]
This is supported by Kristian Lum and William Isaac’s 2016 Royal Statistical Society paper on predictive policing which found that:
police records do not measure crime. They measure some complex interaction between criminality, policing strategy, and community–police relations. [33]
Amnesty International’s Matt Mahmoudi has described facial recognition as a technology that “turns our identities against us and undermines human rights;” [34] this is an unintended consequence that this Government must commit to avoiding.
[18] Chris Smyth and Mark Sellman, “Government-backed ‘digital IDs’ to let people open bank accounts”, The Times (July 22 2024), https://www.thetimes.com/uk/politics/article/government-backed-digital-ids-will-provide-trust-mark-for-paying-taxes-808gxzww9
[19] The Engine Room, “Understanding the Lived Effects of Digital ID” (2018), https://digitalid.theengineroom.org/
[20] “Biometric Recognition”, Information Commissioner’s Office, https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/lawful-basis/biometric-data-guidance-biometric-recognition/biometric-recognition/
[21] “Biometric Recognition”, Information Commissioner’s Office, https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/lawful-basis/biometric-data-guidance-biometric-recognition/biometric-recognition/
[22] Joy Buolamwini and Timnit Gebru, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification", Conference on Fairness, Accountability, and Transparency, Proceedings of Machine Learning Research 81: 1-15 (2018), https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
[23] “The King’s Speech”, The Prime Minister’s Office (July 2024), p. 39, https://assets.publishing.service.gov.uk/media/6697f5c10808eaf43b50d18e/The_King_s_Speech_2024_background_briefing_notes.pdf
[25] David Eaves, “The Narrow Corridor and the Future of Digital Government”, Institute for Innovation and Public Purpose (30 August 2024), https://medium.com/iipp-blog/the-narrow-corridor-and-the-future-of-digital-government-cf66718d3781
[26] “Public Thinker: Virginia Eubanks on Digital Surveillance and People Power”, Public Books, 9 July 2020, via Shannon Mattern, A City Is Not a Computer: Other Urban Intelligences (Princeton University Press, 2021), p. 13. See also: Ruha Benjamin, Race after technology: abolitionist tools for the new Jim Code (Polity, 2019); Abeba Birhane, "Algorithmic injustice: a relational ethics approach", Patterns 2:2, 2021; Catherine D'Ignazio and Lauren F. Klein, Data Feminism (MIT Press 2023); Safiya Umoja Noble, Algorithms of oppression: how search engines reinforce racism (New York University Press 2018)
[27] For more on this, see Dominique Barron and Anna Dent, “Affordable, Accessible, and Easy-to-Use: A Radically Inclusive Approach to Building a Better Digital Society”, Promising Trouble (May 2024), https://www.promisingtrouble.net/blog/a-radically-inclusive-approach-to-digital-society
[28] “Martha Lane Fox sets out key digital proposals for the NHS”, NHS England (8 December 2015), https://www.england.nhs.uk/2015/12/martha-lane-fox/
[29] “Facial Recognition Technology”, Metropolitan Police.
[30] Metropolitan Police, “MPS LFR Policy Document: Direction for the MPS Deployment of overt Live Facial Recognition Technology to locate person(s) on a Watchlist” (March 2024), https://www.met.police.uk/SysSiteAssets/media/downloads/force-content/met/advice/lfr/policy-documents/lfr-policy-document2.pdf
[31] Evani Radiya-Dixit, “A Sociotechnical Audit: Assessing Police Use of Facial Recognition” (Cambridge: Minderoo Centre for Technology and Democracy, 2022); Harriet Clugston, “Police secretly conducting facial recognition searches of passport database”, Liberty (8 January 2024), https://libertyinvestigates.org.uk/articles/police-secretly-conducting-facial-recognition-searches-of-passport-database/
[32] L. Loyola-Hernández, C. Coleman, P. Wangari-Jones and J. Carey, “#HandsOffOurBiodata: Mobilising against police use of biometric fingerprint and facial recognition technology”, The Racial Justice Network and Yorkshire Resists, UK (2022), https://racialjusticenetwork.co.uk/wp-content/uploads/2022/10/sts-20-22-foi-report.pdf
[33] Kristian Lum, William Isaac, “To Predict and Serve?”, Significance, Volume 13, Issue 5, October 2016, Pages 14–19, https://doi.org/10.1111/j.1740-9713.2016.00960.x
[34] “Ban dangerous facial recognition technology that amplifies racist policing”, Amnesty International (26 January 2021), https://www.amnesty.org/en/latest/press-release/2021/01/ban-dangerous-facial-recognition-technology-that-amplifies-racist-policing/