This is a paper Kevin and I wrote a couple of years ago and submitted to an academic conference, I forget which one. It wasn’t accepted, as the selection committee disagreed with our hypothesis; but re-reading it now, it seems prescient. While some of the details may be outdated, the central problem remains and has become even more urgent. As such, we decided to publish it here as it stands, without further delay.
Abstract
It has been argued that we are entering an era of post-democracy (Crouch 2014; 2011) in which the apparatus of the democratic state (parliament, courts and so on) endures, but its power is diminished. As we witness this rapid decline in the power and authority of the citizenry, and indeed of the state itself, there has been an extraordinary rise in what is coming to be described as digital citizenship (Siapera, 2018). Although we may all remain members of a nation-state, so too are we becoming members of a community best described as Involuntary Citizens of the global Data State. This paper begins by explaining the concept of the social contract in order to demonstrate the deterioration of the liberal democratic state. It then discusses the concept of the Data State, and shows why legal processes are not just insufficient but redundant in this new world order, where data rules and citizens lose. We then explore cases in the US and UK health sectors where data is captured and processed for larger, most often commercial, purposes, decisively showing that the ubiquitous, transnational Data State will continue silently to elude the control of everyday citizens whether or not they consent to its dominion. We conclude by discussing the similarities between the Data State and the traditional Nation State, and why the existence of the Data State needs to be urgently addressed. The Involuntary Citizen concept contributes a new angle to the existing literature on data protection and surveillance. It reaches further, as it is intended to represent the impact of Big Data, process automation and AI applications in the round. It is not primarily concerned with the act of observing but rather with the consequences of the uncontrollable availability of digital traces and the unpredictable nature of their application.
Introduction
We have identified a crisis that threatens the legitimacy of the democratic state. We argue that the unfettered application of data capture and processing technologies is creating a new form of citizenship, in which the commonly understood relationship between the citizen and the state, as set out by John Locke, is severely challenged. If the evolution of legal practice cannot address the phenomena we describe, and so far the evidence suggests that it will not, then the threat posed to liberal democratic societies is both profound and urgent. We explore the influx of surveillance technologies being deployed in the health sector and hypothesise that the future of the industry will be subject to unintended barbaric consequences as a result of the race towards efficiency in the UK and profit in the US. To be clear, we are not simply discussing the role of surveillance capitalism, although it is, of course, part of the phenomena we are concerned with. The Involuntary Citizen concept reaches further, as it is intended to represent the impact of Big Data, process automation and AI applications in the round. It is not primarily concerned with the act of observing but rather with the consequences of the uncontrollable availability of digital traces and the unpredictable nature of their application. The philosopher of technology Benjamin Bratton summarises this poetically, with his sinister prophecy that “as the ‘eyes’ of the state evolve, its bones and blood will follow.”1
The Social Contract Explained
The social contract, a concept described by John Locke, sets out that citizens voluntarily enter into an exchange with the governing body of the time for the protection of their own rights. Historically, in western liberal democratic societies, the texts produced by the 17th-century English philosopher laid the groundwork for what we now accept as essential to a well-functioning society.2 In particular, he tried to answer the question of who should rule and on what basis. His notion of a “voluntary agreement”3 and “mutual consent”4 is of particular relevance for our purposes. Today, this concept is commonly known as the social contract and is comprehensively articulated at paragraph [192]5:
“…no government can have a right to obedience from a people who haven’t freely consented to it; and they can’t be supposed to have done that until either they are put into a full state of liberty to choose their government and governors or at least they have standing laws to which they have given their free consent directly or through their representatives…”
Why is this relevant? Because it sets out that subjects must have the free will to choose who governs them or, at least, the ability to decide on the laws by which they are governed (through their elected representatives). In the current climate, tools have been deployed that obfuscate this relationship. Political bots are prime examples of tools used to spread disinformation and distort political discourse. Their “human-like” quality, combined with their actions, means they have the very real ability to inflict several layers of damage on society. Indeed, rough calculations presented by the National Bureau of Economic Research suggest bots may have contributed almost 2 percentage points to the pro-“Leave” vote during Brexit and at least 3 percentage points to the Trump campaign in the U.S. presidential race.6 We argue that the involuntary subjugation of citizens to unfettered data capture and processing technologies is a further example that a new statism is emerging, one in which the social contract as conceived by John Locke is no longer at the core of liberal democratic societies. Instead, it has been replaced by a phenomenon that will result in barbaric consequences.7 This form of tyranny creates new subjects every day by exploiting the necessary and unavoidable traces we cannot hide or own but which can be exchanged. In so doing, these traces can be used against us.
Legal Practice Cannot Address These Phenomena
A significant problem that democratic and rights-based societies face in this era is that the digital subject does not own or control its own substance – there is no corpus to habeas, no property to protect. That is, private citizens cannot choose to hide, monitor, control or change the traces open to digital capture and manipulation. Privacy laws, as they stand, are insufficient.8 The problem of international jurisdiction in this new global computational era highlights a situation in which the current laws are found lacking:9
“What if a data object is originated in Beijing by a Japanese citizen, uploaded to a server off the shores of Vladivostock in international waters, and then used by a kid at an internet café in Las Vegas to commit a crime in Brazil? Does one country’s data privacy and prosecution laws have clear means to control this?”
In this example, the new world order challenges the current concept of jurisdiction and, in so doing, legality. Furthermore, we simply cannot anticipate the uses to which the multitudinous facets of our digital existence may be put, to manage, manipulate, imprison, or harm, in ways as yet unimagined, any human being who is also a digital subject. These two points call into question the legitimacy and utility of the Nation State and its set of traditional tools.
Furthermore, even if the Nation State were to “catch up”, the Data State’s strength and durability would not weaken. Over roughly the same period of time as we have seen the rapid decline in the power and authority of the citizenry, and indeed of the state itself, there has been an extraordinary rise in what is coming to be described as digital citizenship (Siapera, 2018). Although we may all remain members of a nation-state, so too are we becoming members of a community best described as the Digital Citizens of a global state: an entity overseen by huge private corporations whose income is largely derived from selling data collected from and by the population of nation-states. They act in concert with nation-states but are largely if not wholly free to act in their own best interests, and certainly without undue reference to the political expression of the wishes or choices of citizens of those states.
In the next section we will describe a series of examples of what we consider to be occasions where people find themselves being treated as Involuntary subjects of commercial practices, be they derived from the Nation State itself, from commercial enterprises directly, or from some combination of the two. We hope to demonstrate that in going about our daily lives we currently and increasingly face situations in which we are subject to both direct and indirect observation, with negative consequences when algorithmic tools are developed to monetise our lives as we live them. More seriously, we seek to evidence that the individual is not in a position to anticipate or prevent this exploitation, because of the fundamental nature of the data involved.
Evidence that Unfettered Data Capture and Processing is Taking Place
Our research shows that today the very act of living and moving through the world creates a record that can be owned and disposed of by entities. All of the traces of our digital past (social media use, TV watching, phone usage, car ownership, banking, shopping, energy use, travelling, eating, drinking) or digitalised present (face, voice, DNA, weight, height, clothes, gait, typing pattern, eye movements) can be recorded (and exploited) for various purposes.
Telehealth
The data gathered and stored for the purposes of remote health, or telehealth, include race, gender, age, socio-economic status, physical and mental medical history, physiological metrics, and genetic makeup. Telehealth and telemedicine, digital epidemiology, and digital disease detection are growing areas, and all raise questions around “privacy, liability, agency, and intellectual property issues”.10 The Health Resources and Services Administration defines telehealth as “the use of electronic information and telecommunication to support long-distance health care, health-related education, and public health administration.”11 Telemedicine refers to remote diagnosis and prescription, and involves products being distributed by direct supply contracts between manufacturers and end users or via a third-party supplier.12
Sensors in a variety of household and consumer electronics can extract and monitor health metrics. Mobile devices including cell phones can perform ECGs, DIY blood tests, and be used as thermometers; patients can be prompted to check their weight, pulse, or oxygen levels, and enter results into mobile patient portals. Jewellery and accessories are also viable metric-gatherers, already used for health monitoring. For example, Leaf Urban Smart Jewellery measures activity and stress, and tracks sleep patterns;13 Fitbit produces wireless-enabled wearable technology that measures fitness metrics; and Apple Watch has been used for a heart study with Stanford Medicine; it detects cardiac issues and prompts a telehealth doctor intervention within minutes.14 There is potential for newly developed smart fabric technology to record far more detailed data on movement and activity.15
It is proposed that avatars will be able to ask questions precisely tailored to individuals, both by using the data stored about them and iteratively in response to the answers given. Based on a patient’s genetic makeup, and on what is known about how similar patients respond to various therapies, remote doctors advised by AI engines will use telemedicine to confer and quickly decide how best to diagnose and treat that patient.16 Currently, AIs are not legally permitted to diagnose or prescribe for a patient without the input of a physician, but they can advise health professionals.17 We see three flaws with this approach. First, despite jurisdictions clearly stating that they intend these technologies only to assist human decision-making, not replace it, there is concern that these tools will eventually lead the human decision maker to trust the computer over his or her own judgement. Second, as these technologies develop, they will be in a position to provide more consistent, optimised and arguably “better” advice. Third, and this is the most relevant for the purposes of this paper, where will they retrieve their data from? Will citizens be able to discern that the data they provide through a variety of sensors feeds into the healthcare they eventually receive,18 or the economy that they rely on?19 Will their permission or consent be required? Will they, in fact, have the choice to opt out if they wish to access healthcare or qualify for insurance? In the context of the UK, this would be in the interests of making cost efficiencies for the state-funded National Health Service (NHS). In the US, as data subjects of Big Pharma and the medical-industrial complex, the value of your data may subsidise the cost of healthcare.
In the UK and Europe, GDPR legislation protects citizens from data collection by social media entities, but a huge amount of data remains publicly available for potential scraping and analysis by the state or by state-licensed bodies such as the NHS or, perhaps currently more likely, the DWP for health and disability back-to-work assessments, alongside medical and benefits records and other state-collated data. The US precedents show that many kinds of metric can be used by machine learning algorithms, employing neural nets and decision-tree “forests”, to make health diagnoses and predictions. While in the US sharing data with third parties is arguably legally protected commercial speech under the First Amendment (there is a precedent,20 as Mason Marks notes21), this may be analogous to data sharing by the state in the UK and Europe.
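To make concrete what such a decision-tree “forest” involves, the following minimal sketch trains one on synthetic records. The feature names, thresholds and labelling rule are our own inventions for illustration; they are not drawn from any system cited in this paper.

```python
# Illustrative sketch only: synthetic records and invented feature names,
# not any real clinical dataset. It shows how a decision-tree "forest"
# (here, scikit-learn's RandomForestClassifier) can turn routine metrics
# into a health prediction once enough records have been captured.
import random
from sklearn.ensemble import RandomForestClassifier

random.seed(0)

def make_record():
    """One hypothetical data subject: heart rate, daily steps, sleep hours."""
    hr = random.gauss(70, 12)
    steps = random.gauss(7000, 2500)
    sleep = random.gauss(7, 1.5)
    # Invented labelling rule standing in for an unknown clinical ground truth.
    at_risk = 1 if (hr > 80 and steps < 5000) else 0
    return [hr, steps, sleep], at_risk

records = [make_record() for _ in range(500)]
X = [features for features, _ in records]
y = [label for _, label in records]
# Guarantee both classes appear regardless of the random draw.
X += [[95.0, 3000.0, 5.0], [60.0, 9000.0, 8.0]]
y += [1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# A new data subject is scored like any other row, consent or no.
print(model.predict([[95.0, 3000.0, 5.5]]))
```

The point of the sketch is not the model itself but the asymmetry it illustrates: once the training data exists, any new record can be scored without the subject’s involvement or knowledge.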
Mental Health
IoMT devices can also be used for mental health monitoring and data collection; physiological metrics from wearables, the timing and volume of calls and texts, and voice monitoring from mobile phones or voice-activated devices such as Alexa could all feed in to build up a picture of a person’s mental health over time. This data is expected to improve medical understanding of mental health issues, and the prediction of crises, when combined with a user’s medical history, user data gathered from social media websites, and machine learning algorithms built to diagnose mental illness. Apps such as the DARPA-funded Cogito Companion are being developed for mental health monitoring. Companion accesses existing mobile phone sensors – voice recorder, GPS, accelerometer and usage history – to detect changes in mood, activity levels, quality of sleep, social behaviour and movement patterns. It listens to everything a user says, analysing for vocal cues that signal mood alteration, examining tone, energy, fluidity of speaking and levels of engagement with a conversation.22
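The actual vocal-cue analysis in such products is proprietary, but a toy version of two of the cues mentioned above (energy, and “fluidity” understood crudely as the proportion of non-silent time) can be sketched from raw amplitude samples. The threshold and feature definitions here are our own assumptions, purely for illustration.

```python
# Toy sketch of vocal-cue feature extraction (not Cogito's actual method).
# From raw audio amplitudes it derives two of the cues mentioned above:
# overall energy, and "fluidity" as the fraction of non-silent time.
import math

def voice_features(samples, silence_threshold=0.05):
    """samples: sequence of floats in [-1, 1] from a voice recording."""
    energy = math.sqrt(sum(s * s for s in samples) / len(samples))  # RMS
    silent = sum(1 for s in samples if abs(s) < silence_threshold)
    fluidity = 1.0 - silent / len(samples)  # fraction of non-silent time
    return {"energy": energy, "fluidity": fluidity}

# A halting, quiet recording vs. a steady, louder one (synthetic data).
quiet = [0.01] * 80 + [0.2] * 20
steady = [0.4] * 100
print(voice_features(quiet))   # low energy, fluidity 0.2
print(voice_features(steady))  # higher energy, fluidity 1.0
```

Even this crude sketch shows how little raw material is needed: a stream of microphone amplitudes, available to any app with recording permission, already yields behavioural features that can be logged and compared over time.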
Suicide rates surged to a 30-year high in 2014, the last year for which the Centers for Disease Control and Prevention has data. According to Mason Marks in the Washington Post, last year more than one million Americans attempted suicide and 47,000 succeeded. In a recent academic study, machine learning was shown to have an 80 to 90 percent success rate in predicting suicide attempts up to two years in the future.23,24 Florida State and Vanderbilt University researchers are being federally funded to continue the study;25 meanwhile, social media providers are also investing in using data generated on their platforms for self-harm prediction. As the Florida/Vanderbilt study sums up:
“recent meta-analyses on hundreds of studies from the past 50 years indicate that the ability to predict future suicide attempts has always been at near chance levels. The primary reason for this lack of progress is that researchers have almost always used a single factor (i.e., a simple algorithm) to predict future suicide attempts (i.e., a complex classification problem; see Ribeiro et al., 2016b). Fortunately, ML represents a potentially effective approach for the development of complex algorithms capable of solving (or making substantial progress toward solving) complex classification problems.”
Facebook and Instagram have developed algorithms that score posts for suicide risk, while the startup Objective Zero’s app uses GPS data to infer suicide risk in veterans.26,27 Data protection laws in the US currently don’t apply to this data, as these companies are not health providers;28 Yale health law fellow Mason Marks argues that Facebook’s use of its ‘DeepText’ software, including making calls to police when a user is flagged as high-risk (which can lead to mandatory psychiatric evaluations), constitutes medical practice and should be regulated as such.29 Our main concern is that this is a medical issue being treated by a private company.
In the US, as Mason Marks points out, suicide prediction research is being pursued in two fields concurrently: one academic-medical, the other commercial tech provision. The first is slower due to academic and ethical standards, while tech firms have no such restrictions, being entirely unregulated: “in 2017, Facebook reportedly told advertisers that it could identify teens who feel “defeated,” “worthless” and “useless””.30 The implications and (mis)use of suicide R&D data could apply to any health research in the commercial sector, leading to bias in employment, credit and insurance, for example. Marks suggests implementing Balkin and Zittrain’s proposal for the legal imposition of ‘fiduciary duties’ on commercial entities; these require parties to act in the patient’s best interest. The concept of social media platforms constituting ‘information fiduciaries’ would extend to the use of all user data.31
Image posts on picture-led social media can be analysed for hue, saturation, brightness and imagery. Any involuntary voice recording can be scraped for tone, energy, fluidity of speaking and levels of engagement with a conversation; social media text posts for language use and specific words, and for friends’ comments and responses, in any language (Facebook’s DeepText algorithm is language-agnostic). Everyday electronic and wearable devices can gather physiological metrics around social behaviour, movement patterns, activity levels, voice, GPS and phone usage.
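The first step of the image analysis mentioned above can be sketched with the Python standard library alone: reducing a photo to mean hue, saturation and brightness, the kind of low-level features such pipelines start from. The pixel data here is synthetic; a real pipeline would decode actual uploaded images.

```python
# Minimal sketch: mean hue/saturation/brightness of an image's pixels,
# the kind of low-level features the analyses above would start from.
# Pixel data is synthetic; real pipelines would decode actual images.
import colorsys

def mean_hsv(pixels):
    """pixels: iterable of (r, g, b) tuples with channels in 0-255."""
    h_sum = s_sum = v_sum = 0.0
    n = 0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        h_sum += h
        s_sum += s
        v_sum += v
        n += 1
    return (h_sum / n, s_sum / n, v_sum / n)

# A dull grey image vs. a bright, saturated red one.
grey = [(128, 128, 128)] * 100
red = [(255, 0, 0)] * 100
print(mean_hsv(grey))  # saturation 0.0, brightness ~0.5
print(mean_hsv(red))   # saturation 1.0, brightness 1.0
```

Nothing in this computation requires the subject’s consent or even awareness; any publicly visible image can be reduced to such features and fed onward into whatever scoring model an operator chooses.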
Other Conditions/Treatments
Chronic conditions such as diabetes, cardiovascular conditions and cancer, in addition to an increasing geriatric population and a rising demand for healthcare centralisation and patient empowerment, are key telemedicine market drivers.32 IoMT devices can be used to monitor changes in existing conditions, and for remote diagnosis and prescription. Other innovative means of remote observation, such as digital phenotyping to monitor Parkinson’s disease via VR face-reading glasses, are also being developed.33
Researchers have claimed that machine learning based upon structural magnetic resonance imaging predicts psychopathic traits in adolescent offenders and could have a far-reaching impact on the diagnosis, treatment, and prediction of future behaviour. This line of research is designed to quantify personality traits and deploy the results to assess, adjust treatment(s) and predict subsequent behaviour. Classification models for discovering patterns in neuro-imaging datasets have been successfully tested on incarcerated youths for raised levels of psychopathic tendencies, which has implications not just for predictive health but also for predicting criminality. This has been proposed as an indicator for recidivism. An MRI scan deployed to create evidence of the likelihood of future criminality before sentencing lends a dystopian flavour to an extant justice system founded on the logic of personal responsibility and retributive/rehabilitative justice (Steele, V.R. et al. 2017). Being subject to this does not, in our view, make one an Involuntary Citizen, as a subject in prison has a different relationship to citizenship and data rights. But it does speak to the ever-increasing scope of the observational efforts of the state.
State of Data
The commercial Data State has created circumstances in which the private citizen is forced into negotiating part of their life away. For example, by privatising university education and creating insurmountable student debt, states have left many people facing an unpayable debt burden:
- Neo-liberal governments have privatised aspects of civil society such as access to the law or higher education.
- This contributes to a situation in which people are living with debt for life and hence subject to the algorithms of private companies for a variety of purposes.
- Those circumstances make these Involuntary Citizens subject to the disciplines that flow from that:
  - All are subject to surveillance and control.
  - There is no way to live (e.g. go on holiday, rent a car) without being subject to surveillance.
Young people have been forced into negotiating parts of their lives away in the cause of servicing an unrepayable debt burden to private debt holders they had never heard of before, off the back of multiple, separate algorithmic decisions to treat debts as junk on the basis of their postcode, gender, age, etc.34 The financialisation of the lives of the poor, to take one example, might well have once been the province and competency of the Nation State; the techne of managing the lives of the poor would not look the same in the hands of the public sector.
Out with the Old, in with the New
The brief moment when we were asked if we were OK with being subject to Internet surveillance by technologies such as cookies or device fingerprinting seems archaic, even nostalgic. This sense of nostalgia reflects how far in the past it already seems; or, to put it another way, it demonstrates the rapidly accelerating speed of change, and the corresponding urgency of addressing it. This systemic change is likely to speed up as more interconnected systems develop, in an accelerated iterative process. The lack of legislation and oversight is leading to an alarming high-frequency evolution of what Benjamin Bratton terms the Accidental Megastructure of global computation:35 a new international infrastructure that is rapidly emerging without any planning, global agreement or regulation. This megastructure is currently being orthogonally, algorithmically shaped by the pursuit of profit and efficiencies.
The speed at which we move around a store, be it virtual or real, is digitally measured; how and when our eyes linger is matched with a demographic profile in order to influence insurance offers or the price of a new telephone contract. Who can cope with, let alone predict or ‘manage’, this level of continuous abuse? We must all of us accept that we are now citizens of the Data State, and that it overlaps with, but is far more ubiquitous than, the Nation State. It is transnational by definition and most definitely not created or operated to serve its citizens. This reality is not simply a disturbing consequence of global capitalism; it is a direct threat to the continuing legitimacy, and hence the very existence, of democratic societies. For if the (legal) undermining of the Nation State renders it inadequate to protect and serve its citizens, what actually is it for?
Finally, it is important to distinguish between the citizen-as-member of a Nation State and the Involuntary Citizenship of which we speak. The obvious criticism we invite by using the term Involuntary Citizen is that only a minority of people in the world have chosen their citizenship, and next to none have that choice at birth. How, then, are the phenomena we describe different from the obligation to pay tax, or to go to prison on conviction for a crime?
Conclusions
We argue here that the Involuntary Citizen is being made subject to services and institutions not analogous to those which would previously have been integral to the Nation State, but to those which have been afforded by and emerged from the technology and logics of our era, regardless of whether there was measurable public demand for them.
Footnotes
1 Bratton, B. 2015. The Stack, p.120.
2 This paper will focus on the impact that western liberal ideology has on today’s conception of statism. For a comparative discussion about the impact of culture, see Goldstone, Jack A. Cultural Orthodoxy, Risk, and Innovation: The Divergence of East and West in the Early Modern World. Sociological Theory. 1987; 5(2): 119-135. doi: 10.2307/201934.
3 John Locke, Second Treatise of Government, in the version presented at www.earlymoderntexts.com, p.57.
4 John Locke, Second Treatise of Government, in the version presented at www.earlymoderntexts.com, p.56.
5 John Locke, Second Treatise of Government, in the version presented at www.earlymoderntexts.com, p. 63.
6 Gorodnichenko, Y., Pham, T., & Talavera, O. (2018). Social media, sentiment and public opinions: Evidence from #Brexit and #USElection, Working Papers, No 2018-01, Swansea University, School of Management.
7 The Law of Unintended Barbarity concludes that the unfortunate and unintended consequences will not be prevented. Hogan, A., Hogan, K., & Tilt, C. (2018) On the Cruelty of Computational Reasoning, EVA Copenhagen 2018 – Politics of the Machines – Art and After Aalborg University, Copenhagen, Denmark, 15 – 17 May 2018 DOI: http://dx.doi.org/10.14236/ewic/EVAC18.3.
8 Wachter, S. and Mittelstadt, B.D. (2018) “A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI”, Columbia Business Law Review.
9 Bratton, op cit. p.114
11 ibid.
12 See: https://www.grandviewresearch.com/press-release/global-telemedicine-industry
13 See: https://www.wired.co.uk/article/latest-fashion-trends-technology-2018
15 See: https://www.wired.co.uk/article/smart-materials-loomia-clothes-sensors
18 The new Internet of Medical Things (IoMT) can associate data from diverse mobile and wearable devices to produce a cohesive remote medical report accessible for individuals’ health care providers. It can also be used in aggregate for a new health care futures market, predicting trends in particular cultures and countries.
20 See: https://www.supremecourt.gov/opinions/10pdf/10-779.pdf
21 See: https://www.washingtonpost.com/outlook/suicide-prediction-tech
22 See: https://www.wired.com/2017/03/artificial-intelligence-learning-predict-prevent-suicide/
23 Walsh, Ribeiro and Franklin, 2017
24 See: https://www.wired.com/2017/03/artificial-intelligence-learning-predict-prevent-suicide/
26 See: https://www.objectivezero.org/
28 See: https://medium.com/futuresin/facebooks-suicide-algorithms-is-invasive-25e4ef33beb5
29 See: https://www.washingtonpost.com/outlook/suicide-prediction-tech
31 See: https://www.theatlantic.com/technology/archive/2016/10/information-fiduciary/502346/
32 See: https://www.grandviewresearch.com/industry-analysis/telemedicine-industry
33 See: https://www.wired.co.uk/article/emteq-vr-digital-phenotyping-charles-nduka
34 See: https://ewic.bcs.org/content/ConWebDoc/60257
35 Bratton, op cit.