
Law, Technology and Humans



Kadioglu Kumtepe, Cemre C; Riley, Stephen --- "Digital Dignity: the Shibboleth of Digitalization in Europe?" [2024] LawTechHum 24; (2024) 6(3) Law, Technology and Humans 156


Digital Dignity: The Shibboleth of Digitalization in Europe?

Cemre C. Kadioglu Kumtepe and Stephen Riley

University of Leicester, United Kingdom

Abstract

Keywords: Dignity; digital dignity; Europe; digitalization; public sphere; private sphere.

1. Introduction

Dignity is normative bedrock: the demand for humanity in our systems and in our practices. Dignity is also a ‘shibboleth’: an idea that carries great moral force without clear content.[1] We should expect dignity to help draw the outlines of a human, or humanised, relationship with technology in our digital era. We should also expect to see dignity equivocating – in a way that is both useful and unstable – between specific rights and a general demand for digitised lives to be structured by human values.[2]

We should care about the interaction of dignity and technology. Whatever language we choose to express it in, technology should be ‘humanised’, which we can take to mean an intuitive set of concerns about, inter alia, not subsuming human interests into social efficiency and not unduly negating human autonomy with limits on choice and creativity. Dignity expresses these concerns tolerably well while also admitting a specific kind of normative equivocation. On the one hand, it captures these ideas of interests and autonomy associated with humanity and humanisation. On the other hand, it implies a certain kind of primacy of virtue and self-control rather than justice and choice. This equivocation between justice and virtue broadly maps onto the political division between liberalism and communitarianism, meaning the functions of ‘dignity’ are shaped by, and therefore cannot be used to solve, major conflicts in social and political theory. But we can use dignity to insist that our core social and political debates be grounded in, or answerable to, a philosophical anthropology. That is, our social objectives, and perhaps even our theories of justice, must remain responsive to ongoing discourse about what humans are and how they understand themselves.

This article provides an analysis of the possible significance of dignity for understanding the digital era and digital rights (Section 2). Our focus is Europe, where there is a developed legal discourse of dignity as well as various attempts to marry that discourse of dignity with innovative regulatory projects. Those projects to tame, anticipate or stabilise our relationship with digitalisation and information technology certainly aspire to be principled projects. That is, European regulations are an attempt to reconcile innovation with ‘European values’, of which dignity is preeminent (Section 3). We argue that this reconciliation will take different forms and faces different challenges depending upon whether we are broadly concerned with the public (Section 4) or the private (Section 5). To be sure, dignity may have relatively limited justiciable impact on the governance of digital practices. It nevertheless offers a powerful normative concept for interrogating radically changing dimensions of our governance and our lived experience (Section 5).

2. Digital Dignity: Meanings and Scope

In this section, we sketch some important differences between the digital epoch, digital virtue, digital dignity, and digital rights. The function of this analysis is to explain why there is some instability in the meaning of dignity. But dignity also has stable core meanings that, with due care, allow us to use it as a powerful critical notion speaking to both ‘macro’ questions of legitimacy and ‘micro’ questions of lived experience.

The term ‘digital dignity’ is not widely used in the scholarly literature. Relevant scholarship refers to dignity in the ‘digital era’ or ‘digital world’. We might, following these suggestions, focus not on dignity as a normative concept concerning particular rights or status, but rather on our digital epoch, where rights and status have distinctive manifestations and face distinctive challenges. Thus, digital dignity could be used to denote a broad sociological field of inquiry analysing how dignity is affected by technologies such as smart cities, electric vehicles, the use of drones in wars, and so on. This epochal background, or sociological backdrop, is important and we will return to these broader social and technological trends.

This epochal concern with the digital era should be contrasted with Zhai and Sun’s virtue-based definition of ‘digital dignity’. They use the phrase to refer to dignity in online activities and digital interactions: digital dignity is ‘the respect, honor, and recognition individuals maintain while engaging in online activities and digital interactions. The difference between human dignity and digital dignity lies primarily in their context and scope.’[3] Note that this definition focuses on virtues – of self-governance and self-control – exercised in online or digital activities. That is, digital dignity denotes the virtues required to govern ourselves in digital spaces, which may permit or encourage behaviours that would be vicious or impermissible offline. Self-control and self-governance will be elements in our analysis of public and private actors (below). Nonetheless, these connections between dignity and virtue – while certainly customarily and intelligibly associated with ‘dignity’ – might be said to fail to capture a more fundamental sense of equal basic worth, and equal basic rights, associated with dignity.[4]

We can isolate and sharpen these more fundamental dignitarian concerns with equality and rights using Pablo Gilabert’s distinction between our condition dignity and our status dignity, noting that digital technologies interact with these differently.[5] Our condition dignity – essentially our basic well-being – can be enhanced by the technologies characteristic of our epoch. Innovations in medicine, and expansion in the means to maintain relationships with others, represent – all things being equal – positive contributions to our well-being and our condition dignity. At the same time, our sense of our own basic equal worth – our status dignity – might well be diminished by technologies or enhancements that allow more possibilities for agency and self-respect for some people than for others.[6] This contrast between condition dignity and status dignity will allow us to unpack some conceptual and regulatory debates that otherwise would be unhelpfully blurred. We will need, for instance, to distinguish how digital barriers and digital divides might deny us basic entitlements producing an impact on our condition dignity from how our data may be used by the state, without our explicit consent, to make judgements about our eligibility for those entitlements, thereby diminishing our status dignity.

Taken together, condition dignity and status dignity can be considered as producing distinctive rights, liberties, powers or privileges that accrue to all relevantly situated humans. These forms of dignity are, in other words, the basis or foundation of our human rights and other basic or constitutional rights. Condition dignity generates important negative rights against degradation and positive rights to subsistence and health. Status dignity requires, among other things, core civil and political rights as well as procedural rights. Consequently, it is relatively straightforward to generate, using appeal to status and condition dignity, groups of rights that speak directly to the interaction of the human and the digital or technological. What, then, are these digital rights flowing from dignity?

Roughly, four groups of rights arise from, or are especially applicable to, our digital lives.[7] First and most directly, there are basic negative rights against limitations to our freedoms. These could, for instance, be rights against having our access to the internet blocked. Second, there are positive rights to participate in, access or fully enjoy certain practices or possibilities. So, augmenting our right to access the internet, we may have a claim-right to the skills necessary to effectively capitalise on that access, a point illustrated by a resolution of the EU’s Parliamentary Assembly.[8] Third, we have immunities against having our rights, or powers, altered illegitimately. This might include not being subject to processes that involve automated decision-making without human judgement, or not being subject to the will of private actors lacking public legitimacy.[9] Finally, there may be powers and procedural rights which (more positively) guarantee our access to justice and access to decision-making and adjudicative practices. Such rights protect us from, or ensure the humanisation of, automated processes and insist on the generic, but demanding, protections associated with ‘natural justice’.[10] Some of these kinds of rights will be discussed in the following sections. In the remainder of this section, we briefly identify some of the cross-cutting moral and legal considerations raised by this cluster of rights.

In the first instance, many of these considerations could be reducible to equality. Under our digital rights sits a commitment to maintain equal freedoms, equal status and equal access to justice. This could be construed as a distributive consideration: we should ensure that everyone has the fullest access to the best range of technology available. It could also be construed as a recognition-based consideration: equality is not only about fairly distributing a resource, but also about attending to the ways in which we might fail to recognise certain groups as needing additional support to gain access to the benefits of technology, or to have their distinctive barriers to those benefits acknowledged. Consumers and less technologically literate individuals could, in different ways, need adaptation or adjustment to their digital encounters to ensure that they experience a ‘level playing field’. In sum, ‘digital dignity’ may be a way to draw together various cross-legal phenomena, justifying rights on the basis of new forms of discrimination or highlighting overlooked interstitial phenomena – that is, the ways in which online and offline identities and vulnerabilities mix and overlap.

Differently, at least a proportion of these rights concern legitimacy and authority. Digital rights are often participatory: rights to participate fully in the digital era, or freedoms to refrain from participating in digitalization. They may be procedural rights meant to ensure that everyone has final recourse to legitimate legal processes in their digital dealings. They may be immunities against having one’s online rights taken away in the offline world, or protections from the vulnerabilities caused by digitalization. Each of these considerations speaks to the practical limits of the regulatory reach of law. That is, we may have nominally equal entitlements online and offline, but technology and data have produced new limits to the justiciability of digital activities: limits of practicality in enforcement, limits in what it is considered socially desirable to police, and limits to legitimate public intrusion into the workings of private technology actors. Because of this, a fully dignitarian response to the digital epoch might be twofold.

On the one hand, we need to ensure that the traditional reach of legal institutions and legal accountability is extended to our online lives, thereby denying that the digital arena is an anomic space lacking traditional legal accountability.[11] On the other hand, these concerns could speak to the importance of democracy and accountability. That is, digital dignity is less a concern with traditional legal accountability and justiciability and more a question of the right of our political representatives to decide on the reach of the law and, conversely, not allowing law and authority to have their reach determined by a mixture of pragmatic responses to technology and less than fully accountable private actors.[12] Although it is now generally accepted that the digital arena is or should be within the reach of the law,[13] determining the scope of the law and allowing space for private actors to self-govern is still a grey area.

Finally, much of our ‘digital rights’ discourse turns on our experiences of dehumanisation by and through technology. Some manifestations are found in the greater temptation for individuals to defame or express hate speech. Other manifestations might include the ability of the internet to foster or fuel harmful practices around self-image or self-harm. These phenomena speak to an altered experience of self and other on the part of both victims and victimisers. And this in turn speaks to a distinctive need to adapt rights under public and private law (e.g. defamation, hate speech) to new contexts where agency, will and responsibility are subject to new influences. In a related vein, there are ‘cybernetic’ phenomena – for example, the ubiquity of technology to support individual decision-making – which blur boundaries between individual will and the limits imposed by a program.[14] This may not be dehumanisation in a directly harmful sense, but rather something closer to the erosion of distinctively human practices, and perhaps erosion of human equality. This speaks most clearly to our status dignity: one’s basic equality and basic status may now be called into question if others have better decision-making support through technology. Conversely, one’s own self-ownership or self-determination may be questioned if our actions are ascribable to technologies whose parameters and presuppositions are beyond our control. With these broad connections between the digital epoch, digital rights, and digital dignity sketched, we can turn to one existing regulatory framework.

3. Digital Dignity: The ‘European Way’

What can be said of Europe’s distinctive approach to the digital epoch, and how does Europe’s overlapping regulatory regimes create problems and possibilities for digital dignity?

With its ‘2030 Digital Compass’, the European Commission set out the ‘European Way’ for the digital decade.[15] Unsurprisingly, the communication does not define this European way but sketches its values, including solidarity and prosperity, and addresses various means of empowering individuals.[16] It treats digitalization as an enabler of rights and freedoms.[17] The European Union has similarly acknowledged that the ‘time has come for the European Union to spell out how its values and fundamental rights should be applied in the online world’.[18] In this sense, the European Union has recognized a digital epoch where rights and status have distinctive manifestations and face distinctive challenges.

Across Europe, dignity is used to conceptualize rights in the digital epoch. For instance, the Declaration of Internet Rights adopted by the Italian Parliament recognizes dignity as a fundamental pillar of internet rights and emphasizes dignity in several articles.[19] The European Declaration on Digital Rights and Principles for the Digital Decade (EDRPS) calls for a digital transformation that ‘puts people and their human rights at the centre throughout the world’.[20] In this vein, EU instruments concerning digitalization mention dignity as the main principle underlying other fundamental rights to be considered in digitalization. For instance, connections between dignity, the digital epoch and fundamental rights can be found in the GDPR,[21] Digital Services Act (DSA),[22] eIDAS[23] and AI Act.[24]

The DSA sets out notice-and-action rules for removing, or disabling access to, illegal items or information without hindering freedom of expression or information. These rules aim to protect fundamental rights, including human dignity.[25] The DSA also requires very large online platforms and search engines to assess the systemic risks in the European Union arising from the design or operation of their services and systems. These risks include any negative impact on human dignity, as well as risks relating to gender-based violence, the protection of minors and persons’ physical and mental well-being.[26]

The AI Act was drafted in light of the ethics guidelines developed by the High-Level Expert Group on Artificial Intelligence, an independent expert group appointed by the European Commission. Among other things, the guidelines emphasise the importance of human agency and oversight, requiring tools that respect human dignity and personal autonomy and that can be overseen by humans.[27] The Act purposefully prohibits practices that may contradict EU values, including human dignity, as enshrined in the Charter of Fundamental Rights of the European Union.[28] Following the Act’s risk-based approach, practices that risk violating fundamental rights, including human dignity, are categorised as high risk.[29]

The meaning of these dignitarian provisions will be shaped by pre-existing discourses of dignity in the European Union.[30] Recalling the cross-cutting categories outlined above, these existing discourses of dignity speak to the importance of equality, the political bases of legitimacy and the dangers of dehumanisation. In what follows we give some examples of these categories in use.

First, the European Union’s attempt to use dignity to combat dehumanising practices is expressed by the European Data Protection Supervisor. Dignity is violated by objectification, ‘where a person is treated as a tool serving someone else’s purposes’.[31] This is discussed especially within the scope of data protection. Due to increased digitalization and the preponderance of commercial, administrative, and social activities moving online, there is limited opportunity to ‘opt out’. This requires disclosing more personal information and being present online, demands that challenge ‘the notion of free and informed consent’.[32] The personal data attached to an individual is treated as a commodity for the use of others, and consent to some uses opens the door to individuals being targeted for advertisements or tailored content.[33] The commodification of personal data, and perceiving data owners as consumers or users, raises the question of just how humane our general digital environment can claim to be, regardless of whether or not consent provides a voluntary threshold through which individuals signal compliance.[34] In line with this assertion, and underpinning the importance of voluntariness, we observe that the AI Act prohibits practices that may impair a person’s ability to make informed decisions by deploying subliminal, purposefully manipulative or deceptive techniques.[35]

Second, these questions about consent connect with other, equally complex dignitarian questions concerning legitimacy. For all its invocations of dignity, EU discourse can be criticised for making values secondary to commercial and regulatory pragmatism. For instance, the GDPR has been criticized for acting ‘more like a market regulation than treating privacy as a fundamental right’.[36] Leading the way in the regulation of digitalization, the European Union has been criticized for exporting its values to other countries, creating the ‘Brussels effect’[37] – that is, the ability to unilaterally create, and export, regulatory regimes that serve European interests and may (or may not) represent the export of ‘European’ values. As the EDRPS provides:

The Union should promote the Declaration in its relations with other international organisations and third countries with the ambition that the principles serve as an inspiration for international partners to guide a digital transformation ...[38]

While the European Union’s claim to democratic legitimacy may be stronger than those of other transnational organisations, it remains an important dignitarian question whether there is or could be a single principled regulatory approach to digital transactions that is permissibly exported or imposed on others. It is not to be taken for granted that Europe’s values are legitimately universalisable, regardless of their manifest connection with dignity. At the same time, it is tempting to conclude much more pragmatically that the Brussels effect should be accepted or tolerated on the basis (first) that any rules are better than no rules and (second) that the EU’s regulations should be embraced as having at least some recourse to principles like dignity. These perspectives should be handled with caution, as dignity and other European values can be used as a façade for the purposes of market regulation.

Finally, in the form of anti-discrimination measures, equality is hardwired into the constitutional fabric of the European Union and we need not question the Union’s commitment to challenging digital divides.[39] It is perhaps in a parallel body, the European Court of Human Rights (ECtHR, the Court), that we find the most extensive set of meeting-points between equality, digital rights and dignity.[40] Data collection, use and storage questions fall within the various concerns captured in Article 8 (the right to private and family life), but also speak to other rights to assembly, religion and expression, among others.[41] The Court has been central in pushing back against expansive or unnecessary data-collection projects by the state, typically in the health and criminal justice fields. The Court stresses both the principle that disproportionate collection will always infringe Article 8 and the pragmatic consideration that even legitimate collection of data could be appropriated and repurposed for illegitimate ends.[42] For present purposes, it is enough to note that the discourse of the ECtHR is inevitably conjoined with concerns for equality and non-discrimination on the part of the state. It is also important to note – and something that will be returned to later in our discussion – that equality under human rights law encompasses a form of ‘civic’ presumption of innocence, which militates against the pre-emptive, and not just disproportionate, use of data and surveillance to monitor populations and manage risk.

With these distinctions in mind, we turn to some concerns and phenomena that broadly concern public spheres and public actors (Section 4) and those that concern private spheres and private actors (Section 5).

4. The Public

We understand the public in opposition to the private: the shared and open space of politics, the rejection of private interest and the partial subsumption of private concerns into a common good. Even without these dialectical relationships with the private, ‘public’ has a number of functions and possible meanings. In what follows, it will be used primarily to denote public actors (especially governmental actors) and public spaces. (Other uses will be in play, but these are much more likely to be contested and diachronic – for instance, the meaning of ‘public interest’ or ‘public concern’.) Two concerns characterise an encounter between digital dignity and the public: first, public responsibilities – the roles of public actors, and the legitimacy or illegitimacy of public actors’ roles being discharged by private actors; and second, public spaces – the digital (especially the internet and social media) as a public space or quasi-public space that can serve as a substitute or proxy for ‘the public’ in a political sense, a place of liberty, assembly and expression. First, we consider public responsibilities from the most basic (prevention of harm) to the more expansive (maintaining and promoting rights), then we consider the extent to which these responsibilities can legitimately be privatised. Second, we treat the digital as an extension of public spaces and consider what this has meant, and could mean, for regulation in Europe.

At the most basic level, public actors have a responsibility to prevent harmful or dehumanizing activities by both public and private actors, including activities that use technology to harm or exploit. For instance, Article 5 of the AI Act lists practices prohibited by means of ‘placing on the market, the putting into service or the use of an AI system’. These practices may have the objective or the effect of causing, or being reasonably likely to cause, significant harm to a person or group of persons. The prohibited practices include deploying subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, in order to materially distort the behaviour of a person or a group of persons by impairing their ability to make an informed decision. They also include exploiting the vulnerabilities of a natural person or a specific group of persons due to their age, disability, or social or economic situation in order to materially distort the behaviour of a person or persons in that group; using systems that categorise individuals on the basis of their biometric data to infer their race, political opinions, trade union membership, beliefs, sex life or sexual orientation; and using ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless strictly necessary for the objectives listed under the Act.[43] The Act also prohibits risk assessments that predict the risk of a person committing a crime by profiling their personality and characteristics.[44]

Criminal justice measures, extended in this way, can include deploying data and technology to investigate or prevent crime. And crime prevention inevitably raises questions about the proper limits of policing, including – as discussed below – the scope of the presumption of innocence. There is a well-documented tendency for public authorities to progressively limit the openness, anonymity and liberty of public spaces in favour of digital technologies that make otherwise anonymous individuals known to the state. This ‘regulatory creep’, or ‘governance through criminalisation’, also raises concerns about private actors such as private security firms who, either with state permission or in the absence of contrary regulations, have assisted in the progressive privatisation of public spaces and the over-governance of public spaces.[45] In essence dignity, and human rights more generally, will continue to be invoked where the state’s legitimate harm-prevention activities have the danger of drifting into politicised risk-anticipation and over-governance.[46]

The usefulness and impact of such regulations will depend upon the concept and standard of harm employed in judgment. Nonetheless, this represents a natural extension of the state’s harm-preventing responsibilities over private actors, as well as the state’s self-limitation, so that its own powers are not open to abuse or over-extension.

What of the more expansive and positive responsibility of public authorities to respect and promote rights? Whether and how dignity necessarily implies a range of liberties (rather than rights or duties) remains a contested question. But there is no doubt that access to the internet remains something close to a basic human right, albeit one that may have to be ‘progressively realised’ over time.[47] This creates a kindred or converse problem: the progressive extension of e-government makes more services accessible online, while at the same time limiting their accessibility to those already empowered to access them and marginalising those who face distinctive barriers (such as poverty or disability) to accessing online services. E-government increases accessibility, but it may also deepen the digital divide[48] and produce higher levels of surveillance.[49] Moreover, even public efforts to enhance participation and equality may be experienced as interference in an open public good. Lawful users or participants can feel the normativity of administrative decisions more than infringers, who are more likely to be tech-literate and know how to bypass the system.[50] Suffice to say, the extension of public actors and public roles into the digital sphere must be accompanied by positive rights that empower individuals to enjoy them. How these rights should be put in place is addressed below.

Of these negative and positive responsibilities characteristic of public actors, how many can or should be discharged by private actors? Most obviously, if the responsibility of public actors is to maintain liberties, then privatisation of their functions is problematic even if it does not lead directly to dehumanising or even harmful outcomes. The idea of ‘essentially public functions’[51] to be discharged only by public actors – for example, legal judgment or policing the public sphere – is eroded in favour of semi-privatised processes that are more efficient but lack full legitimacy. Moreover, the normal institutions, practices and implications of representative democracy are potentially bypassed where processes are permitted to exist outside of state control and which either shut down further recourse to the law or shape the online ‘public’ sphere in such a way that private actors themselves determine what counts as permissible or impermissible ‘in public’. This ‘agencification’[52] in the digitalisation of public administration, or the attribution of regulatory powers to agencies, is predicted to worsen the negative impacts on the openness, efficiency and independence of the EU administration.[53] Of course, moves towards semi-privatisation or self-regulation have strong pragmatic drivers. A lack of borders or established rules for sovereignty makes it difficult to regulate cyberspace, detect criminality or enforce the rules; increased transactions and engagement among individuals and businesses across borders make it difficult to regulate and adjudicate those activities. Moving towards semi-privatisation and self-regulation therefore becomes a matter of ‘making a virtue of necessity’: unavoidable and rationalizable as necessary for innovation and free markets.
The concerns notwithstanding, dignity should be associated with arguments for essentially public goods, goods that cannot be discharged by private actors without fundamentally altering the nature of the service or practice being offered.[54] Accountability is altered, and dignity diminished, when the only meaningful recourse for the weaker party is mediation. ‘Instrumental’ thinking would evaluate these developments purely in terms of their outcomes and their efficiency. It is the merit of dignitarian thinking to ask about intrinsic values, regardless of efficacy, and stress that some values can only exist where public actors take their administration out of private hands and isolate them from private interest.[55]

In a slight shift of perspective, it is important to think about the internet as a public sphere or agora, where political encounters between equal citizens take place. Many of the initially exaggerated claims for the internet as purely democratic and a perfect ‘marketplace of ideas’ have met with a more complex reality. Online, as offline, it remains important to treat the internet as a public space characterized by liberties and the contestation of reasonableness.[56] That is, the role of the state here is nuanced and complex: the state has a responsibility both to allow liberty of assembly and expression, avoiding intervention, and to be the principal determinant of what counts as reasonable, intervening to condition public discourse so as to maintain its reasonableness.[57]

This can be put another way. On the one hand, a complete absence of state concern and state regulation would be a failure of the state to discharge its basic functions. Note that status and condition dignity, with their implications of not subjecting others to humiliating or exploitative treatment, are clearly engaged by manipulation, fake news, fake identities and algorithmic bias. On the other hand, a willingness to act on – to actively repress – individuals by using coercive force against undesirable speech is both inimical to democracy and a direct attack on dignity, to the extent that dignity implies self-governance and freedom of thought and conscience. Clearly, there are connections with the previous discussion of public and private authority. To allow powerful private individuals to determine the parameters of free expression is not only to expose us to the arbitrary, but to shape a crucial public forum with private values. In essence, dignity does not provide a simple solution to the Gordian knot of public authorities being needed to facilitate a reasonable marketplace of ideas while being required to avoid taking expansive or excessive power to determine the shape of public discourse. Dignity speaks to the importance of both.

To summarise, public authority and public power have to be exercised to protect status dignity (equality, civil liberties) and condition dignity (protection of the individual, basic entitlements). The European ‘ideology’ is a broadly liberal one of defending rights, tying legitimacy to democratic accountability and facilitating open public spaces. It would be useful, if outside the immediate scope of this article, to speculate on whether the more libertarian, as opposed to European liberal, characteristics of North America would yield a different – perhaps more constraining and more minimal – conception of the scope of legitimate public authority. Be that as it may, the normative concerns will be similar, namely accountability and the legitimate limits of privatisation, as will the tensions around maintaining a ‘reasonable’ space of public discourse, especially where our online public fora encourage a new approach to – indeed, perhaps an erosion of – what we see as reasonable limits to public discourse.

5. The Private

The public and private are co-constituted. We must always understand the one via how it excludes the other. Public responsibility excludes private interest; the private sphere excludes public scrutiny. Accordingly, the above concerns with the public intersect conceptually and practically with a range of private concerns with self-determination, self-expression, choice and respect. Nevertheless, there are distinctive dignitarian and digital issues that speak to the rights and privileges of the private citizen, the privacy that we have a right to expect and private data. We here consider some aspects of the digital, privacy and lived experience. These are, first, the diffusion of the digital into the everyday, and with it novel ‘intrusions’ into the private sphere and anonymity, and second, the body in the digital epoch.[58]

Digitalisation has become an inherent part of everyday activities. The capture of our attention by phones, and the inescapability of our public roles and responsibilities bleeding into our private lives, raise questions of our becoming mere means to ends if our private lives are not insulated from our public roles. Other phenomena that are ambiguous from the point of view of dignity include the obvious power imbalances created in ostensibly equal and benign online interactions: imbalances between consumers and sellers, the vulnerability of minors, varying degrees of digital literacy and so on. In essence, the private sphere cannot be treated as a closed space, a space immunised from other private interests and public intrusions. It is now a porous sphere where public and private interests and information commingle. This is not a ‘violation’ of dignity in terms of direct or egregious dehumanisation, but it does represent a change in the ways we understand ourselves and can form preferences, reasons and interests independently of the will of others:

The diminution of intimate space available to people, as a result of unavoidable surveillance by companies and governments, has a chilling effect on people’s ability and willingness to express themselves and form relationships freely, including in the civic sphere so essential to the health of democracy.[59]

The expansion of pseudonymity and anonymity allows individuals to hide behind user names and avatars. These possibilities, in turn, make possible new threats to our identity and reputation. While, on the one hand, we have a right to be forgotten so that the internet is no longer a repository of images and assertions that we consider private but that have become public, it is also possible to share – or be the victim of sharing – images that are pre-eminently private. These possibilities create ongoing problems in making a new experience of the private manifest in law: ‘the existence of a digital person and digital life presupposes an expansion of the legal coordinates of the person and reveals the need to develop more comprehensive mechanisms for protection of the person in the digital world’.[60]

The essentially private – the body – produces complex points of interaction between dignity and the digital. At one extreme, the digital has produced new possibilities for violating the bodily privacy and integrity of the living (e.g. ‘revenge porn’) and of the dead (e.g. the police sharing images of murder victims).[61] The potentially harmful influence of social media on self-understanding and body self-image has already been alluded to. That these phenomena are mediated by the digital, or at one remove from the embodied individual, makes them no less important, morally or legally:

Human existence is no longer limited only to physical attributes, it also includes digital representations of these attributes ... The biological body does not disappear, but feelings, consciousness, action and will pass into the digital world. The life of a modern person is now developing in two different dimensions, which are interconnected as two sides of the same coin: they cannot be separated. This implies that it is impossible to legally protect one dimension and ignore the other one.[62]

These considerations of the interaction of the digital and the body have been taken up by law-makers across Europe under the influence of wide public concern.[63] The relatively limited contribution that dignity can potentially make to the articulation of regulatory responses to these trends[64] should not obscure the very real contribution it can make to articulating the problems.

If digital dignity is to help articulate and resolve these problems, we must also consider how dignitarian values or approaches can be embedded in code by private actors. Digitalisation depends on innovation that is largely driven by private actors. Through their design choices, private actors create code that essentially shapes the behaviour of users, who are either directed towards or restricted from certain actions. Although it does not immediately follow that private code necessarily ‘subverts’ law, such spilling of the private into the public raises questions of legitimacy.[65] Digital dignity should be taken as a fundamental value to be observed ex ante when designing code;[66] otherwise the law’s role is reduced to nothing more than policing minimum requirements to protect private individuals from the worst forms of dehumanisation.[67]

6. Discussion

The commitment to dignity already variously manifested in the European Social Charter, human rights charters and treaties, constitutional instruments across Europe and the case law flowing from these instruments provides a strong starting point for assessing the meaning of digital rights. Yet these instruments are also a potentially limited framework, and their limitations are instructive.

The modern context in which a commitment to dignity is meant to be operationalised is one transformed by digital processes – that is, an ensemble of tools and technologies that disrupt boundaries and in which a transformation of the division between public and private has been wrought. The public and private spheres interpenetrate in ways never before realised or permitted. And the roles and powers of private and public actors have mutated in crucial ways, not least the more systematic granting of expansive (self-)regulatory powers to private actors, and the responsibility of public actors to manage risk through monitoring, storage and processing of data. In essence, if earlier interpretations of dignity could rely upon stable boundaries between public and private action, and stable boundaries between private interest and public legitimacy, these boundaries are permanently disrupted by the digital epoch.

The legacy of dignitarian discourse is to approach these new phenomena with particular sensitivity to the ways in which humans can be instrumentalised – become mere means to others’ ends – and to grant especial protection to the essentially private (i.e. to the inviolability of the body) and the essentially public (i.e. the right of the state to impose authoritative judgments in disputes). If these are the paradigmatic (European, constitutional) functions of dignity, there have always been less determinate, or more contested, spheres within which dignitarian claims are intelligible but more easily defeated by competing norms. There are those cases where we are means to an end but not clearly mere means to ends: forms of exploitation under the influence of commercial or algorithmic technological forces, or insecure labour practices, would be on the cusp between voluntary action and impermissible instrumentality. At the same time, the rights of the individual vis-à-vis public spheres and public authorities are ambiguously dignitarian. The right to free expression is an aspect of dignity, but perhaps less paradigmatically dignitarian than the right to privacy.[68] And the legitimacy and reasonableness of public actors speak to our basic rights, equality and expectation of reasoned governance. But again, while salient, dignity is perhaps not as axiologically central as justice or equality.

In short, the history of the interaction of dignity and human rights law has been one of judicial efforts to narrow and operationalise the concept of dignity, principally in the area of egregious harm. In some instances, dignity usefully bridges different rights or explains how they can be reconciled or mutually qualified. Moreover, dignity allows cross-field, cross-systemic and cross-jurisdictional dialogue. But, if it represents a lingua franca permitting principled unity in decision-making, it is also constantly at risk of being an anodyne commitment to rights, justice and the rule of law generally, or failing to express the core moral phenomenon on which a decision is based.[69]

Accordingly, ‘digital dignity’ seeks to inhabit a discursive area where some important, attractive, underlying conceptual resources are already problematic (the public–private divide, the responsibility of public actors as distinctively dignitarian responsibilities) and where the dignity discourse is arguably better realised through specific human rights (expression, privacy, liberty) than through a master-discourse of dignity and dignitarian rights better suited to direct and acute wrongs and harms. ‘Nudging’ is a good example of where dignity helps to raise difficult normative questions but less clearly provides answers. Technology – commercial and social media platforms – bypasses conscious decision-making.[70] These are potentially benign, labour-saving and legitimate features of digital interaction. Dignity encourages us to ask, at the very least, whether these are always benign and legitimate, and at what point they should be seen as deliberate challenges to our conscious agency.

In other ways, dignity helpfully opens up crucial normative questions that might otherwise be reduced too quickly to positive legal questions – that is, questions of the interpretation and scope of our existing rights. Of these, arguably the most important is that of accountability, legitimacy and authority. Dignity interrogates the dilution of accountability and legitimacy where private actors claim final, or authoritative, determination of our rights, in some instances through processes that are themselves devoid of human judgement or appeal to human oversight. Specifically, dignity allows us to articulate the wrongs occasioned by, first, forfeiting our right to a decision by another human and, second, forfeiting our recourse to the state as the judge of entitlements. There are essentially public goods that are only possible via the state and state institutions (punishment, welfare, authoritative dispute-resolution), and dignity involves an insistence that these cannot be replaced by informal practices or the contractual promises of private actors. It will be noted that this does return us to the (unstable) public–private distinction in terms of public legitimacy and private actors. We maintain, however, that this better captures the practical and sociological changes following the emergence of digital culture and the distinctive challenges to dignity produced by it.

A second under-explored, but overarching, concern lies in the ‘civic presumption of innocence’.[71] This includes a right not to have our innocence called into question by persistent, invasive or excessive collection and retention of data and images.[72] The scope and meaning of this presumption are contestable, but it must also be thought to include the digitalisation of processes that would call into question the reliability or trustworthiness of individuals or collectives without due care and attention, and without grounds for human assessment and intervention. The identification of an individual as untrustworthy on the basis of anonymous voting processes, or the diminution of creditworthiness on purely automated bases, seems to undermine a basic right to be treated as equal: equally entitled to have our status determined by fair and open processes; equally entitled to have our name and reputation determined by human, and evidentially legitimate, judgments. Some of these concerns are captured by national, or transnational, regulations. But the civic presumption of innocence is an important general category of normative concern, and one that captures many of our anxieties around digitalisation and our status dignity: the fear that our ‘innocence’ and ‘guilt’ are now increasingly digitised categories.

7. Conclusion

Dignity takes us to normative bedrock, or at least as close to such bedrock as is shared among the heterogeneous states of Europe. Appeal to dignity can involve a temptation to treat the fundamental as the minimal – that is, to assume that our most basic, shared values must be nothing more or less than a rejection of egregious abuses and gross inequalities. Dignity does allow us to articulate where these injustices could be found or produced in a digital epoch. But allowing dignity to have its fullest range of positive meanings concerning rights, status and conditions thereby allows us to interrogate our lived experiences and the structures of accountability, phenomena that are essential to understanding the interweaving of rights, status and our digital lives.

The efforts among European states to put these concerns on a regulatory footing also betray a tendency to treat dignity as nothing more than (although, of course, nothing less than) a commitment to avoiding gross injustice and inequality. The regulatory landscape that we encounter includes powerful responses to, and anticipations of, the far-reaching impacts of technology. But we also see a certain kind of regulatory inertia that would fold digitalisation, and digital dignity, within the regulatory – and significantly commercial – ambitions of European states. Dignity cannot apply only to the extent that it does not interfere with commercial and market freedoms or does not limit the regulatory freedom of states. Dignity conditions these freedoms, or it is not a recognisable discourse of dignity at all.[73]

Bibliography

Al-Rodhan, Nayef. “Artificial Intelligence: Implications for Human Dignity and Governance.” Oxford Political Review, March 27, 2021. https://oxfordpoliticalreview.com/2021/03/27/artificial-intelligence.

Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity Press, 2019.

Bomprezzi, Chantal. Implications of Blockchain-Based Smart Contracts on Contract Law. Glashütte: Nomos, 2021.

Bostrom, Nick. “Dignity and Enhancement.” Contemporary Readings in Law and Social Justice 1, no 2 (2009): 84–115.

Brattberg, Erik, Venesa Rugova and Raluca Csernatoni. Europe and AI: Leading, Lagging Behind, or Carving Its Own Way? Washington, DC: Carnegie Endowment for International Peace, 2020.

Burrell, Jenna and Marion Fourcade. “The Society of Algorithms.” Annual Review of Sociology 47, no 1 (2021): 213–237. https://doi.org/10.1146/annurev-soc-090820-020800.

Carbo, Toni. “Information Rights: Trust and Human Dignity in e-Government.” International Review of Information Ethics 7, no 9 (2007): 168–174. https://doi.org/10.29173/irie18.

Chamon, Merijn. “Setting the Scene: EU Agencies, Agencification, and the EU Administration.” In EU Agencies: Legal and Political Limits to the Transformation of the EU Administration. Oxford: Oxford University Press, 2016.

Çınar, Özgür Heval. “The Current Case Law of the European Court of Human Rights on Privacy: Challenges in the Digital Age.” The International Journal of Human Rights 25, no 1 (2021): 26–51. https://doi.org/10.1080/13642987.2020.1747443.

Constantaras, Eva, et al. “‘Inside the Suspicion Machine’: Discriminating on the Basis of Race and Gender.” Wired, March 6, 2023. https://www.wired.com/story/welfare-state-algorithms.

Couldry, Nick and Ulises A. Mejias. “The Costs of Connection: How Data are Colonizing Human Life and Appropriating it for Capitalism.” Social Forces 99, no 1 (2020): e6. https://doi.org/10.1093/sf/soz172.

Diver, Laurence E. Digisprudence: Code as Law Rebooted. Edinburgh: Edinburgh University Press, 2022.

Dorfman, Avihay and Alon Harel. Reclaiming the Public. Cambridge: Cambridge University Press, 2024.

Duff, Antony. “Who Must Presume Whom to Be Innocent of What?” Netherlands Journal of Legal Philosophy 42 (2013): 170–192. https://doi.org/10.5553/NJLP/221307132013042003002.

Dupré, Catherine. The Age of Dignity: Human Rights and Constitutionalism in Europe. London: Bloomsbury, 2016.

Düwell, Marcus. “Human Dignity and the Ethics and Regulation of Technology.” In The Oxford Handbook of Law, Regulation and Technology, edited by Roger Brownsword, Eloise Scotford and Karen Yeung, 177–196. Oxford: Oxford University Press, 2017.

Ess, Charles M. “Choose now!” In The Routledge Handbook of Language and Digital Communication, edited by Alexandra Georgakopoulou and Tereza Spilioti, 412–416. London: Routledge, 2015.

Finck, Michèle. “Blockchains: Regulating the Unknown.” German Law Journal 19, no 4 (2018): 665–692. https://doi.org/10.1017/S2071832200022847.

Galetta, Diana-Urania and Herwig CH Hofmann. “Evolving AI-based Automation: Continuing Relevance of Good Administration.” European Law Review 48 (2023): 617–635.

Gilabert, Pablo. Human Dignity and Human Rights. Oxford: Oxford University Press, 2019.

Gutwirth, Serge. “Beyond Identity?” IDIS 1, no 1 (2009): 123–133.

Habermas, Jürgen. The Future of Human Nature. Chichester: John Wiley & Sons, 2014.

Hannon, Michael. “Public Discourse and Its Problems.” Politics, Philosophy & Economics 22, no 3 (2023): 336–356. https://doi.org/10.1177/1470594X221100578.

Hildebrandt, Mireille. “Profiling and the Identity of the European Citizen.” In Profiling the European Citizen: Cross-disciplinary Perspectives, edited by Mireille Hildebrandt and Serge Gutwirth, 303–343. Dordrecht: Springer, 2008.

Kantola, Johanna, and Kevät Nousiainen. “The European Union: Initiator of a New European Anti-discrimination Regime?” Institutionalizing Intersectionality: The Changing Nature of European Equality Regimes (2012): 33–58.

Lupton, Deborah. “Digital Bodies.” In Routledge Handbook of Physical Cultural Studies, edited by Michael Silk, David Andrews and Holly Thorpe, 200–208. London: Routledge, 2017.

Macklin, Ruth. “Dignity is a Useless Concept.” BMJ 327, no 7429 (2003): 1419–1420. https://doi.org/10.1136/bmj.327.7429.1419.

Moore, Margaret. “On Reasonableness.” Journal of Applied Philosophy 13, no 2 (1996): 167–178.

Pele, Antonio and Caitlin Mulholland. “On Facial Recognition, Regulation, and Data Necropolitics.” Indiana Journal of Global Legal Studies 30 (2023): 173.

Peter, Fabienne. “Political Legitimacy.” In The Stanford Encyclopedia of Philosophy (Winter 2023 Edition), edited by Edward N. Zalta and Uri Nodelman. Stanford, CA: Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2023/entries/legitimacy.

Psychogiopoulou, Evangelia. “The European Court of Human Rights, Privacy and Data Protection in the Digital Era.” In Courts, Privacy and Data Protection in the Digital Environment, edited by Maja Brkan and Evangelia Psychogiopoulou, 32–62. Cheltenham: Edward Elgar, 2017.

Rachovitsa, Adamantia and Niclas Johann. “The Human Rights Implications of the Use of AI in the Digital Welfare State: Lessons Learned from the Dutch SyRI Case.” Human Rights Law Review 22, no 2 (2022): 1–15. https://doi.org/10.1093/hrlr/ngac010.

Reglitz, Merten. “The Human Right to Free Internet Access.” Journal of Applied Philosophy 37, no 2 (2020): 314–331. https://doi.org/10.1111/japp.12395.

Riley, Stephen. Human Dignity and Law: Legal and Philosophical Investigations. London: Routledge, 2017.

Riley, Stephen. “What is Orientation in Dignitarian Thinking? Self, Other, Time and Space.” Law, Culture and the Humanities 20, no 1 (2024): 41–65. https://doi.org/10.1177/1743872120982287.

Rouvroy, Antoinette and Yves Poullet. “The Right to Informational Self-Determination and the Value of Self-Development: Reassessing the Importance of Privacy for Democracy.” In Reinventing Data Protection?, edited by Serge Gutwirth, Yves Poullet, Paul Hert, Cécile Terwangne and Sjaak Nouwt, 45–76. Dordrecht: Springer, 2009.

Sandel, Michael J. What Money Can’t Buy: The Moral Limits of Markets. London: Allen Lane, 2012.

Schopenhauer, Arthur. On the Basis of Morality. New York: Bobbs-Merrill, 1965.

Shandler, Ryan and Daphna Canetti. “A Reality of Vulnerability and Dependence: Internet Access as a Human Right.” Israel Law Review 52, no 1 (2019): 77–98. https://doi.org/10.1017/S0021223718000262.

Shany, Yuval. “Digital Rights and the Outer Limits of International Human Rights Law.” German Law Journal 24, no 3 (2023): 461–472. https://doi.org/10.1017/glj.2023.35.

Shibboleth Consortium. “About.” https://www.shibboleth.net/about-us/the-shibboleth-project.

Riley, Stephen. “Private Security: Twin Indignities.” Critical Legal Thinking, November 13, 2017. https://criticallegalthinking.com/2017/11/13/private-security-twin-indignities.

Stevens, Marthe, Steven R. Kraaijeveld and Tamar Sharon. “Sphere Transgressions: Reflecting on the Risks of Big Tech Expansionism.” Information, Communication & Society (2024): 1–13. https://doi.org/10.1080/1369118X.2024.2353782.

Taylor, Linnet. “Can AI Governance Be Progressive? Group Interest, Group Privacy and Abnormal Justice.” In Handbook on the Politics and Governance of Big Data and Artificial Intelligence, edited by Andrej Zwitter and Oskar J. Gstrein, 19–40. Cheltenham: Edward Elgar, 2022.

Teo, Sue Anne. “Human Dignity and AI: Mapping the Contours and Utility of Human Dignity in Addressing Challenges Presented by AI.” Law, Innovation and Technology 15, no 1 (2023): 241–279. https://doi.org/10.1080/17579961.2023.2184132.

Vardanyan, Lusine, Václav Stehlík and Hovsep Kocharyan. “Digital Integrity: A Foundation for Digital Rights and the New Manifestation of Human Dignity.” TalTech Journal of European Studies 12, no 1 (2022): 159–185. https://doi.org/10.2478/bjes-2022-0008.

Van der Sloot, Bart. “Decisional Privacy 2.0: The Procedural Requirements Implicit in Article 8 ECHR and Its Potential Impact on Profiling.” International Data Privacy Law 7, no 3 (2017): 190–201. https://doi.org/10.1093/idpl/ipx011.

Vasak, Karel. “The European Convention of Human Rights Beyond the Frontiers of Europe.” International & Comparative Law Quarterly 12, no 4 (1963): 1206–1231. https://doi.org/10.1093/iclqaj/12.4.1206.

von Denlinger, Tyler. “Protecting Personal Dignity: Advocating for a Federal Right of Publicity Against Pornographic Deepfakes.” Chapman Law Review 27, no 1 (2023): 247.

Whitman, James Q. “The Two Western Cultures of Privacy: Dignity versus Liberty.” Yale Law Journal 113 (2003): 1151.

Yarmel, Aaron, and Jonathan Lang. “The Ethics of Customizable AI-generated Pornography.” Midwest Ethics Symposium: Artificial Intelligence (2024). https://scholarship.depauw.edu/midwest_ethics/2024/2024/22.

Zhai, Sophie and Heshan Sun. “On the Influence of Online Conversations: A Human Dignity Perspective.” PACIS 2023 Proceedings. https://aisel.aisnet.org/pacis2023/144.

Cases

Arvelo Aponte v. the Netherlands, ECtHR Case no. 28770/05, 3 November 2011.

MM v United Kingdom, ECtHR Case no. 24029/07, 13 November 2012.

Obergefell v. Hodges 576 U.S. 644 (2015).

Pretty v United Kingdom, ECtHR Case no. 2346/02, 29 April 2002.

S and Marper v United Kingdom, ECtHR Case nos. 30562/04 and 30566/04, 4 December 2008.

Tapia Gasca et D. v. Spain, ECtHR Case no. 20272/06, 22 December 2009.

Legislation and Regulations

European Parliament and the Council, Regulation (EU) No 524/2013 of 21 May 2013 on Online Dispute Resolution for Consumer Disputes and Amending Regulation (EC) No 2006/2004 and Directive 2009/22/EC (Regulation on consumer ODR) [2013] OJL 165.

European Parliament and Council, Regulation (EU) 2016/679 of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJL 119.

European Parliament and Council, Regulation (EU) 2022/2065 of 19 October 2022 on a Single Market for Digital Services and Amending Directive 2000/31/EC (Digital Services Act) [2022] OJL 277, 1.

European Parliament and the Council, Regulation (EU) No 910/2014 of 23 July 2014 on Electronic Identification and Trust Services for Electronic Transactions in the Internal Market and Repealing Directive 1999/93/EC [2014] OJL 253, 72.

European Parliament and the Council, Regulation (EU) 2024/1689 of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) [2024] OJL 2024/1689.

Official Publications and Reports

European Commission, ‘2030 Digital Compass: The European Way for the Digital Decade’ COM (2021) 118 final.

European Commission. “European Declaration on Digital Rights and Principles for the Digital Decade” (EDRPS) COM (2022) 28 final.

EDPB-EDPS Joint Opinion 5/2021, 18 June 2021. https://www.edpb.europa.eu/system/files/2021-06/edpb-edps_joint_opinion_ai_regulation_en.pdf.

Italy Chamber of Deputies, Committee on Internet Rights and Duties. “Declaration of Internet Rights.” https://www.camera.it/application/xmanager/projects/leg17/commissione_internet/testo_definitivo_inglese.pdf

European Court of Human Rights. “Factsheet: Personal Data Protection.” (November 2023) https://www.echr.coe.int/documents/d/echr/fs_data_eng.

European Data Protection Supervisor (EDPS). “Communication to European Asylum Support Office: Formal Consultation on EASO’s Social Media Monitoring Reports.” (Case 2018-1083). https://edps.europa.eu/sites/edp/files/publication/19-11-12_reply_easo_ssm_final_reply_en.pdf.

European Data Protection Supervisor. “Towards a New Digital Ethics: Data, Dignity and Technology.” Opinion 4/2015 (11 September 2015).

European Parliament. “Digitalisation and Administrative Law: European Added Value Assessment.” https://www.europarl.europa.eu/RegData/etudes/STUD/2022/730350/EPRS_STU(2022)730350_EN.pdf.

Independent Office for Police Conduct [UK]. “Inappropriate photographs taken at crime scene – Metropolitan Police Service, June 2020.” (July 2022). https://www.policeconduct.gov.uk/our-work/learning/inappropriate-photographs-taken-crime-scene-metropolitan-police-service-june-2020.

Parliamentary Assembly, ‘The Right to Internet Access’ Resolution 1987 (2014) Final version. https://assembly.coe.int/nw/xml/XRef/Xref-XML2HTML-en.asp?fileid=20870&lang=en.

United Nations “Technical Notes on Online Dispute Resolution” (United Nations, New York, 2017). https://uncitral.un.org/sites/uncitral.un.org/files/media-documents/uncitral/en/v1700382_english_technical_notes_on_odr.pdf.


[1] Shibboleths are concepts by which we gain admittance to a group. Arthur Schopenhauer’s critique of dignity as a shibboleth, and its possible connection with the digital epoch, might be thought of as follows (see Schopenhauer, On the Basis of Morality, 100). Dignity is an ambiguous idea that carries a lot of force without clear content; it is a concept that sharply separates its advocates and its detractors, and that is also the threshold concept for admittance to a group (in this case, the European Union). Shibboleth is also the name of a single sign-on solution for networks that allows people to use one identity to log into different accounts across organisations or applications. Shibboleth Consortium, “About.”

[2] Riley, “What is Orientation in Dignitarian Thinking? Self, Other, Time and Space,” 41.

[3] Zhai, “On the Influence of Online Conversations: A Human Dignity Perspective.”

[4] Certainly, from the point of view of virtue ethics or virtue theory, there is nothing incomplete or deficient in failing to convert our normative commitments into the language of rights. That said, even virtue ethics is a theory of duties, and directed duties – that is, perfect duties owed to specific members of our community or society – would be a feature of any recognisable normative ethics.

[5] Gilabert, Human Dignity and Human Rights. See also Riley, Human Dignity and Law.

[6] A position defended by Habermas in The Future of Human Nature. This contention would be contested by, inter alia, Nick Bostrom, for whom self-enhancement can only increase our status. Such enhancement need not imply comparative denigration of others. See Bostrom, “Dignity and Enhancement.”

[7] Various typologies or generations of rights are possible; human rights owe their division into generations to Vasak “The European Convention of Human Rights Beyond the Frontiers of Europe.” See also Shany, “Digital Rights and the Outer Limits.”

[8] Parliamentary Assembly, “The Right to Internet Access,” Resolution 1987 (2014). The Assembly recommended that member states ensure everyone has sufficient internet access to exercise their rights under the ECHR without being subject to undue surveillance or discrimination. States should ensure that everyone receives a minimum quality of internet service, considering affordability, interoperability, integrity and the latest technological developments.

[9] See Galetta, “Evolving AI-based Automation,” 617, for detailed discussion of automated decision-making in good administrative practice. On legitimacy more generally, see Peter, “Political Legitimacy.”

[10] Galetta and Hofmann discuss the importance of reasoned decisions, the right to have hearings, and access to file(s) as part of automated decision-making processes. Galetta, “Evolving AI-based Automation.” Similarly, we see a line of European and international regulations on online dispute resolution (ODR) and digital services that require minimum fundamental procedural rights to be offered by the platforms. See European Parliament and Council, Regulation (EU) 2022/2065 of 19 October 2022 on a single market for digital services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJL 277, 1.

[11] See Shany, “Digital Rights and the Outer Limits.”

[12] Coders, for instance, are private actors who structure decisions and, in creating AI, what counts as knowledge, “via mechanisms that are technically and socially opaque and which are not straightforwardly susceptible to public contest, redress, and (judicial) review.” Diver, Digiprudence, 21.

[13] Bomprezzi, Implications of Blockchain-Based Smart Contracts, 61–62; Finck, “Blockchains: Regulating the Unknown,” 665.

[14] This phenomenon is alluded to in the following sections within the scope of prohibited artificial intelligence practices in the Union.

[15] European Commission, “2030 Digital Compass.”

[16] European Commission, “2030 Digital Compass,” 1, 2.

[17] European Commission, “2030 Digital Compass,” 2.

[18] European Commission, “European Declaration on Digital Rights and Principles for the Digital Decade (EDRPD),” COM (2022) 28 final, Preamble para 1.

[19] Italy Chamber of Deputies, Committee on Internet Rights and Duties, “Declaration of Internet Rights.” Similarly, the Spanish Government prepared a document establishing digital rights, which mentions dignity (dignidad) in several articles.

[20] European Commission, “EDRPD,” Preamble para 6.

[21] European Parliament and Council, Regulation (EU) 2016/679 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJL 119.

[22] European Parliament and Council, “Digital Services Act”.

[23] European Parliament and Council, Regulation (EU) No 910/2014 of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC [2014] OJL 253, 72.

[24] European Parliament and Council, Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) [2024] OJL 2024/1689. See also EDPB-EDPS Joint Opinion 5/2021.

[25] European Parliament and Council, “Digital Services Act,” Preamble paras 52, 81.

[26] European Parliament and Council, “Digital Services Act,” Article 34(1)(b)–(d).

[27] European Parliament and Council, “AI Act,” Preamble para 27.

[28] European Parliament and Council, “AI Act,” Preamble para 28.

[29] European Parliament and Council, “AI Act,” Preamble para 48.

[30] Dupré, The Age of Dignity, 21ff.

[31] European Data Protection Supervisor, “Towards a New Digital Ethics,” 12.

[32] European Data Protection Supervisor, “Towards a New Digital Ethics,” 12.

[33] Rouvroy, “The Right to Informational Self-Determination,” 45–76, especially 71ff, where there is discussion of the danger of data protection becoming untethered from values such as dignity and privacy, and opening the door to (predictable, dignitarian) problems: narrowly technical readings of data protection principles and the commodification of data.

[34] Rouvroy, “The Right to Informational Self-Determination,” 12–13.

[35] European Parliament and Council, “AI Act,” Article 5(1)(a).

[36] Vardanyan, “Digital Integrity,” 159.

[37] Vardanyan, “Digital Integrity,” 159. See also Couldry, “The Costs of Connection.”

[38] European Commission, “EDRPD,” Preamble para 6.

[39] Kantola, “The European Union: Initiator.”

[40] See more generally Psychogiopoulou, “The European Court of Human Rights”; Çınar, “The Current Case Law.”

[41] The dignitarian jurisprudence of the European Court predates the digital era; the leading case is Pretty v United Kingdom ECtHR Case no. 2346/02, 29 April 2002, where dignity is taken to be the foundation – the main interpretative and justificatory principle – of the Convention. Other decisions and jurisprudence are discussed below. See also MM v United Kingdom ECtHR Case no. 24029/07, 13 November 2012; Tapia Gasca and D v Spain ECtHR Case no. 20272/06, 22 December 2009.

[42] Van der Sloot, “Decisional Privacy 2.0,” 190. Arvelo Aponte v the Netherlands ECtHR Case no. 28770/05, 3 November 2011; S and Marper v United Kingdom ECtHR Case nos. 30562/04 and 30566/04, 4 December 2008.

[43] European Parliament and Council, “AI Act,” Article 5(1)(a), (b), (c), (g), (h).

[44] European Parliament and Council, “AI Act,” Article 5(1)(d).

[45] Taylor, “Can AI Governance Be Progressive?” 9. Emerging technologies raise concerns about the power imbalance between the private and public spheres, especially because “companies took over activities in the public sphere which had previously been governed by public authorities.”

[46] Benjamin, Race After Technology. See also Pele, “On Facial Recognition, Regulation, and ‘Data Necropolitics’.”

[47] Reglitz, “The Human Right to Free Internet Access,” 331. Shandler, “A Reality of Vulnerability and Dependence,” 77.

[48] Carbo, “Information Rights,” 171.

[49] European Data Protection Supervisor (EDPS), “Communication to European Asylum Support Office: Formal Consultation on EASO’s Social Media Monitoring Reports (Case 2018-1083).”

[50] Diver, Digiprudence, 18.

[51] Dorfman, Reclaiming the Public.

[52] Chamon, “Setting the Scene.”

[53] European Parliament, “Digitalisation and Administrative Law: European Added Value Assessment.”

[54] In essence, certain goods or practices (e.g. welfare, punishment) cannot exist unless administered by the state. Without the state, these would become something qualitatively different (respectively, charity and revenge). See Riley, “Private Security: Twin Indignities.”

[55] Sandel, What Money Can’t Buy: The Moral Limits of Markets.

[56] Moore, “On Reasonableness,” 167–178. Whitman, “The Two Western Cultures of Privacy.”

[57] See Hannon, “Public Discourse and Its Problems.”

[58] See further, Hildebrandt, “Profiling and the Identity of the European Citizen”; Gutwirth, “Beyond Identity?”; Stevens, “Sphere Transgressions.”

[59] European Data Protection Supervisor (EDPS), “Communication to European Asylum Support Office: Formal Consultation on EASO’s Social Media Monitoring Reports.”

[60] Vardanyan, “Digital Integrity,” 171.

[61] Independent Office for Police Conduct, “Inappropriate Photographs Taken at Crime Scene – Metropolitan Police Service, June 2020.” Von Denlinger, “Protecting Personal Dignity”; Yarmel, “The Ethics of Customizable AI-generated Pornography.”

[62] Vardanyan, “Digital Integrity,” 170.

[63] See Lupton, “Digital Bodies” on the especial concerns surrounding children’s bodies and digitisation. Also see Brattberg, Europe and AI; Ess, “Choose Now!”

[64] For a critical counter-argument, see Macklin, “Dignity is a Useless Concept,” 1419. Also Riley, “What is Orientation in Dignitarian Thinking?” on tensions between the autonomy and embodiment elements of dignity discourse.

[65] Diver, Digiprudence, 27.

[66] Diver, Digiprudence, 32, 196.

[67] One example is a forward-looking project that aims to create human dignity-aware AI algorithms. Although such algorithms are likely to carry biases, since contentious interpretations of human dignity must be translated into code, it is encouraging that dignity is being considered as part of digitalisation. Al-Rodhan, “Artificial Intelligence: Implications for Human Dignity and Governance”; Burrell and Fourcade, “The Society of Algorithms.”

[68] Again, this judgment might be culturally distinctive to Europe. See Whitman, “The Two Western Cultures of Privacy.”

[69] See the dissentients in Obergefell v. Hodges 576 U.S. 644 (2015) – a US Supreme Court case on same-sex marriage. Here the validity of using ‘dignity’ to expand the scope of equality law is questioned.

[70] A different but related problem lies in ‘click-wrap’ or ‘shrink-wrap’ contracts that bypass reasonable demands on the contracting party to read and understand their terms and conditions. See more generally Teo, “Human Dignity and AI.”

[71] Duff, “Who Must Presume Whom to Be Innocent of What?,” 170.

[72] Constantaras, “Inside the Suspicion Machine.”

[73] A position similarly defended by Düwell, “Human Dignity,” 177.


URL: http://www.austlii.edu.au/au/journals/LawTechHum/2024/24.html