
Law, Technology and Humans



Kristofik, Andrej --- "Indeterminacy of Legal Language as a Guide Towards Ideally Algorithmisable Areas of Law" [2024] LawTechHum 8; (2024) 6(2) Law, Technology and Humans 1


Indeterminacy of Legal Language as a Guide Towards Ideally Algorithmisable Areas of Law

Andrej Krištofík

Institute of Law and Technology, Masaryk University, Czech Republic

Abstract

Keywords: Legal theory; algorithmic law; legal automation; artificial intelligence; indeterminate language; evolutive interpretation.

1. Introduction

‘Nothing.’ That was the answer of McCarthy, one of the fathers of the technology of artificial intelligence (AI), when he was asked in an interview, ‘What do judges know that we cannot tell a computer?’[1] More than 50 years have now passed since this interview took place, and we can observe various technologies being deployed within the legal sphere and various public institutions, with varying degrees of not only autonomy but also success. And while such deployment is far from perfect, the question ‘What do judges know that we cannot tell a computer?’ remains relevant. While we retain that question in this article, we shift the emphasis to the word ‘tell’: ‘What do judges know that we cannot tell a computer?’

This is an issue for various reasons. First, there is the conflict of choosing the basis for such information, a sort of Hart–Dworkin debate 2.0, trying to establish whether the rule ‘told’ to the computer ought to be based on the written laws and norms or on their previous interpretation by the relevant bodies applying them.[2] Second, should one manage to solve this riddle, other issues of legal philosophy appear. The issue central to this article is that of legal language, most notably its indeterminacy, which clashes with the rather precise language of computational algorithms. Our language is riddled with features innate to human[3] language and communication, such as metaphors, synonyms or language that is simply indeterminate without the proper context being established between the communicators.[4] The interpretation and the underlying communication necessarily depend on the interpreting agent being able to establish the proper context and grasp the meaning of the indeterminate language used.

Use of such ‘problematic’ language has been heavily debated in both legal theory[5] and practice.[6] Similar discussion may be observed in the second field relevant to this article: that of computer science and, more precisely, fields of AI.[7] Combining these endeavours, which both require a higher degree of linguistic determinacy, may then appear as an exponentially more complicated problem.

In this article, we investigate the relationship between the indeterminacy of legal language and the possibilities of automating legal decision-making carried out by public institutions. In order to do that, we first examine the issue of indeterminacy of legal language in general, as well as its role and necessity, to determine whether such language can be avoided or its effects mended, if desirable. Second, we establish the impact of such indeterminacy on the processes of legal decision-making automation and provide several existing examples. Lastly, should it prove to be problematic, we try to establish whether the language presents an absolute obstacle to automation, or whether there are some areas of legal decision-making in which this issue is not present.

Analysing all these points ought to lead us to shedding some light on the thesis of this article:

The use of indeterminate language in law can be seen as a limit to algorithmisation, pointing us towards such areas of law and decision-making where there is no use for it, and therefore showing us areas of law that are a good fit for the process of algorithmisation.

This, in turn, should be determined by closely examining the role and necessity of indeterminate language in law. The approach of examining the algorithmisable areas via the language used is beneficial due to its future-proofness: these conclusions should remain relevant even with further advances in the field of machine learning that might allow us to overcome some of the technical obstacles connected to vague language. The conclusions might at times seem obvious given the current state of the art in machine learning; however, given the unprecedented pace of development we have been experiencing, analysis that relies purely on technological limitations will not remain relevant for long. Therefore, this article takes our existing understanding of the issue of indeterminacy in legal language and applies it to this new issue. Its ultimate goal is to show how we should approach the issue of algorithmisation of legal decision-making from the perspective of legal scholarship on legal language, not to contribute to the development of this scholarship at large.

2. Legal Language and Its Indeterminacy

Regardless of one’s opinions on the law and its nature,[8] one cannot escape the fact that law lives within language and language forms its foundation.[9] As such, by its use, (legal) language determines meaning (in law)[10] – its construction and communication,[11] which are essential for its applicability.[12] The essential role of language brings with it the problem of language for and in law.

Given the extensive nature of this issue,[13] it is necessary to establish definitions for key terms. This article thus provides merely an introductory overview of essential concepts for the following analysis, rather than aspiring to be a comprehensive work on the subject. Poscher observes that legal theory has mostly overlooked the differences between various ‘linguistic indeterminacies’, categorising them jointly as ‘vague’ language.[14] While definitions may vary, all forms of ‘problematic language’ share a common semantic indeterminacy within their core.[15] The choice of definition not only aids in subsequent analysis but also offers insights into addressing the challenges posed by such language.[16]

In linguistically oriented literature, the question of vagueness is usually examined through the ‘sorites argument’ – or the paradox of the heap.[17] This ancient argument is based on the premise that a heap of sand consists of one million grains. Removing a single grain does not make it a non-heap. Repeating this process leads to the observation that even a single grain must still constitute a heap.[18] This conclusion arises from the vagueness of the predicates, allowing for one-dimensional variation within the predicate’s scope. It becomes impossible to pinpoint the exact threshold at which the predicate no longer applies.[19] While the sorites argument is conceptually simple, numerous theories have sought to address it. These theories range from mere descriptions to assertions of the inherent vagueness of all existing objects.[20]
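The mechanics of the sorites argument can be made concrete in code. The following sketch is ours, not the article’s, and the threshold of 10,000 grains is an arbitrary assumption; it illustrates that any computable version of ‘heap’ must draw the sharp boundary that the vague natural-language predicate itself refuses to supply:

```python
# Illustrative sketch of the sorites paradox. Any algorithmic predicate
# for 'heap' must pick an exact cut-off (here, an arbitrary 10,000),
# while our intuition says removing one grain can never matter.

def is_heap(grains: int, threshold: int = 10_000) -> bool:
    """A forced, arbitrary precisification of the vague predicate 'heap'."""
    return grains >= threshold

grains = 1_000_000
while is_heap(grains):
    grains -= 1  # one grain at a time, as in the ancient argument

# The predicate inevitably flips at an exact grain count -- the sharp
# threshold that, per the paradox, cannot be located in the vague term.
print(grains)  # prints 9999: the first count the rule calls a non-heap
```

Whichever threshold is chosen, the loop halts at one exact grain count; the theories of vagueness mentioned above differ precisely in how they explain, or explain away, this forced boundary.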

Many theoretical developments have advanced our understanding of the complexities inherent in human language and communication. Importantly, these theories have highlighted that vagueness is just one aspect of ‘problematic’ language[21] and, significantly for law, they have differentiated between vagueness and the open texture of language.[22] This applies to language in general, indicating that, while aspiring to precision,[23] even legal language shares some of these challenges.

2.1 Defining Problematic Language

While some authors, ironically, acknowledge the semantic difficulty of defining ‘problematic’ language,[24] a further exploration of key terms is necessary. The use of the term ‘problematic language’ intentionally highlights the notion that a proper definition of vagueness of language reveals that vagueness alone does not encompass the full range of issues tied to semantic indeterminacy.[25]

Defining this over-arching term requires us to examine its constituent elements, which vary in the existing literature. Some authors use the term ‘indeterminacy’ as a catch-all for any form of problematic language rooted in the semantic core.[26] In the literature, this term is split into either ambiguity or (several forms of) vagueness.[27] Gürler suggests five possible interpretations, primarily differing in the extent to which they investigate the essential semantic indeterminacy of a given term.[28] Gürler’s analysis builds on Solum’s critical perspective on indeterminacy, which Solum describes as follows: ‘The law is indeterminate with respect to a given case if and only if the set of results in the case that can be squared with the legal materials is identical with the set of all imaginable results.’[29] Perhaps more fittingly, further on in the text he refers to indeterminate rules as rules that allow for a merely ‘rule-guided’ decision, as opposed to a strictly ‘rule-bound’ decision or a completely ‘unbound’ one.[30] Rule-guided decisions differ from the other two types in the level of consideration that they allow, or essentially require, from the body applying them.

The fundamental understanding of the use of indeterminate or problematic language in law leads to a unified definition for this article, as provided by Jeremy Waldron in his work on philosophical issues of vagueness in law and language.[31] He also chooses the over-arching term ‘indeterminacy’ and, similarly to the previous authors, identifies its constitutive elements as vagueness, ambiguity and contestability. Waldron not only offers a list of these constitutive elements that aligns with prior works on this subject but also provides fitting definitions. As a result, this article will address problematic language within Waldron’s definitions.

The first element identified by Waldron is ambiguity, which refers to language where an expression X can be interpreted in two distinct ways. This arises when there are two predicates, P and Q, that resemble X but refer to different – although possibly overlapping – sets of objects.[32] The meaning of each predicate results in a different way of categorising objects as falling within or outside its scope. In simpler terms, ambiguity in this context refers to polysemy, where expressions can reasonably have multiple distinct meanings.[33] In a legal context, consider a regulation of banks. Without additional context, one could reasonably interpret this regulation as pertaining both to financial institutions and land adjacent to rivers.

The second form of problematic language is vagueness. A predicate P is considered vague if there are instances x1, x2 and so forth within the domain of the term’s application for which users are typically undecided about the truth value of statements like ‘x1 is P’ or ‘x2 is P’, due to the inherent nature of the predicate P rather than a lack of knowledge.[34] For instance, when using the term ‘blue-coloured’, we may reasonably argue about whether certain shades should be classified as ‘blue’; this argument does not stem from a lack of knowledge of the term ‘blue-coloured’. Vague terms are those whose boundaries are not clearly defined, and for any such predicate, one can envision a borderline case whose categorisation as part of the vague term is inherently unclear.[35]

The last type, identified by Waldron, is contestability. While less common in the literature, it can be seen as an indeterminacy, the source of which is the interpreter’s personal values. A predicate P is then contestable if (1) it is not implausible to regard both ‘something is P if it is A’ and ‘something is P if it is B’ as alternative interpretations of the meaning of P; (2) there is an evaluative or normative element e* in the meaning of P; and (3) due to (1) and (2), there is a history of using P to embody rival standards, such as ‘A is e*’ and ‘B is e*’.[36] For example, the term ‘democracy’ could be understood, based on one’s values, to mean either a mere representative government or a strictly participatory plebiscite approach to public affairs.[37]

Grouping these individual terms, as suggested by Waldron, leads to another frequently discussed aspect of indeterminacy in (legal) language: the open texture.[38] Originally described by Waismann[39] as a general linguistic attribute, the concept of open texture has made its way into legal philosophy.[40] Waismann’s work drew from Wittgenstein’s ‘Philosophical Investigations’, using ‘open texture’ to describe words that are not always strictly bound by the rules of language. The distinction between open texture and vagueness lies in the potential for vagueness within open-textured terms.[41] Hart’s application of this concept to law has provided a more solid foundation for its use.[42]

Hart takes this originally strictly linguistic concept and applies it to the open texture of legal rules by observing that any law, or more precisely a rule or concept inscribed within that law, has a core plain-language meaning, regardless of whether it has been set as such by the law-maker or derived from previous cases. When applying such a rule to a specific situation, there is a need to classify the situation within the scope of the rule’s terms. The indeterminacy of the rules will, after all, not arise at the moment of their creation, but rather always at the hands of the applying body, which is often required to decide unequivocally whether some vague predicate does or does not apply to a well-defined, yet borderline, case.[43]

In the quintessential Hartian example, if there is a rule against ‘vehicles’ in a park, we must determine whether a given object is a vehicle and should be subject to the rule. Since the rule has a well-defined core, such classification will in most situations not present a problem – or, as Hart observes:

The plain cases, where the general terms seem to need no interpretation and where the recognition of instances seems unproblematic or ‘automatic’, are only the familiar ones, ... where there is general agreement in judgments as to the applicability of the classifying terms.[44]

However, the issue arises at the ‘borders’ of the term, when it is unclear whether the rule applies – for example, when classifying a stroller in the context of ‘no vehicles in the park’. This ‘fringe of vagueness’ creates the open texture[45] of the law – or, as Bix aptly notes, legal language simply lacks mathematical precision.[46]
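Hart’s distinction between the settled core and the penumbra can be sketched as a toy classifier. This is our illustration, not anything from the article, and the item lists are invented; the point is that the plain cases classify ‘automatically’, while the penumbral items are exactly where the rule’s text returns no answer:

```python
# Hypothetical sketch of the 'no vehicles in the park' rule. The core
# cases classify 'automatically'; the penumbral ones are exactly where
# the rule's text alone cannot decide.

from typing import Optional

CORE_VEHICLES = {"car", "truck", "motorcycle"}      # plain cases: rule applies
CORE_NON_VEHICLES = {"wheelchair", "shoe", "dog"}   # plain cases: rule does not

def banned_from_park(thing: str) -> Optional[bool]:
    if thing in CORE_VEHICLES:
        return True
    if thing in CORE_NON_VEHICLES:
        return False
    return None  # penumbra: the 'fringe of vagueness'

print(banned_from_park("car"))       # prints True -- core of the term
print(banned_from_park("stroller"))  # prints None -- borderline case
```

The `None` branch is where, for a human applier, interpretation begins; an algorithm has nothing further to consult unless the boundary is precisified for it in advance.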

Similarly to Waismann, who claims it presents an advantage that allows for accommodation of new discoveries and natural development of the language,[47] Hart also observes that this attribute of legal rules allows them to be reinterpreted and applied in such cases that were not, or could not have been, predicted nor expected (‘ignorance of fact’ and ‘ignorance of aim’) by the law-maker, thus making the rules more flexible.[48]

This brings us to the next point of inquiry: the function of this tool, which is not an error but a deliberate choice by the law-maker.[49] Understanding this purpose is essential in determining whether, if problematic for algorithmic processes, such indeterminate language can be eliminated.

2.2 Function of Indeterminate Language

Just as different authors propose diverse theories and definitions of indeterminate language, there are various theories concerning its function. This function will be explored through the authors already presented, or those who build upon them.

The preceding sections have already offered a brief overview of the function of the fringe vagueness of terms: it allows for future development and gives the law the flexibility to be applied to cases that the law-maker did not or could not consider, whether by virtue of ignorance or by virtue of the impossibility of knowing future developments that ought to be reflected by the law.

A similar perspective has been identified in specialised types of communication by Umberto Eco.[50] He notes that, in contrast to specialised communication, vagueness or indeterminacy does not typically occur very often in everyday communication, since there exists a shared ‘code’ between communicators that governs the context and interpretation of the communication.[51] He illustrates this point using the example of aesthetic communication, which intentionally maintains ambiguity and lacks an established code (between the communicators). This intentional vagueness is meant to encourage diverse interpretations by individuals experiencing a given work of art in various temporal and societal settings.[52] In this context, the code is determined ad hoc by the broader societal setting, which is inevitably influenced by the interpreter’s temporal placement.[53]

Applying this observation to legal communication, one can readily discern the reflection of this intentional indeterminacy, and of the open texture, in the living instrument doctrine.[54] This doctrine is often invoked in debates on textualism and the possibilities of departure from the original meaning by the interpreting body.[55] In constitutional law, such discussions are common in the Anglo-American tradition,[56] while in continental legal systems the notion is closely linked to the European Convention on Human Rights.[57] As the principal applying body, the European Court of Human Rights advocates the use of an evolutionary or dynamic interpretation, encapsulated within this doctrine.[58]

Developed throughout its case law,[59] this doctrine allows the Convention to be ‘kept abreast of present day condition’[60] and to be ‘not stereotyped as at the date of the treaty but ... be understood in the light of the progress of events and changes in habits of life’.[61] Similar to Waismann’s observation regarding the function of open texture in common language, which enables language to respond to societal developments, the concept of the living instrument enables the court to address ‘ignorance of fact’ and ‘ignorance of aim’ inherent in the drafting of the Convention. It allows the reinterpretation of the rules to align more closely with the current societal attitudes and opinions.[62]

For instance, it enables the court to interpret rights related to the protection of family life, particularly those pertaining to the rights of same-sex couples, without requiring changes to the Convention’s wording – by invoking the living instrument doctrine, the court can interpret the concept of marriage to include same-sex couples.[63]

The concept of the living instrument is perhaps not a surprising inclusion, as many issues related to law and language revolve around interpretation.[64] However, when it comes to the function of indeterminate language, it needs to be investigated further. On such further investigation, the living instrument doctrine can be seen not as the function of indeterminate language itself but merely as its result. The living instrument always relies on the interpreting body to perform the necessary interpretation that fulfils the goal of evolutionary interpretation.

By focusing on the role of the interpreting body, we can discern a different variation of the function, as proposed by Waldron. In his essay on philosophical issues of vagueness, Waldron contends that the use of indeterminate language is a deliberate choice by the law-maker, serving a specific function. He argues that even an ambiguous term has a precise meaning: that of prompting the body applying the rule to make a value judgement, within the scope of the term, based on the evaluative criteria inherent in it.[65]

In Waldron’s argument, the necessary value judgement is expected to be performed by the body applying the law.[66] This perspective aligns with similar observations made by the previously mentioned authors. Lanius, in his analysis of Hart’s work, highlights that while the function described by Hart pertains to the ability to shift the interpretation of a given rule, this function is in fact one of (purposeful) power shifting, away from the legislative body and towards the body applying the law.[67] Essentially, where Hart sees a certain ‘future-proofing’ and Waldron sees an evaluative delegation, Lanius takes those ‘effects’ and traces them further to a power delegation by the legislator. This delegation allows the interpreter to define the boundaries of the term.

A similar view is expressed by Soames,[68] who posits that the use of language by the law-maker serves a deliberate purpose, evident in its continued use: to shift power and responsibility toward the applying body.[69] Even within his interpretation of indeterminacy as a tool of power shifting, the perceived effect is similar to the previously noted observations. The value of vague language lies in its function of creating a space for the applying body for the incremental, case-by-case precisification.[70] Such observation reinforces the value of indeterminacy, as reflected in the living instrument doctrine.

Soames further advances an argument in favour of one of the theories of vagueness, that of the partial definition/context-sensitive theory.[71] This position stems from the observation that vague language serves as a tool of case-by-case specification,[72] as (only) applying bodies possess familiarity with the specific situation.[73]

In these various yet interconnected interpretations, we can discern the function of indeterminate language taking shape. First, all the authors agree that it is a deliberate and purposeful tool for law-makers.[74] By incorporating it, law-makers delegate a portion of their power to the applying body and instruct them to make a value judgement within the chosen term. This enables the body to overcome the limitations of the law-maker, who cannot have knowledge of individual cases or anticipate future societal developments. Consequently, the body applying the law is called upon to make an evaluative judgement based on the specific circumstances and employ an evolutionary interpretation of the rule to better align with current society.

2.3 The Necessity of Indeterminate Language

This leads to consideration of the second question, relating to the need for such language. The question can be addressed on two levels. The first concerns the general nature of language itself, as the necessary medium for the law, since a certain degree of indeterminacy appears to be an innate quality of human language.[75] As a result, there have been several efforts to construct artificial languages based upon the linguistic rules of natural languages, which even allows them to remain speakable. The artificiality of these languages is reflected mainly in their strictly logical structure,[76] which eliminates ambiguity in the process of information exchange.[77]

Should we accept the thesis that indeterminacy is an innate quality of natural language, we could eliminate it by using a strictly logical artificial language. This, however, raises the question of its feasibility with regard to the basic rule-of-law requirement of the accessibility of law, which requires the law to be written intelligibly.[78] It is unlikely that a language with no utility outside of law could be learned by everyone just to understand the laws. While using a strictly logical language might eliminate linguistic indeterminacy, it could render the law ineffective in fulfilling its essential function.

One such artificial logical language, Loglan,[79] has been proposed as an ideal medium for interlingual machine translation.[80] It has been suggested as an intermediary that provides the in-between step of translation, precisely due to its ‘machine-friendly’ nature.[81] This leads to the possibility of using such a language for the specific purpose of translating laws for the machines[82] that will be applying such rules. Translating the rule into the algorithm is an essential step,[83] regardless of whether an artificial language is the intermediary. This, however, does not answer how we should treat the indeterminate language of the law, nor does it appear that we would be pragmatically capable of replacing natural language.

It has become clear from the analysis of the function of indeterminate language that such language represents not only a necessary but also a useful tool. The indeterminacy of legal language allows law-makers to overcome certain limitations when drafting laws. Erasing such language would require us to provide an alternative solution to the problems that are addressed by its use. Therefore, it is not viable to eliminate indeterminate language. Instead, we should explore the implications of this conclusion for the possibilities of automated decision-making.[84] However, we first need to look more closely at the concept of automated decision-making.

3. Algorithmic Law and Automated Decision-making

First, we should establish what kind of decision-making systems we are addressing. Even in the current era of consumer-friendly and readily available advanced Large Language Models (LLMs) and other AI-based systems, there is a tendency to not only think of AI use in public institutions as a distant idea,[85] but also to overlook the existing examples of various degrees of automation[86] in public institutions. The role of these systems, and some technological background and examples, are provided in the following section to better situate the analysis.

3.1 Automated Decision-making Systems

Automated decision-making systems are machine-based systems that replace human decision-making. They have a range of applications within public institutions, including public health, tax administration and the adjudication of social welfare benefits.[87] A specific example is SyRI (System Risk Indication),[88] a welfare fraud screening software. The system utilises several state-maintained databases that are combined and subsequently analysed in order to flag ‘unlikely’ candidates for welfare benefits.[89] A more infamous example is the use of automated decision-making in the realm of criminal justice, such as HART[90] or COMPAS.[91] Both are essentially assistive tools, as their outcomes should serve as mere recommendations.[92] This raises the concern of ‘mere rubber-stamping’,[93] where the judge may unquestioningly accept[94] the machine-generated decision, resulting in a higher level of automation than intended.[95]

There are some key factors that automated decision-making systems need to have in common to be relevant for us. The systems must carry out a decision that has its basis in legal rules. Such a requirement is significant not only because we are addressing legal automated decision-making, but also because of the issue of ‘translating’ the (potentially ambiguous) rule for the computer. Additionally, the decisions must be binding, so that no human reinterpretation of the ambiguity remains available.

3.2 Indeterminate Language and Algorithmic Law

There are two points for consideration regarding the question of how to treat the indeterminate language in law (for the purposes of its algorithmisation). One of them presents us with an overlap of legal theory and technical approaches[96] to AI[97] and, further, is familiar to us even from human decision-makers. We should then look at how we are supposed to deal with this language encountered in law. Waldron delves into how human decision-makers determine the meaning of terms once this discretion has been afforded to them.[98] He suggests that humans utilise paradigm cases, which serve as reference points for indeterminate terms.[99] This idea has also been substantiated by other authors. For example, Christie notes that ‘many difficult types of civil cases ... are being increasingly tried by the court without a jury, precisely because a judge can and, of course, does look at the other cases in point’.[100]

Since we have an idea of how human decision-makers act, the question is where this leaves us in relation to automation. This theory raises parallels with machine learning, a subset of AI that deals with training algorithms based on patterns and examples.[101] What is more important, however, is the concept of ‘learning’[102] itself, and the underlying logic of this technical process.

Through this process, the machine learning algorithm creates and refines the algorithmic actions it should perform by recognising patterns in its training dataset.[103] Training data usually comprise past events, representing the process that the algorithm should replicate, with the machine grasping the concept from a certain example.[104] Machine learning constructs the algorithm by analysing past data that inform it of the pattern of the intended actions.[105] Essentially, then, it looks at what can be seen as paradigm cases.[106]
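The parallel between paradigm cases and machine learning can be illustrated with a minimal, assumption-laden sketch of nearest-neighbour decision-making, in which a new case is decided purely by its similarity to previously decided cases. The features, cases and outcomes below are entirely invented for illustration:

```python
# Minimal sketch of the 'paradigm case' parallel: a nearest-neighbour
# rule that decides a new case solely by its resemblance to past
# decisions. All features and outcomes are hypothetical.

# Past decisions: (features of the case, outcome reached by the court)
past_cases = [
    ({"motorised": 1, "wheels": 4, "carries_people": 1}, "banned"),     # car
    ({"motorised": 0, "wheels": 4, "carries_people": 1}, "permitted"),  # stroller
    ({"motorised": 1, "wheels": 2, "carries_people": 1}, "banned"),     # motorcycle
]

def decide(new_case: dict) -> str:
    """Replicate the pattern of past decisions: pick the closest paradigm case."""
    def distance(case_features: dict) -> int:
        return sum(abs(case_features[k] - new_case[k]) for k in new_case)
    _, outcome = min(past_cases, key=lambda c: distance(c[0]))
    return outcome

# A new item (say, an e-scooter) is mapped onto the closest past case;
# nothing in the procedure can reinterpret the underlying term itself.
print(decide({"motorised": 1, "wheels": 2, "carries_people": 1}))  # prints banned
```

The new case is simply assimilated to the nearest already-solved one; the procedure has no mechanism for reinterpreting the underlying vague term in light of changed circumstances, which is the staleness problem taken up later in this section.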

This might seem like a lucky break – a rather straightforward solution to the challenge posed. However, before concluding this issue as being solved by a mere observation of similarity in the human and machine process, we need to have a look at the second point worth considering: the utility of indeterminate language in the law, as discussed in earlier sections. Before committing to the approach of paradigm cases and their parallels in machine learning, we need to examine whether such an approach fulfils the intended purpose of deploying an indeterminate language within the law.

We have outlined a set of reasons for the utilisation of indeterminate language in legal contexts. One such motivation stems from the Ecoian observation on the issue of vagueness and code/time-dependent interpretation,[107] which has led us to the concept of law as a living instrument. By employing indeterminate language in law, law-makers can surmount one of the two hurdles identified by Hart – the inability to know the future. The language used is then never precisely defined because it was never meant to be. The meaning of a vague term is supposed to be fluid and reinterpreted for, and according to, the times when the interpretation occurs, thus incrementally and over time[108] changing the meaning of the law as necessary.

Such a description also calls to mind a challenge that is well known in the field of computer science: the issue of algorithmic staleness.[109] The issue has been summed up perhaps most fittingly by Ongenae, who has proclaimed that if law is a living thing, AI might just kill it.[110] This observation is substantiated by other authors – for example, Dervanović, who demonstrates this conclusion using the example of the ECHR.[111] This effect arises from the aforementioned technical aspect of the solution: the machine learning approach. While it is true that this approach leverages the suggested paradigm case approach to adjudicate on indeterminate laws, it also presupposes that the paradigm case has already been solved, that the case at hand closely mirrors, in all relevant factors, the paradigm case (or the cases that form the training data), and that no significant novelty has been introduced.[112] Since the decision-making is based solely on previous decisions and previous interpretation, evolutionary interpretation is in fact foreclosed.

Evolutionary interpretation requires the applying body to consider the individual[113] case and adjust the indeterminate statement that is to be applied. It is thus, as observed above, a tool with a given purpose, and as such we should consider how well we can fulfil the other purposes, namely the shift of responsibility. Various authors have scrutinised this purpose. However, one common thread is that these authors concur that indeterminate language is intentionally deployed. By creating a certain leeway for consideration within the indeterminate terms, the law-maker delegates some of its powers/responsibilities onto the applying body, thus overcoming the second Hartian hurdle of ignorance of fact. What this implies in practice for the entity applying the law has been identified best by Waldron, who observed that such language calls upon the one applying the law to carry out a value-based judgement within the scope allowed to them by the indeterminate term. This process of value-based judgement, executed by the relevant body, effectively overcomes Hart’s hurdle of ignorance.

This brings us to the final dimension of carrying out a value judgement, the performance of which was originally intended for a human decision-maker. While this is a technical question to some degree,[114] the significance of the human element in value-based judgement seems to be undisputed.[115] However, the technology to which we should pay attention is not necessarily that of machine learning, which is better left to those specialised in it, but rather the technology of the state – of law-making. The objective of employing the tool of indeterminate language is not for the legislator to leave their work incomplete but rather to deliberately prompt humans to exercise judgement in specific cases, not because there is no other option but because that is the precise goal.[116] In many cases, the law remains indeterminate precisely with this aim, not due to a lack of possibilities for its determinacy.[117] Thus, as long as the tool of indeterminacy is deployed mindfully by the law-maker, its presence also informs us of the need for human intervention.

What, then, is the conclusion for automated decision-making and algorithmic law? Does the absence of human value-based judgement render it impossible, or at least illegitimate? Not necessarily, as the observation needs to be taken a step further for us to be able to reach such a conclusion.

First, to address some objections: the primary issue within the framework of this article is that Waldron argues for the function of value-based judgements with regard to indeterminacy only in the context of contestability. A suitable response to this objection comes from Schiffer’s ‘realist’ perspective. He argues that the fact that this theory is explained in the section on contestability is mere structural happenstance, a reading further supported by the unclear border of Waldron’s distinction between contestability and vagueness. This is partly a result of the examples Waldron chose, but more significantly it follows from the realist observation that indeterminate or borderline cases can be resolved in only one way, regardless of the theories of and distinctions between the indeterminacies of legal language: by determining them according to what the decision-maker already holds to be correct within her own internal viewpoint.[118] Thus, any borderline case decision inherently requires some (human) value-based judgement.

Whether or not one agrees with this perspective, it does not tell us precisely about the fate of future automation of legal decision-making. For that we need to look back into Waldron’s essay that forms the framework of this article. Waldron identifies one last type of specialised indeterminate language used within law, one that steps out not only of Waldron’s call for value-based judgement but arguably also of Schiffer’s realist perspective: the indeterminacy of function. It is employed when a rule presents a continuum of values that can be expressed as a function of some variable. While Waldron suggests that constructing continua in place of vague or otherwise indeterminate language is a solution to the problem of indeterminacy, he also points out that this has already been happening in some laws – for instance, in tax law, where we do not merely set forth the indeterminate rule that rich people should pay more taxes but instead set forth an exact continuum by defining the rule as ‘tax payable = f(taxable income)’.[119] Even though this statement is vague in the sense that it does not provide us with an exact meaning, it is exact in its instructions regarding how to obtain and substitute that meaning (by defining the function f) without any need for value judgement. Consequently, this rule can be adjudicated by anyone (and anything) capable of performing the function.
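Waldron’s functional rule can be illustrated with a short computational sketch. The Python fragment below is purely illustrative: the bracket thresholds and rates are invented for the example, not drawn from any actual tax code. It shows how, once the function f is defined, ‘tax payable = f(taxable income)’ can be applied mechanically, with no value judgement involved.

```python
# Illustrative sketch of Waldron's "tax payable = f(taxable income)".
# The thresholds and rates below are hypothetical, invented for this example.
BRACKETS = [
    (10_000, 0.00),        # income up to 10,000: 0%
    (40_000, 0.20),        # income between 10,000 and 40,000: 20%
    (float("inf"), 0.40),  # income above 40,000: 40%
]

def tax_payable(taxable_income: float) -> float:
    """Apply the function f mechanically, bracket by bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable_income > lower:
            tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax
```

Anyone, or anything, capable of evaluating this function reaches the same result; the rule is indeterminate only in the trivial sense that the amount is not stated in advance.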

The last question of this article – whether the indeterminacy of legal language poses a problem in all possible instances of automated decision-making – is interconnected with the general question of whether it presents an absolute obstacle to the automation of legal decision-making, and the answer is twofold.

First, it is possible to conclude that indeterminate language represents an absolute obstacle to the automation of legal decision-making, owing to its primary function of prompting value judgement and helping the law-maker overcome the limitations of law-making. By attempting to create an algorithmic version, we would deprive the law of this possibility. However, it is important to remember that this tool should be used purposefully by the law-maker, and only in situations where it is necessary. Second, in cases where the use of this tool is not justified, it should either be replaced with definitions based on continua and functions, or more determinate language should be employed. In such areas of law, the issue of indeterminacy does not pose a challenge for the automation of legal decision-making.

4. Conclusion

In the preceding sections of this article, we delved into the complexities of indeterminate language within the legal domain and its significance in the context of automating legal decision-making. We first observed some foundational definitions within this rather broad issue, with Waldron’s conceptual framework being the anchor for our analysis. Yet the definitions themselves were not the primary focus. Rather, the crucial observation that emerged from this exploration related to the necessity and function of employing indeterminate language within the realm of law.

After all, should indeterminacy be problematic but unnecessary, an easy solution would be simply to avoid it and use more precise language. Throughout our exploration, however, it became evident that indeterminate language in the legal context is not merely an inconvenience that can be easily eradicated. The analysis of the relevant texts revealed that the use of indeterminate language is not a mistake or a display of negligence on the part of law-makers. On the contrary, it functions as a deliberate tool in the law-maker’s arsenal. This tool serves to create a deliberate ‘margin of appreciation’ for those applying the law, ensuring that it aligns with the specific details of each unique case. Simultaneously, it allows for a gradual evolution in the interpretation of this indeterminate language, enabling the law to adapt to the ever-evolving dynamics of society. The meaning of indeterminate language within law is thus not itself vague or indeterminate; rather, it has a precise function: prompting the body applying the law to make its own value-based determinative judgement about the extent of vague terms, relevant to the specificities of the case presented.

Attempting to eliminate indeterminate language from legal texts and depriving law-makers of this tool would shift our legal landscape closer to a state of hyper-legality.[120] This potential transformation poses a significant risk of deforming the legal system, a concern frequently raised in connection with the implementation of AI-based technologies in law.[121] In order to better determine the relationship between automated legal decision-making and the issue of indeterminate language, we first introduced some of the existing automated legal systems and then provided a brief overview of the technology of machine learning as the basis of many such applications. Part of our conclusion lies in these observations: the use of past training data will have a negative impact on the possibility of evolutionary interpretation of some laws. That part, however, rests on a technological analysis of the AI methods used in such systems. The overall conclusion can, and should, be reached in a technologically neutral way, stepping outside the need to analyse the degree to which a machine can or could mimic a human decision-maker.[122] The purpose of indeterminate language is to elicit a value judgement performed by a human, substituting for a judgement that the law-maker could not itself make. As long as this tool is used in a mindful and intentional way by the law-maker, it also informs us of the impossibility of automating such decision-making.

The conclusion of this article clarifies that the presence of indeterminate language in law does not render the creation of a legitimate automated decision-making system impossible. Rather, it prompts us to consider which specific areas of law are suitable for automation. The conclusion is built upon two key observations. First, by requiring us to make a value-based judgement when applying laws, the law – or the law-maker – informs us of the essential unfitness of the given area for the substitution of the human element, since that element plays an elementary role in applying the law in an intended and just manner. This forms a certain negative space for automation. The second observation brings us back to the issue of indeterminacy and to the observation of some authors that some of the indeterminate language used by law-makers is in fact accompanied by a well-defined function. Such language can and should be applied mechanically, informing us of a positively created space of possibility for automation. The goal of this article was thus not to contribute to the vast discussion on the issues of indeterminate language within the law, but rather to apply the relevant existing scholarship to a new and emerging problem, showing that we could create reasonable and just automated processes through such an application. Moreover, this article attempted to adhere strictly to legal theory in its conclusions, precisely in order to produce a technologically neutral application and to ensure that, even with further advances in AI, its application to legal decision-making remains proper and just. Staying mindful of these criteria, choosing the proper area of decision-making should in the end warrant ‘that the union of law and algorithms can be a successful foundation for fairness and justice’.[123]

Bibliography

Asgeirsson, Hrafn. “Can Legal Practice Adjudicate Between Theories of Vagueness?” In Vagueness and Law, edited by Geert Keil and Ralf Poscher, 95–126. Oxford: Oxford University Press, 2016.

Ashley, Kevin D. Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age. Cambridge: Cambridge University Press, 2017.

Balagopalan, Aparna. “Judging Facts, Judging Norms: Training Machine Learning Models to Judge Humans Requires a Modified Approach to Labeling Data.” Science Advances, 9, no 19 (2023): 1–14. https://doi.org/10.1126/sciadv.abq0701.

Barnes, Jeffrey. Modern Statutory Interpretation: Framework, Principles and Practice. Cambridge: Cambridge University Press, 2023.

Bekkum, Marvin and Frederik Borgesius. “Digital Welfare Fraud Detection and the Dutch SyRI Judgment.” European Journal of Social Security, 4 (2021): 323–340. https://doi.org/10.1177/13882627211031257.

Bělohradský, Václav. Čas Pléthokracie. Prague: 65.pole, 2021.

Binns, Reuben and Michael Veale. “Is That Your Final Decision? Multi-stage Profiling, Selective Effects, and Article 22 of the GDPR.” International Data Privacy Law, 4 (2021): 319–332. https://doi.org/10.1093/idpl/ipab020.

Bix, Brian. “H.L.A. Hart and the ‘Open Texture’ of Language.” Law and Philosophy, 10, no 1 (1991): 51–72. https://doi.org/10.1093/acprof:oso/9780198260509.003.0002.

Bix, Brian. Law, Language, and Legal Determinacy. Oxford: Oxford University Press, 1995.

Bjorge, Eirik. Domestic Application of the ECHR. Oxford: Oxford University Press, 2015.

Böhmer, Rene. “Helping Police Make Custody Decisions Using Artificial Intelligence.” https://www.cam.ac.uk/research/features/helping-police-make-custody-decisions-using-artificial-intelligence.

Brownsword, Roger. Law 3.0. London: Routledge, 2020.

Brozek, Bartosz. Defeasibility of Legal Reasoning. Kraków: Zakamycze, 2004.

Chadha-Sridhar, Ira. “The Value of Vagueness: A Feminist Analysis.” Canadian Journal of Law & Jurisprudence, 1 (2021): 59–84. https://doi.org/10.1017/cjlj.2020.22.

Choi, Albert. “Strategic Vagueness in Contract Design: The Case of Corporate Acquisitions.” Yale Law Journal, 119, no 5 (2010): 848–924.

Christie, George. “Vagueness and Legal Language.” Minnesota Law Review, 48 (1964): 885–911.

Cofone, Ignacio. “Algorithmic Discrimination is an Information Problem.” Hastings Law Journal, 70 (2019): 1389–1444.

Dabkowski, Maksymilian. “A Wittgensteinian Look on Vagueness.” Ivy League Undergraduate Research Journal. (2018). https://lingbuzz.net/lingbuzz/006362.

Davenport, Thomas. “Automated Decision Making Comes of Age.” MIT Sloan Management Review, July 15, 2005. https://sloanreview.mit.edu/article/automated-decision-making-comes-of-age.

Davis, Joshua. “Of Robolawyers and Robojudges.” Hastings Law Journal, 73, no 5 (2022): 1173–1202.

Derrida, Jacques. Deconstruction and the Possibility of Justice. London: Routledge, 1992.

Dervanović, Dena. “I, Inhuman Lawyer: Developing Artificial Intelligence in the Legal Profession.” In Robotics, AI and the Future of Law, edited by Marcelo Corrales, 209–234. Dordrecht: Springer, 2018.

Eco, Umberto. Theory of Semiotics. Bloomington, IN: Indiana University Press, 1976.

Eco, Umberto. Travels in Hyperreality. New York: Picador, 1987.

Edwards, Justin, Allison Perrone and Phillip Doyle. “Transparency in Language Generation: Levels of Automation.” Association for Computing Machinery. https://dl.acm.org/doi/10.1145/3405755.3406136.

Eliot, Lance. “Antitrust and Artificial Intelligence (AAI): Antitrust Vigilance Lifecycle and AI Legal Reasoning Autonomy.” arXiv. http://arxiv.org/abs/2012.13016.

Ellison, Mark and Uta Reinöhl. “Compositionality, Metaphor, and the Evolution of Language.” International Journal of Primatology, 45 (2022): 703–719. https://doi.org/10.1007/s10764-022-00315-w.

Endicott, Timothy. Vagueness in Law. Oxford: Oxford University Press, 2003.

Endicott, Timothy. “The Value of Vagueness.” In Vagueness in Normative Texts, edited by Vijay Bhatia, 27–48. New York: Peter Lang, 2005.

Equivant. Practitioner’s Guide to COMPAS Core. https://www.equivant.com/wp-content/uploads/Practitioners-Guide-to-COMPAS-Core-040419.pdf.

Flasiński, Mariusz and Janusz Jurek. “On the Learning of Vague Languages for Syntactic Pattern Recognition.” Pattern Analysis and Application, 26 (2023): 605–615. https://doi.org/10.1007/s10044-022-01120-0.

Goertzel, Ben. “Lojban++: An Interlingua for Communication Between Humans and AGIs.” International Conference on Artificial General Intelligence (2013): 21–30. https://doi.org/10.1007/978-3-642-39521-5_3.

Goldfarb, Avi and John Lindsay. “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War.” International Security, 3 (2022): 7–50. https://doi.org/10.1162/isec_a_00425.

Gowder, Paul. “Is Legal Cognition Computational?” In Computational Legal Studies: The Promise and Challenge of Data-driven Research, edited by Edward Whalen, 215–230. Cheltenham: Edward Elgar, 2020.

Greenberg, Mark. “Legislation as Communication? Legal Interpretation and the Study of Linguistic Communication.” In Philosophical Foundations of Language in the Law, edited by Andrei Meramor and Scott Soames, 217–256. Oxford: Oxford University Press, 2011.

Greenwalt, Ken. Legal Interpretation: Perspectives from Other Disciplines and Private Texts. Oxford: Oxford University Press, 2010.

Grim, Patrick. “The Buried Quantifier: An Account of Vagueness and the Sorites.” Analysis, 65, no 2 (2005): 95–104. https://doi.org/10.1093/analys/65.2.95.

Gürler, Sercan. “The Problem of Legal Indeterminacy in Contemporary Legal Philosophy.” Annales de la Faculté de Droit d’Istanbul, 57 (2008): 37–64.

Hacker, Phillip. “A Legal Framework for AI Training Data: From First Principles to the Artificial Intelligence Act.” Law, Innovation and Technology 13, no 2 (2020): 257–301. https://doi.org/10.1080/17579961.2021.1977219.

Hale, Baroness Brenda. “Beanstalk or Living Instrument? How Tall Can the European Convention on Human Rights Grow?” Gresham College. https://www.gresham.ac.uk/watch-now/beanstalk-or-living-instrument-how-tall-can-european-convention-on-human.

Hart, HLA. Essays in Jurisprudence and Philosophy. Oxford: Oxford University Press, 1983.

Hart, HLA. The Concept of Law. Oxford: Clarendon Press, 1994.

Heikkilä, Melissa. “Dutch Scandal Serves as a Warning for Europe Over Risks of Using Algorithms.” Politico. https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms.

Henderson, Peter. “Beyond Ads: Sequential Decision-Making Algorithms in Law and Public Policy.” CSLAW’22. https://doi.org/10.1145/3511265.3550439.

Jackson, Eugenie and Christina Mendoza. “Setting the Record Straight: What the COMPAS Core Risk and Need Assessment is and is Not.” Harvard Data Science Review. https://hdsr.mitpress.mit.edu/pub/hzwo7ax4/release/7

Kattan, Michael W, Dennis A Adams and Michael Parks. “A Comparison of Machine Learning with Human Judgment.” Journal of Management Information Systems, 9, no 4 (1993): 37–57. https://doi.org/10.1080/07421222.1993.11517977.

Keil, Geert and Ralf Poscher. “Vagueness and Law.” In Vagueness and Law, edited by Geert Keil and Ralf Poscher, 1–22. Oxford: Oxford University Press, 2016.

Lanius, David. Strategic Indeterminacy in the Law. Oxford: Oxford University Press, 2019.

Letsas, George. “The ECHR as a Living Instrument: Its Meaning and Legitimacy.” In Constituting Europe: The European Court of Human Rights in a National, European and Global Context, edited by Andreas Follesdal and Brigit Peters. Cambridge: Cambridge University Press, 2013.

Liu, Han-Wei, Ching-Fu Lin and Yu-Jie Chen. “Beyond State v Loomis: Artificial Intelligence, Government Algorithmization and Accountability.” International Journal of Law and Information Technology, 2 (2019): 122–141. https://doi.org/10.1093/ijlit/eaz001.

Ludwig, Kirk. “Vagueness and the Sorites Paradox: Philosophical Perspectives.” Language and Mind, 16 (2002): 419–461.

McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Boca Raton, FL: CRC Press, 2004.

Monti, Rocco. “Umberto Eco and the Aesthetics of Vagueness.” European Journal of Pragmatism and American Philosophy, 13, no 1 (2021): 1–16. https://doi.org/10.4000/ejpap.2306.

Murphy, Joseph. “The Duty of the Government to Make the Law Known.” Fordham Law Review, 51, no 2 (1982): 255–292.

Nicholas, Amanda. “Dealing with Ambiguous or Vague Information.” AI Buzz. https://www.ai-buzz.com/dealing-with-ambiguous-or-vague-information

Nicholas, Nick. “Lojban as a Machine Translation Interlanguage in the Pacific.” Fourth Pacific Rim International Conference on Artificial Intelligence (1996): 31–39.

Nico, Luck. Machine Learning-Powered Artificial Intelligence in Arms Control. Frankfurt: Peace Research Institute, 2019.

Ongenae, Kevin. “AI Arbitrators ... ‘Does not compute’.” In Artificial Intelligence and the Law, edited by Jan Bruyne and Cedric Vanleenhove, 101–122. Cambridge: Intersentia, 2021.

Otakpor, Nikeonye. “On Indeterminacy in Law.” Journal of African Law, 32, no 1 (1998): 112–121. https://doi.org/10.1017/S0021855300010251.

Rachovitsa, Adamantia and Niclas Johann. “The Human Rights Implications of the Use of AI in the Digital Welfare State: Lessons Learned from the Dutch SyRI Case.” Human Rights Law Review, 22, no 2 (2022): 1–15. https://doi.org/10.1093/hrlr/ngac010.

Registry. European Convention on Human Rights: A Living Instrument. https://edoc.coe.int/en/european-convention-on-human-rights/8528-the-european-convention-on-human-rights-a-living-instrument.html

Rehnquist, William. “The Notion of a Living Constitution.” Harvard Journal of Law & Public Policy, 29, no 2 (2006): 401–415.

Riner, Reed. “Loglan and the Option of Clarity: A Genuinely User-Friendly Language for Humans and Their Machines.” A Review of General Semantics, 47, no 3 (1990): 269–279.

Russell, Bertrand. The Basic Writings of Bertrand Russell. London: Routledge, 2009.

Scalia, Antonin. “Common Law Courts in a Civil Law System: The Role of United States Federal Courts in Interpreting the Constitution and Laws.” In A Matter of Interpretation: Federal Courts and the Law, edited by Amy Gutman, 3–48. Princeton, NJ: Princeton University Press, 1998.

Schiffer, Stephen. “Vagueness and Partial Belief.” Philosophical Issues, 10 (2000): 220–257.

Sen, Pratap Chandra. “Supervised Classification Algorithms in Machine Learning: A Survey and Review.” In Emerging Technology in Modelling and Graphics, edited by Kumar Mandal and Debika Bhattacharya. Cham: Springer, 2020.

Shahid, Masuma. “Equal Marriage Rights and the European Courts.” ERA Forum, 23 (2023): 397–411. https://doi.org/10.1007/s12027-023-00729-w.

Shapiro, Scott J. “The ‘Hart–Dworkin’ Debate: A Short Guide for the Perplexed.” In Ronald Dworkin: Contemporary Philosophy in Focus, edited by Arthur Ripstein, 2–55. Cambridge: Cambridge University Press, 2007.

Shobe, Jarrod. “Intertemporal Statutory Interpretation and the Evolution of Legislative Drafting.” Columbia Law Review, 114, no 807 (2014): 627–736.

Skitka, Linda, Kathleen Mosier and Mark Burdick. “Accountability and Automation Bias.” International Journal of Human-Computer Studies, 4 (2000): 701–717. https://doi.org/10.1006/ijhc.1999.0349.

Soames, Scott. “Vagueness and the Law.” In The Routledge Companion to Philosophy of Law, edited by Andrei Marmor, 95–108. London: Routledge, 2012.

Solan, Lawrence. “Vagueness and Ambiguity in Legal Interpretation.” In Vagueness in Normative Texts, edited by Vijay Bhatia, 73–97. New York: Peter Lang, 2005.

Solum, Lawrence. “On the Indeterminacy Crisis: Critiquing Critical Dogma.” University of Chicago Law Review, 54, no 462 (1987): 462–503.

Sorensen, Roy. “The Sorites Argument.” In Companion to Metaphysics, edited by Ernest Sosa and Gary Rosenkrantz, 565–566. Chichester: Wiley Blackwell, 2009.

Stafford v The United Kingdom, ECtHR Application no 46295/99.

Surden, Harry. “Artificial Intelligence and Law: An Overview.” Georgia State University Law Review, 35, no 4 (2019): 1306–1337.

Tiersma, Peter M. Legal Language. Chicago: University of Chicago Press, 2000.

Venice Commission. Rule of Law Checklist. https://www.venice.coe.int/webforms/documents/?pdf=CDL-AD(2016)007-e

Waismann, Friedrich. “Verifiability.” Journal of Symbolic Logic, 10, no 3 (1947): 119–150.

Waismann, Friedrich. Philosophical Papers. Dordrecht: D. Reidel, 1977.

Waldron, Jeremy. “Vagueness in Law and Language: Some Philosophical Issues.” California Law Review, 82, no 3 (1994): 509–540.

Williamson, Timothy. The Problems of Philosophy: Vagueness. London: Routledge, 1996.

Withorne, Jamie. Machine Learning Applications in Nonproliferation: Assessing Algorithmic Tools for Strengthening Strategic Trade Controls. CNS, 2020.

Wu, Tim. “Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems.” Columbia Law Review, 119, no 7 (2019): 2001–2028.

Yeung, Douglas. “Identifying Systemic Bias in the Acquisition of Machine Learning Decision Aids for Law Enforcement Applications.” Perspective: Expert Insights on Timely Policy Issues (2021): 1–24. https://www.rand.org/pubs/perspectives/PEA862-1.html.

Legal materials

Christine Goodwin v The United Kingdom, ECtHR Application no 28957/95.


[1] McCorduck, “Machines Who Think,” 375.

[2] Shapiro, “The “Hart–Dworkin” Debate,” 22–55.

[3] Ellison, “Compositionality, Metaphor, and the Evolution of Language.”

[4] Or, as Eco, “Travels in Hyperreality,” 140 puts it, without a proper Code between the communicators.

[5] Keil, “Vagueness and Law.”

[6] Choi, “Strategic Vagueness in Contract Design”; Lanius, “Strategic Indeterminacy in the Law.”

[7] For an overview, see Nicholas, “Dealing with Ambiguous or Vague Information”; Flasiński, “On the Learning of Vague Languages for Syntactic Pattern Recognition.”

[8] See, for example, the Hart–Dworkin debate that was about language as much as the nature of law. Or see an overview in Otakpor, “On Indeterminacy in Law.”

[9] “Our law is a law of words”: Tiersma, “Legal Language.”

[10] Waldron, “Vagueness in Law and Language.”

[11] Greenberg, “Legislation as Communication?”

[12] Murphy, “The Duty of the Government to Make the Law Known.”

[13] Brozek, “Defeasibility of Legal Reasoning,” 34.

[14] Keil, “Vagueness and Law,” 2.

[15] Waldron, “Vagueness in Law and Language.”

[16] Keil, “Vagueness and Law.”

[17] Williamson, “The Problems of Philosophy: Vagueness.”

[18] Sorensen, “The Sorites Argument.”

[19] Ludwig, “Vagueness and the Sorites Paradox.”

[20] Ludwig, “Vagueness and the Sorites Paradox”; see also Bertrand Russell, The Basic Writings of Bertrand Russell, 106. Russell points out that language describes objects; however, objects suffer from temporal and spatial instability due to the nature of physics, so the object described never exists exactly as the language describes it.

[21] Grim, “The Buried Quantifier.”

[22] Keil, “Vagueness and Law,” 3.

[23] Tiersma, “Legal Language,” 71.

[24] Lanius, “Strategic Indeterminacy in the Law,” 21.

[25] Waldron, “Vagueness in Law and Language.”

[26] Lanius, “Strategic Indeterminacy in the Law.”

[27] Lanius, “Strategic Indeterminacy in the Law.”

[28] Gürler, “The Problem of Legal Indeterminacy.”

[29] Solum, “On the Indeterminacy Crisis.”

[30] Solum, “On the Indeterminacy Crisis.”

[31] Waldron, “Vagueness in Law and Language.”

[32] Waldron, “Vagueness in Law and Language,” 512.

[33] Dabkowski, “A Wittgensteinian Look on Vagueness.”

[34] Waldron, “Vagueness in Law and Language,” 513.

[35] Solan, “Vagueness and Ambiguity in Legal Interpretation,” 73.

[36] Waldron, “Vagueness in Law and Language,” 513.

[37] The notion of contestability has been taken to its more extreme notion by a pragmatic view of Schiffer, who claims that any vague case has just one way of being solved by a judge, simply based on what the judge holds to be correct in their internal view; thus, they are essentially always making a value-based judgement. See Schiffer, “Vagueness and Partial Belief”; Greenwalt, “Legal Interpretation,” 45.

[38] See Lanius, “Strategic Indeterminacy in the Law,” 149; at one point, Lanius introduces these terms as interchangeable.

[39] Waismann, “Verifiability.”

[40] Hart, “The Concept of Law,” 124; for the relation to Waismann, see Bix, “HLA Hart and the ‘Open Texture’ of Language.”

[41] Endicott, “Vagueness in Law,” 37.

[42] Endicott, “Vagueness in Law,” 37.

[43] Soames, “Vagueness and the Law,” 99.

[44] Hart, “The Concept of Law,” emphasis added.

[45] Hart, “The Concept of Law,” 123.

[46] Bix, “Law, Language, and Legal Determinacy,” 22–25.

[47] Waismann, Philosophical Papers, 13.

[48] Hart, “The Concept of Law,” 128. These are the general reasons for the rise of the “problematic” language given and analysed by the aforementioned authors; however, they are not the only ones that can be identified. Another reason could be the need for a political agreement during the drafting of the relevant law. While the sources may vary, their resulting effect, the need for a close judgement by the applying body, remains. See further Shobe, “Intertemporal Statutory Interpretation and the Evolution of Legislative Drafting.”

[49] Waldron, “Vagueness in Law and Language.”

[50] Eco, “Travels in Hyperreality,” 44.

[51] Eco, “Travels in Hyperreality,” 140.

[52] Eco, “Travels in Hyperreality,” 140.

[53] Eco, “Theory of Semiotics,” 48–50.

[54] Rehnquist, “The Notion of a Living Constitution.” This doctrine essentially argues for an evolutive interpretation, one for which the text is not as important as the societal setting in which it is interpreted, meaning that the interpretation of terms and rules can and should change over time.

[55] Letsas, “The ECHR as a Living Instrument,” 106.

[56] Scalia, “Common Law Courts in a Civil Law System.”

[57] Registry, “European Convention on Human Rights.”

[58] Bjorge, “Domestic Application of the ECHR,” 131.

[59] Christine Goodwin v The United Kingdom and Stafford v The United Kingdom.

[60] Christine Goodwin v The United Kingdom and Stafford v The United Kingdom.

[61] Christine Goodwin v The United Kingdom and Stafford v The United Kingdom.

[62] Such power of the court, however, brings with it the necessary discussion of what constitutes the core of such law. Letsas, “The ECHR as a Living Instrument.”

[63] Shahid, “Equal Marriage Rights and the European Courts.”

[64] Such a core function, to be able to adapt the legislation to “socially desirable outcome” that could not possibly have been foreseen by the legislator, is identified as the main function of the open texturedness of language. See Hart, “Essays in Jurisprudence and Philosophy,” 269–270.

[65] Waldron, “Vagueness in Law and Language,” 527.

[66] Even though Waldron does not further such argumentation, one could perhaps add that it is possible mainly due to the intimate knowledge of the given case at hand by the applying body, thus fulfilling the Derridean condition of justice as an individual and individualised re-discovery of the law in every judgement and every case at hand. See Derrida, “Deconstruction and the Possibility of Justice.”

[67] Lanius, “Strategic Indeterminacy in Law,” 149.

[68] Soames, “Vagueness and the Law.”

[69] Asgeirsson, “Can Legal Practice Adjudicate Between Theories of Vagueness?”

[70] Asgeirsson, “Can Legal Practice Adjudicate Between Theories of Vagueness?”

[71] Soames, “Vagueness and the Law,” 95–103.

[72] Context sensitivity refers to the discretion of a speaker using the term. Not to be confused with the statutory interpretation rule of noscitur a sociis that requires the judge to look at the immediate surrounding terms. See Barnes, “Modern Statutory Interpretation,” 270.

[73] See Derrida, “Deconstruction” for the individualisation of the judgment as a prerequisite of justice.

[74] Given the scope of this article, we have considered only the relation of the law-maker and the body applying the law; however, this is not to say that the use of an indeterminate language has no value for the people to whom the law is addressed. See Endicott, “The Value of Vagueness.”

[75] Some of the mentioned theories went so far as to suggest that it reflects the indeterminate nature of reality. Russell, The Basic Writings of Bertrand Russell, 106.

[76] Riner, “Loglan and the Option of Clarity,” 275.

[77] Riner, “Loglan and the Option of Clarity,” 275.

[78] Venice Commission, Rule of Law Checklist, 15.

[79] Besides being one of the most developed, it has become popular through cultural references, such as in the book The Moon is a Harsh Mistress by Robert Heinlein, where the supercomputer Mycroft Holmes uses this language.

[80] Nicholas, “Lojban as a Machine Translation Interlanguage in the Pacific.” This idea has gained some traction even within the community of Google Translate developers: see, for example, https://groups.google.com/g/opencog/c/7mwTsU6E3iA?pli=1.

[81] Riner, “Loglan and the Option of Clarity.”

[82] Goertzel, “Lojban++.”

[83] Due to the nature of representative democracy, it is necessary to translate the law into the algorithm as opposed to, for example, merely letting the body decide once and make that into the rule. This would also side-step the issue of necessary individual and individualised decision-making as a prerequisite of justice and public administration, a key aspect of this article. Further, such a one-off solution ignores the need for evolutive interpretation.

[84] For a general study of problematic language and “digital” law, see Ashley, Artificial Intelligence and Legal Analytics.

[85] See Davenport, “Automated Decision Making Comes of Age,” an article from 2005 that is already providing historical genesis.

[86] Edwards, “Transparency in Language Generation” and, specifically for law, Eliot, “Antitrust and Artificial Intelligence (AAI).”

[87] Henderson, “Beyond Ads.”

[88] The system became rather controversial after issuing an alarming number of false-positive fraud identifications, as well as for its breach of the right to privacy. While these issues are important for automated decision-making, they will not be considered further here. See Bekkum, “Digital Welfare Fraud Detection and the Dutch SyRI Judgment” and Heikkilä, “Dutch Scandal Serves as a Warning for Europe.”

[89] Rachovitsa, “The Human Rights Implications of the Use of AI.”

[90] Böhmer, “Helping Police Make Custody Decisions Using Artificial Intelligence.”

[91] Jackson, “Setting the Record Straight.”

[92] Equivant, Practitioner’s Guide to COMPAS Core.

[93] Binns, “Is That Your Final Decision?”

[94] Skitka, “Accountability and Automation Bias.”

[95] Liu, “Beyond State v Loomis.”

[96] This would be the expected development given the ever-extending technological penetration of society. See Brownsword, “Law 3.0.”

[97] “Algorithm” is a more general term that also encompasses AI. Given not only current technical advances but also the workings of some existing automated decision-making models, we may assume that one of the approaches of AI will most likely be adopted for automating (legal) decision-making. For the relationship between algorithms and AI, see Yeung, “Identifying Systemic Bias.”

[98] Waldron, “Vagueness in Law and Language.”

[99] Waldron, “Vagueness in Law and Language,” 520.

[100] Christie, “Vagueness and Legal Language,” 906.

[101] Withorne, “Machine Learning Applications in Nonproliferation,” 9–12.

[102] Nico, “Machine Learning-Powered Artificial Intelligence in Arms Control,” 2–7.

[103] Surden, “Artificial Intelligence and Law.”

[104] Hacker, “A Legal Framework for AI Training Data.”

[105] For a technical perspective, see Flasiński, “On the Learning of Vague Languages.”

[106] It is worth noting that approaches other than Waldron’s can still lend themselves beneficially to our analysis. Consider Bix’s interpretation and his account of (Fregean) linguistic-logical approaches to the issue of the open-textured nature of language and the determinacy of law. Bix suggests that judges create a list of similar/relevant applications of the terms at hand and a list of dissimilar cases, identify the common characteristics within each group, and subsequently look for those characteristics when classifying the case at hand – thus essentially mimicking a machine learning classification model. See Bix, “Law, Language, and Legal Determinacy.”

[107] Monti, “Umberto Eco and the Aesthetics of Vagueness.”

[108] See Lady Hale’s lecture on the limits of such interpretation in Hale, “Beanstalk or Living Instrument?”

[109] A practice in which the output of a machine learning model is used as training data input, creating a feedback loop. In this article, the effect of staleness will not require that subsequent step, but merely a locking in of the existing interpretation presented in the training data – thus creating a certain feedback loop. See Davis, “Of Robolawyers and Robojudges.”

[110] Ongenae, “AI Arbitrators.”

[111] Dervanović, “I, Inhuman Lawyer.”

[112] Gowder, “Is Legal Cognition Computational?”

[113] This also brings to the forefront the discussion of necessary individualisation of the decision as a prerequisite of a just decision: see Derrida, Deconstruction.

[114] Balagopalan, “Judging Facts, Judging Norms.”

[115] Goldfarb, “Prediction and Judgment.”

[116] Chadha-Sridhar, “The Value of Vagueness: A Feminist Analysis.”

[117] Even though the effectiveness or the extent of applicability of more precise statutes could have been questioned.

[118] Schiffer, “Vagueness and Partial Belief.”

[119] Waldron, “Vagueness in Law and Language,” 526.

[120] Bělohradský, Čas Pléthokracie, 267–298.

[121] Wu, “Will Artificial Intelligence Eat the Law?”

[122] Kattan, “A Comparison of Machine Learning with Human Judgment.”

[123] Cofone, “Algorithmic Discrimination is an Information Problem.”


URL: http://www.austlii.edu.au/au/journals/LawTechHum/2024/8.html