iDecide: the Legal Implications of Automated Decision-making*
University of Cambridge
Cambridge Centre for Public Law Conference 2014:
Process and Substance in Public Law
15-17 September 2014
The Hon Justice Melissa Perry[2]
Justice of the Federal Court of Australia
Alexander Smith[3]
(1) Decision automation in context
It was a watershed moment when IBM's supercomputer 'Deep Blue' defeated world chess champion Garry Kasparov in 1997. Having programmed Deep Blue with the full history of Kasparov's previous public matches and style,[4] IBM's engineers demonstrated to the world that computers can make decisions in ways that outperform even the best of human minds.
You don't, of course, need to be a chess grand master to encounter automated systems in the world today. Computers assist us with decisions that shape our everyday lives, from the directions we follow when driving, to suggestions as to the books and music we might enjoy. We expect instantaneous matches to our queries on Google, and anything we might wish to buy to be available online at the click of a button.
These expectations are not limited to the private sector. Government departments and agencies are also expected to have an online presence, with their modes of service delivery not only available 24/7, but reduced to an 'app' downloadable for free. This trend corresponds with tight fiscal constraints on governments globally and with rapid growth in the volume, complexity and subject-matter of legislation and government decisions affecting private and commercial rights and interests.
It comes then as no surprise that governments have increasingly sought to utilise automated processes which employ coded logic and data-matching to make, or assist in making, decisions. At one end of the spectrum, these systems can be used to aid or guide a human decision-maker; at the other, they may be used in lieu of a human decision-maker. They may also be integrated at different stages of a decision-making process with differing degrees of human oversight and verification.
The great benefit of these systems is that they can process large amounts of data more quickly, more reliably and less expensively than their human counterparts. They "come into their own" when high frequency decisions need to be made by government.
For example, rule-based systems are used extensively in Australia to assess eligibility for social security payments. These include self-service options. So "Jo Bloggs", who receives welfare benefits, updates information about her income and employment status every fortnight online. Alternatively, she rings the agency administering her payments and updates her information by choosing the relevant options through the use of natural speech recognition technology or the telephone keypad. In each of these scenarios, the system automatically processes the information she gives to reassess her fortnightly entitlement to social welfare payments. The kinds of transactions undertaken by Jo Bloggs were replicated many tens of millions of times over the course of the last year. The Australian Government's Department of Human Services hosted over 74.5 million such digital self-managed transactions, and its self-service mobile apps were downloaded more than 750,000 times.[5]
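By way of illustration only, the following sketch shows the kind of fixed, rules-based recalculation such a self-service system performs. The payment rate, income-free area and taper rate below are invented for this example and do not reflect any actual social security rules.

```python
# Hypothetical figures only - not the rules applied by any actual agency.
FULL_RATE = 500.00         # maximum fortnightly payment (invented)
INCOME_FREE_AREA = 150.00  # income disregarded before any reduction (invented)
TAPER_RATE = 0.50          # reduction per dollar of income above the free area (invented)

def reassess_entitlement(fortnightly_income: float) -> float:
    """Recalculate a fortnightly entitlement from the recipient's self-reported income."""
    excess = max(0.0, fortnightly_income - INCOME_FREE_AREA)
    return max(0.0, FULL_RATE - excess * TAPER_RATE)

# Each online or telephone update simply re-runs the same predetermined rules:
print(reassess_entitlement(200.00))  # 475.0
```

The point is not the arithmetic but its character: the outcome is wholly determined by the coded rules and the information entered, with no human judgment interposed.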
To give another example, the Australian Taxation Office (ATO) has introduced an online 'e-tax' system to assist taxpayers in completing their annual self-assessed tax returns. At first glance, e-tax might not appear to be an automated administrative decision-making tool.[6] But, with each click of a button or selection from the telephone menu, Jo Bloggs is guided by electronic coding through different alternative pathways and, based on her answers, skips past options that the system determines are not relevant. So, like sheep being corralled through a pen, these systems effectively close off irrelevant gateways as Jo Bloggs progresses through the matrix of pre-programmed pathways, and the process is completed with the assessment of her income tax or refund for that financial year, and calculation of any penalties.
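Again purely by way of illustration, the 'corralling' of the user through pre-programmed pathways can be pictured as a simple decision tree in which each answer closes off the branches the system treats as irrelevant. The questions and branches below are invented; they are not the ATO's actual e-tax logic.

```python
# A schematic, invented pathway - not the ATO's actual e-tax questions or logic.
PATHWAYS = {
    "start": {"question": "Did you earn employment income this year?",
              "yes": "deductions", "no": "investments"},
    "deductions": {"question": "Do you wish to claim work-related deductions?",
                   "yes": "investments", "no": "investments"},
    "investments": {"question": "Did you receive interest or dividends?",
                    "yes": "finish", "no": "finish"},
}

def traverse(answers):
    """Walk the pathway; branches not selected are never presented to the user."""
    step = "start"
    for answer in answers:
        print(PATHWAYS[step]["question"], "->", answer)
        step = PATHWAYS[step][answer]
    return step

traverse(["no", "no"])  # a user with no employment income skips the deductions gateway entirely
```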
The benefits to be achieved by the use of systems such as these can again be illustrated by the experience of the ATO, which now utilises over 600 different systems to assist in the collection of revenue. To take but one example, in the 2012-13 financial year alone, the ATO's automated systems led to data-matching audits that resulted in the collection of approximately AU$514 million in additional revenue.
Despite the prevalence of these kinds of systems, people like Jo Bloggs and those advising them may well be unaware of the use of these systems in decisions directly affecting them. Indeed, it is fair to say that generally speaking the rise of automated decision-making systems per se has caught administrative lawyers off-guard.
The reality is that growth in this area commenced well before public law lawyers and academics began properly to reflect on how these systems interrelate with administrative law principles. In Australia, the Administrative Review Council (ARC) produced a ground-breaking report on Automated Assistance in Administrative Decision-Making in 2004. This appears to have been the first report to systematically review the administrative law implications of automated decision-making systems. This report in turn led to the establishment of a Working Group which launched a Better Practice Guide in 2007, again the first of its kind, to assist Australian agencies in the successful deployment of automated systems.[7] Nonetheless, at the time that the report and Better Practice Guide were produced, the use of automated systems was already well entrenched in Australian government and no doubt in the machinery of government employed in other nations also. Yet, as the ARC recognised, errors in computer programming and in the translation of complex laws into binary code can result in wrong decisions potentially on an enormous scale if undetected. Input errors may also lead to flawed decisions. Nor are all decisions by government of such a nature that they can appropriately or fairly be made by automated systems. The use of these systems by governments therefore raises questions as to the measures necessary to ensure their compatibility with the core administrative law values or principles that underpin a democratic society governed by the rule of law, in particular:
- to ensure the legality of purported actions by public bodies;
- to guard against the potential erosion of procedural fairness; and
- to safeguard the transparency and accountability of government decisions by the provision of reasons and effective access to merits and judicial review.
In this regard, I don't suggest that I have all of the answers.
The key point that I wish to emphasise is the importance of ensuring and facilitating a dialogue between legal practitioners, legislators and bureaucrats about these matters. As George Orwell wrote in his classic book 1984, '[i]f you kept the small rules, you could break the big ones'. So, as governments continue to 'outsource' their decision-making functions to expert automated systems, we as lawyers and academics must proactively turn our minds to the legal implications. We cannot permit a situation to arise where courts lose the power to control the executive: we have 'a duty to develop the law', as Lord Justice Laws said yesterday in conversation with Professor Feldman in the opening plenary session of this conference.
Time will permit me to focus today only upon legality and, as an aspect of legality, questions of substantive justice: when, in other words, are these automated processes appropriately employed? I have considered particular issues associated with the remaining values and principles in my written paper with Alexander Smith.
(2) Legality
Ensuring that decisions made by pre-programmed systems are made within lawful boundaries raises a number of specific challenges.
(a) Need for specific authority
First, those who purport to exercise public decision-making powers must be authorised to do so as a matter of law. As Lord Justice Laws remarked in his address yesterday, when it comes to the State, a positive justification is required for legal action. This principle constitutes an aspect of the overriding principle of legality.
Equally, if an automated system is being utilised to make part or all of a decision, the use of that system must be authorised. It cannot be assumed that a statutory authority vested in a senior public servant, which extends by implication to a properly authorised officer, will also extend to an automated system; nor that authority to delegate to a human decision-maker will permit 'delegation' to an automated system. Authority to use such systems should be transparent and express.
Examples of express authority are beginning to appear on the pages of the statute books and we refer to some examples in the paper. But it is by no means clear that the issue is being dealt with comprehensively.
The concept of "delegating" a decision to an automated system, in whole or in part, raises a number of unique problems.
- For example, who is the 'decision maker'?
- To whom has authority been delegated, if that is indeed the correct analysis?
- Is it the programmer, the policy maker, the authorised decision-maker, or the computer itself?
- Is the concept of delegation appropriately used in this context at all? After all, unlike human delegates, a computer programme can never truly be said to act independently of its programmer or the relevant government agency.
- What if a computer process determines some, but not all, of the elements of the administrative decision? Should the determination of those elements be treated as the subject of separate decisions from those elements determined by the human decision-maker?
An examination of the statute books throws up an increasing number of provisions addressing these issues by such mechanisms as deeming a decision made by the operation of a computer programme to be a decision made by the human decision-maker. Nonetheless such deeming provisions require acceptance of highly artificial constructs of decision-making processes. More sophisticated approaches may need to be developed as these issues come to be litigated in the courts and these provisions fall to be construed and applied.
(b) Lost in translation: from law to code
Questions of legality posed in the context of automated systems go beyond identifying the source of the authority for their use. One of the greatest challenges is to ensure accuracy in the substantive law applied by such processes.
In any process of translation, shades of meaning may be lost or distorted. In our increasingly culturally diverse societies, the challenge of ensuring the accurate and fair translation of proceedings in a court or tribunal into different human languages is confronted daily. Yet the failure to achieve effective communication by reason of inadequate interpretation can result in a hearing that is procedurally unfair and may, in reality, be no hearing at all.
The rise of automated decision-making systems raises an equivalent but potentially more complex question of what may have been lost or altered in the process of digital translation. In building these systems, computer programmers effectively assume responsibility for translating policy and law into code. Yet computer programmers are not policy experts and seldom have legal training. How can we be sure that complex, even labyrinthine, regulations are accurately transposed into binary code? Even lawyers and judges frequently disagree on meaning, and the process of statutory construction is not concerned only with the ordinary meaning of words. Laws are interpreted in accordance with statutory presumptions. Meaning is also affected by context. Apparent conflicts between statutory provisions may need to be resolved, and the hierarchy between provisions determined. These are not necessarily simple questions, and the potential for coding errors is real.
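To make the risk concrete, consider a deliberately simple and entirely hypothetical example: a provision conferring a benefit on a person who has been resident for 'not less than 10 years'. A one-character slip in the coded test silently excludes every applicant at the statutory boundary:

```python
# Hypothetical provision: eligibility requires residence of 'not less than 10 years'.
def eligible_as_enacted(years_resident: int) -> bool:
    return years_resident >= 10   # 'not less than 10' includes exactly 10

def eligible_as_coded(years_resident: int) -> bool:
    return years_resident > 10    # a one-character coding error excludes the boundary case

print(eligible_as_enacted(10), eligible_as_coded(10))  # True False
```

The error is invisible for almost every applicant, wrong for every applicant at the margin, and replicated, without further human attention, in every decision the system makes.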
Moreover, laws are not static and complex transitional provisions may need to be accommodated, along with relevant common law presumptions. Such systems will need to be kept up to date while maintaining the capacity to apply the law as it stands at previous points in time for decisions caught by transitional arrangements.
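One way such systems might accommodate this, sketched below with invented dates and rates, is to key each version of a rule to its commencement date and select the version in force for the period that governs the decision:

```python
# Invented dates and rates, for illustration only.
from datetime import date

RULE_VERSIONS = [
    (date(2010, 7, 1), 450.00),  # rate applying from 1 July 2010 (invented)
    (date(2012, 7, 1), 480.00),  # rate applying from 1 July 2012 (invented)
    (date(2014, 7, 1), 500.00),  # rate applying from 1 July 2014 (invented)
]

def rate_as_at(relevant_date: date) -> float:
    """Apply the version of the rule in force on the date that governs the decision."""
    in_force = [rate for commenced, rate in RULE_VERSIONS if commenced <= relevant_date]
    if not in_force:
        raise ValueError("No rule in force on that date")
    return in_force[-1]

print(rate_as_at(date(2013, 3, 1)))  # 480.0 - the 2012 rules still govern this decision
```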
Conversely, it is no coincidence that the government agencies relying most heavily upon automated processes are the same agencies that apply the most complex, intricate and voluminous legislation. It is here that potential gains in efficiency stand to be achieved. For example, the Australian Department of Veterans' Affairs established an automated claims processing system to automate certain aspects of its assessment and determination of compensation claims from veterans and their families.[8] The system guides decision-makers in applying over 2,000 pages of legislation and over 9,700 different rules. The efficiency gains have been substantial. The Department now determines 30% more claims annually using 30% fewer human resources in substantially less time, resulting in departmental savings of approximately AU$6 million each year.[9]
Yet it must also be borne in mind that, while the strength of automation lies in its capacity to deliver greater efficiencies of scale, this is also its weakness. This is particularly so given our tendency to trust the reliability of computers. As Carol said in Little Britain (for those here who are fans), 'Computer says no…'
Together these factors may mean that programming errors are replicated, undetected, across many thousands of decisions. In one example to which we refer in the written paper, an error in the Colorado Benefits Management System (CBMS) administered by the State of Colorado in the United States caused hundreds of thousands of erroneously calculated Medicaid, welfare and benefits decisions to issue.[10] As one commentator, Danielle Citron, has troublingly pointed out, '[h]ad the failure of [that system] been less catastrophic, and thus less noticeable, the system's invalid rules might well have remained hidden.'[11]
These considerations highlight the importance of lawyers being involved in the design and maintenance of software applied in these kinds of decision-making processes, and in the legislative frameworks within which they operate. In a society governed by the rule of law, administrative processes need to be transparent, and accountability for their results facilitated. Proper verification and audit mechanisms need to be integrated into the systems from the outset, and appropriate mechanisms for human review in the individual case put in place. If such steps are taken - if the proper safeguards are in place - automated decision-making may, in fact, promote transparency and accountability, while enabling the greater efficiencies offered by the use of such processes to be achieved.
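What such verification and audit mechanisms might look like in practice will vary, but at a minimum each automated decision can record the information acted upon, the version of the coded rules applied and the outcome, so that programming errors can be detected and the individual decision reviewed. The following is a bare sketch of that idea, not a description of any agency's actual system:

```python
# A bare, illustrative audit-trail mechanism - not any agency's actual system.
import json
from datetime import datetime, timezone

AUDIT_LOG = []

def decide_with_audit(inputs: dict, rule_version: str, decide) -> object:
    """Apply the supplied decision function and record an auditable trace of the decision."""
    outcome = decide(inputs)
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "rule_version": rule_version,
        "inputs": inputs,
        "outcome": outcome,
    })
    return outcome

decide_with_audit({"fortnightly_income": 200.0}, "v2014.1",
                  lambda i: i["fortnightly_income"] < 300.0)
print(json.dumps(AUDIT_LOG[-1], indent=2))
```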
(c) Substantive fairness: when is it appropriate to use automated systems?
I'd like then to say a few words about when it may be appropriate to use automated systems.
Automated decision-making systems are grounded in logic and rules-based programs that apply rigid criteria to factual scenarios. Importantly, they respond to input information entered by a user in accordance with 'predetermined' outcomes. By contrast, many administrative decisions require the exercise of a discretion or the making of an evaluative judgment. Is a person 'a fit and proper person' to hold a licence? Are a couple in a 'de facto' relationship? These are complex and subtle questions incapable of being transcribed into rigid criteria or rules and are, therefore, beyond the capacity of an automated system to determine. Different factors may need to be weighed against each other and may be finely balanced. If automated systems were used in cases of this kind, not only would there be a constructive failure to exercise the discretion; the application of predetermined outcomes may also be characterised as prejudgment or bias.
The reasons why Parliament may provide for the exercise of a power by reference to discretionary and evaluative factors must also be kept in mind. They can equip the decision-maker with the means of reaching a decision in the individual case that best achieves the purposes of the power in question and that accords with community values and expectations, as well as considerations of fairness and common sense.
These considerations point to a further risk to which lawyers, policy-makers and legislators should be alert. It is not difficult to envisage that the efficiencies which automated systems can achieve and the increasing demand for such efficiencies may overwhelm an appreciation of the value of achieving substantive justice for the individual. In turn this may have the consequence that rules-based laws and regulations are too readily substituted for discretions in order to facilitate the making of automated decisions in place of decisions by humans. The same risks exist with respect to decisions which ought properly to turn upon evaluative judgments.
Legislative amendments directed towards facilitating greater automation by requiring the application of strict criteria in place of the exercise of a discretion or value-based judgment, should therefore be the subject of careful scrutiny in order to protect against the risk that the removal of discretionary or evaluative judgments may result in unfair or arbitrary decisions.
Conclusion
It was not immediately obvious, when the IBM supercomputer 'Deep Blue' defeated Kasparov, that the computer had a team of chess experts and programmers altering the engineering between games. As the guardians of our administrative law traditions, public law lawyers, academics and the courts must act as the experts who manually alter the engineering between 'games'. While acknowledging that employing automated decision-making can promote consistency, accuracy, cost effectiveness and timeliness in the making of decisions by government, we must also ensure that those systems are dynamic and recognise that we are responsible for the inputs and the outputs. Achieving efficiencies should not be allowed to compromise society's commitment to an administrative law system that is fair, transparent and accountable and that operates according to the rule of law. Safeguards need to be in place in order to ensure that the individual's right to hold decision-makers to their obligations is preserved.
'iDecide' is companionship with technology: we decide to take data and imagination and make the moves; we must not allow ourselves to become pawns in the game.
ENDNOTES
* This is the paper as presented orally by Justice Perry which is based upon, and reproduces in part, the detailed paper jointly authored by Justice Perry and Alexander Smith for the Cambridge Public Law Conference in September 2014. The detailed paper has been submitted for publication.
[2] LL.B (Hons) (Adel), LL.M, PhD (Cantab), FAAL.
[3] MPP Candidate (Harvard), LL.M (Cantab), LL.B (Hons), BA (Bond).
[4] Laurence Aung, 'Deep Blue: The History and Engineering behind Computer Chess' (2012) Illumin.
[5] Department of Human Services, 2012-13 Annual Report (2013) 30, 33, 63.
[6] Australian Government, Automated Assistance in Administrative Decision-Making: Better Practice Guide (2007) 71.
[7] Above n 6 at ii; John McMillan, 'Automated assistance to administrative decision-making: Launch of the better practice guide' (Paper presented at seminar of the Institute of Public Administration of Australia, Canberra, 23 April 2007).
[8] Above n 6, 68-69.
[9] Above n 7, 10.
[10] Cindi Fukami and Donald McCubbrey, 'Colorado Benefits Management System: Seven Years of Failure' (2011) 29 Communications of the Association for Information Systems 5; Danielle Citron, 'Technological Due Process' (2008) 85 Washington University Law Review 1249, 1256.
[11] Citron, above n 10, 1256.
SELECT BIBLIOGRAPHY
Administrative Review Council, Automated Assistance in Administrative Decision Making: Report to the Attorney-General, Report No 46 (2004).
Australian Government, Automated Assistance in Administrative Decision-Making: Better Practice Guide (2007).
Danielle Citron, 'Technological Due Process' (2008) 85 Washington University Law Review 1249 and 'Open Code Governance' [2008] The University of Chicago Legal Forum 362.
Dr Melissa Perry QC, 'Administrative Justice and the Rule of Law: Key Values in the Digital Era' (Paper presented at 2010 Rule of Law in Australia Conference, Sydney, 6 November 2010).
John McMillan, 'Automated assistance to administrative decision-making: Launch of the better practice guide' (Paper presented at seminar of the Institute of Public Administration of Australia, Canberra, 23 April 2007).
Justice Garry Downes, 'Looking Forward: Administrative Decision Making in 2020' (Speech delivered at the 2010 Government Law Conference, Canberra, 20 August 2010) and 'Future Directions in Administrative Law Part 1' [2011] AIAdminLawF 14; (2011) 67 AIAL Forum 35, 36.
Linda Skitka, 'Does automation bias decision-making?' (1999) 51 International Journal of Human-Computer Studies 991, 992.
Thomas Davenport and Jeanne Harris, 'Automated Decision Making Comes of Age' (2005) MIT Sloan Management Review 84.
William Marra and Sonia McNeil, 'Understanding "The Loop": Regulating the Next Generation of War Machines' (2012) 36(3) Harvard Journal of Law and Public Policy.