
University of New South Wales Law Journal

Faculty of Law, UNSW

Huggins, Anna --- "Addressing Disconnection: Automated Decision-Making, Administrative Law and Regulatory Reform" [2021] UNSWLawJl 37; (2021) 44(3) UNSW Law Journal 1048



ADDRESSING DISCONNECTION: AUTOMATED DECISION-MAKING, ADMINISTRATIVE LAW AND REGULATORY REFORM

ANNA HUGGINS*

Automation is transforming how government agencies make decisions. This article analyses three distinctive features of automated decision-making that are difficult to reconcile with key doctrines of administrative law developed for a human-centric decision-making context. First, the complex, multi-faceted decision-making requirements arising from statutory interpretation and administrative law principles raise questions about the feasibility of designing automated systems to cohere with these expectations. Secondly, whilst the courts have emphasised a human mental process as a criterion of a valid decision, many automated decisions are made with limited or no human input. Thirdly, the new types of bias associated with opaque automated decision-making are not easily accommodated by the bias rule, or other relevant grounds of judicial review. This article, therefore, argues that doctrinal and regulatory evolution are both needed to address these disconnections and maintain the accountability and contestability of administrative decisions in the digital age.

I INTRODUCTION

The increasing automation of government processes is transforming administrative decision-making. Governments in Australia and internationally have embraced automated systems that make or facilitate decisions1 due to their potential to reduce the costs, and increase the efficiency and consistency, of decision-making in the public sector. The Australian Government has a whole-

* LLB(Hons)/BInst (UNSW), LLM (QUT), PhD (UNSW), Associate Professor in the School of Law, Faculty of Business and Law, at the Queensland University of Technology, Brisbane, Australia. I would like to thank Professor Nicolas Suzor and Dr Hope Johnson for valuable comments on earlier drafts of this article, and the three anonymous reviewers for their helpful suggestions.

1 An ‘automated system’ describes ‘a computer system that automates part or all of an administrative decision-making process’: Commonwealth Ombudsman, ‘Automated Decision-Making: Better Practice Guide’ (Guide, 2019) 5 (‘Better Practice Guide’). Automated systems can include both pre-programmed rules-based processes, and machine-learning processes that learn from patterns and correlations in historic data: Monika Zalnieriute, Lyria Bennett Moses and George Williams, ‘The Rule of Law and Automation of Government Decision-Making’ (2019) 82(3) Modern Law Review 425, 432–5. See further Part III(A).


of-government vision for digital transformation, and aims to use automation to streamline, link and administer government processes, reducing the need for human manual intervention.2 As Kerr J observed in dissent in the Full Federal Court case of Pintarich v Deputy Commissioner of Taxation (‘Pintarich’), ‘[w]hat was once inconceivable, that a complex decision might be made without any requirement of human mental processes is, for better or worse, rapidly becoming unexceptional’.3 Administrative law provides an important framework for promoting accountability for executive action and protecting individual rights and interests.4 But to what extent are administrative law rules, developed on the assumption that decisions are made by humans and not computers, compatible with computerised government decision-making? Some scholars are optimistic that government agencies’ use of machine-learning technologies ‘can comfortably fit within [the] conventional legal parameters’ of administrative and constitutional law.5 Other commentators have raised concerns that automated decision-making (‘ADM’) is in tension with administrative law rules,6 and rule of law values more broadly.7 The extent to which government agencies’ use of automated systems is congruent with prevailing public law principles is therefore contested, even as governments continue to automate decision-making processes.

In other domains, the problem of disconnection between the law and technology has been described as ‘both acute and chronic’.8 To date, however, the sources, nature and extent of disconnection between ADM and administrative law, the resulting regulatory gaps, and appropriate solutions remain under-explored in the Australian

2 Philip Hamilton, ‘Public Sector Digital Transformation: A Quick Guide’ (Research Paper, Parliamentary Library, Parliament of Australia, 2 April 2019).

3 [2018] FCAFC 79; (2018) 262 FCR 41, 49 [47] (Kerr J) (‘Pintarich’).

4 See, eg, ‘Overview of the Commonwealth System of Administrative Review’, Attorney-General’s Department Administrative Review Council (Web Page) <https://www.arc.ag.gov.au/Aboutus/Pages/OverviewoftheCommonwealthSystemofAdminReview.aspx>, archived at <https://web.archive.org/web/20190519133738/https://www.arc.ag.gov.au/Aboutus/Pages/OverviewoftheCommonwealthSystemofAdminReview.aspx>; Administrative Review Council, ‘Better Decisions: Review of Commonwealth Merits Review Tribunals’ (Report No 39, 14 September 1995) 174 [2].

5 In the United States (‘US’) context, see Cary Coglianese and David Lehr, ‘Regulating by Robot: Administrative Decision Making in the Machine-Learning Era’ (2017) 105(5) Georgetown Law Journal 1147, 1148, 1155.

6 In the Australian context, see, eg, Will Bateman, ‘Algorithmic Decision-Making and Legality: Public Law Dimensions’ (2020) 94(7) Australian Law Journal 520; Lyria Bennett Moses and Edward Santow, ‘Accountability in the Age of Artificial Intelligence: A Right to Reasons’ (2020) 94(11) Australian Law Journal 829; Melissa Perry, ‘iDecide: Administrative Decision-Making in the Digital World’ (2017) 91(1) Australian Law Journal 29; Dominique Hogan-Doran, ‘Computer Says “No”: Automation, Algorithms and Artificial Intelligence in Government Decision-Making’ (2017) 13(3) Judicial Review 345; Katie Miller, ‘The Application of Administrative Law Principles to Technology-Assisted Decision-Making’ [2016] AIAdminLawF 26; (2016) 86 Australian Institute of Administrative Law Forum 20.

7 See, eg, Zalnieriute, Moses and Williams (n 1). Cf Andrew Le Sueur, ‘Robot Government: Automated Decision-Making and Its Implications for Parliament’ in Alexander Horne and Andrew Le Sueur (eds), Parliament: Legislation and Accountability (Hart Publishing, 2016) 183, 191.

8 See, eg, Roger Brownsword, Rights, Regulation, and the Technological Revolution (Oxford University Press, 2008) ch 6 (‘Rights’). See also Lyria Bennett Moses, ‘How to Think about Law, Regulation and Technology: Problems with “Technology” as a Regulatory Target’ (2013) 5(1) Law, Innovation and Technology 1, 7, 19.


context.9 This article contributes to the existing literature by analysing distinctive features of automation that cannot be easily reconciled with administrative law rules, and considering specific doctrinal and regulatory reforms to address these gaps. In particular, it focuses on three salient challenges: (i) the different languages and logics of computer code and law;10 (ii) the variation and potential complexity of ADM, which may take place with limited or no human input;11 and (iii) the opacity and bias risks associated with ‘black-box’ automated systems.12 Parts II, III and IV analyse the extent to which these challenges are in tension with Australian administrative law rules regarding: (i) the legal expectations of statutory interpretation and administrative decision-making; (ii) the legal meaning of a decision; and (iii) the rule against bias and related grounds of judicial review. The features of automation that cannot be accommodated by public law rules need to be recognised in order to craft new legal solutions, and thereby ensure the maintenance of administrative law protections in an increasingly automated environment.

In this article, Part II explores the significant risk of disconnection between the legal expectations of how administrative decision-makers will interpret statutes, and the capacities of automated systems. This risk is compounded by the jurisdictional requirement for a matter requiring a determination of an immediate right, duty or liability before the court can provide an opinion on the correct interpretation of a statute.13 Part III outlines a range of factors that may influence the effect of automation on administrative decision-making including whether a decision is partly or fully automated, the extent of meaningful human involvement, and whether pre-programmed or machine learning-based algorithms are used. It argues that the Full Federal Court case of Pintarich highlights a gap between the legal meaning of a decision, which assumes a human mental process,14 and the reality

9 One recent article which examines the broader public law issues posed by the increasing use of technology in government decision-making in the context of privacy law, freedom of information and judicial review, and proposes relevant options for regulatory reform, is Yee-Fui Ng et al, ‘Revitalising Public Law in a Technological Era: Rights, Transparency and Administrative Justice’ [2020] UNSWLawJl 37; (2020) 43(3) University of New South Wales Law Journal 1041. In a similar vein, Paterson considers how the use of ADM in government decision-making interacts with the key public law frameworks of administrative law, anti-discrimination law and information law, and discusses law reform options: Moira Paterson, ‘The Uses of AI in Government Decision-Making: Identifying the Legal Gaps in Australia’ (2020) 89(4) Mississippi Law Journal 647. Neither of these articles provides in-depth analysis of the three administrative law disconnects, and associated solutions, that are the focus of this article.

10 These challenges have been documented in, for example, the annals of the Artificial Intelligence and Law journal.

11 An important theme in international discourses about ADM is the desirability of keeping a ‘human in the loop’ to promote, inter alia, rule of law values and human dignity: see, eg, Meg Leta Jones, ‘The Right to a Human in the Loop: Political Constructions of Computer Automation and Personhood’ (2017) 47(2) Social Studies of Science 216; Frank Pasquale, ‘A Rule of Persons, Not Machines: The Limits of Legal Automation’ (2019) 87(1) George Washington Law Review 1.

12 See, eg, Anupam Chander, ‘The Racist Algorithm?’ (2017) 115(6) Michigan Law Review 1023; Frank Pasquale, The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press, 2015); Solon Barocas and Andrew D Selbst, ‘Big Data’s Disparate Impact’ (2016) 104(3) California Law Review 671, 695.

13 Re Judiciary and Navigation Acts [1921] HCA 20; (1921) 29 CLR 257, 265–6 (Knox CJ, Gavan Duffy, Powers, Rich and Starke JJ); CGU Insurance Pty Ltd v Blakeley [2016] HCA 2; (2016) 259 CLR 339, 350 (French CJ, Kiefel, Bell and Keane JJ). See further Part II.

14 Pintarich [2018] FCAFC 79; (2018) 262 FCR 41, 67 [140] (Moshinsky and Derrington JJ).


of ADM, which often occurs with minimal or no human input. Furthermore, bias in ADM can stem from errors in the coding or underlying data used in automated systems, as well as human deference to automated outputs.15 Yet, as Part IV shows, there are both evidentiary and doctrinal impediments to establishing either actual or apprehended bias in decisions made partially or fully by automated systems. There is thus a range of potential disconnects between ADM and the doctrines of administrative law developed for a human-centric decision-making context. These gaps limit options for judicial review and executive accountability, which is concerning given automated systems’ potential to replicate flawed decision-making processes at an unprecedented scale.

Although this article focuses on three key mismatches between administrative law and ADM, additional legal issues abound. These include questions about: the concept of delegation in the context of computerised decision-making,16 whether explanations of algorithmic decision-making processes satisfy reason-giving requirements,17 whether there is a failure to consider relevant matters if an automated system is relied upon without an ‘active intellectual process’ by a human decision-maker,18 and the appropriateness of courts relying on the weight given to relevant considerations in the context of opaque algorithmic processes.19 These important questions are beyond the scope of this analysis, but underscore that the doctrinal challenges to successfully contesting ADM may well be manifold.

Conversely, it should be acknowledged that not all automated processes will necessarily be inconsistent with existing administrative law frameworks and principles, particularly for well-designed automated systems using appropriate data sets.20 This should not, however, obscure the aspects of automation that are distinctive and are likely to generate pressure for new legal responses. Some administrative law

15 See, eg, Batya Friedman and Helen Nissenbaum, ‘Bias in Computer Systems’ (1996) 14(3) ACM Transactions on Information Systems 330, 334; Linda J Skitka, Kathleen Mosier and Mark D Burdick, ‘Accountability and Automation Bias’ (2000) 52(4) International Journal of Human-Computer Studies 701.

16 Perry (n 6) 31.

17 Jennifer Cobbe, ‘Administrative Law and the Machines of Government: Judicial Review of Automated Public-Sector Decision-Making’ (2019) 39(4) Legal Studies 636, 648–9; Moses and Santow (n 6) 831–2. Both section 13 of the Administrative Decisions (Judicial Review) Act 1977 (Cth) and section 28 of the Administrative Appeals Tribunal Act 1975 (Cth) impose a requirement to provide reasons. As Bateman observes, statutory requirements for reason giving underscore ‘that the person who actually exercised the statutory power, rather than a lawyer or public relations expert, must provide the reasons’, raising doubts about the ability of automated systems to produce ‘legally compelling’ reasons: Bateman (n 6) 527 (emphasis in original).

18 Bateman (n 6) 523. On the requirement for an ‘active intellectual process’: see, eg, Tickner v Chapman [1995] FCAFC 1726; (1995) 57 FCR 451, 462 (Black CJ); Chetcuti v Minister for Immigration and Border Protection [2019] FCAFC 112; (2019) 270 FCR 335, 345–6 [55]–[58] (Murphy and Rangiah JJ), discussing Carrascalao v Minister for Immigration and Border Protection [2017] FCAFC 107; (2017) 252 FCR 352, 364 [46]–[48], 367 [60] (Griffiths, White and Bromwich JJ).

19 See Rebecca Williams, ‘Rethinking Deference for Algorithmic Decision-Making’ (Research Paper No 7/2019, Oxford Legal Studies, 31 August 2018) 33.

20 For example, Oswald argues that English administrative law rules around the duty to give reasons, relevant and irrelevant considerations, and fettering discretion, are potentially applicable in the context of ADM: Marion Oswald, ‘Algorithm-Assisted Decision-Making in the Public Sector: Framing the Issues Using Administrative Law Rules Governing Discretionary Power’ (2018) 376(2128) Philosophical Transactions of the Royal Society A 1. The need for further interdisciplinary research on technological solutions to narrow the gap between ADM and administrative law is discussed in Part VI.


doctrines ought to be reframed and recrafted to ensure their continuing relevance in a contemporary decision-making context.21 However, an appropriate balance must be struck between doctrinal development and the need for legal stability, predictability and coherence. Moreover, ad hoc judicial consideration of individual cases is arguably unsuited to addressing systemic issues, underlining the need for regulatory reform. Part V canvasses options for such reform to address the doctrinal disconnects identified in Parts II, III and IV, with reference to the European Union’s (‘EU’) General Data Protection Regulation 2016/679 (‘GDPR’).22 The article concludes by arguing that doctrinal and legislative evolution are both needed to ensure the ongoing relevance of administrative law protections in the digital age.

II WHAT IS THE ‘MATTER’ WITH TRANSLATION ERRORS?

An important consideration in digitising and automating government decision-making processes is whether the computer code and algorithms used in automated systems are congruent with the languages and logic of the authorising statute. When the rules of statutory interpretation are combined with expectations of rationality in administrative decision-making, there is a complex set of requirements that administrative decision-makers are expected to follow to exercise statutory power in accordance with the logic of a statute.23 Critical questions arise as to whether it is technically possible for automated systems to make decisions in accordance with these multi-faceted expectations, and whether this is in fact occurring in practice. These potential doctrinal disconnects are exacerbated as judicial guidance on whether the interpretation of statutory provisions embedded in automated systems aligns with the true construction of a statute is only available if there is a matter before the court,24 not at these systems’ design stage when such guidance is crucially needed. Although the inability to challenge an incorrect interpretation of a statute until there is a matter is a well-known aspect of Australian public law, the implications of this doctrinal restriction are compounded in the case of ADM as errors embedded in computer code can be perpetuated at a far greater scale than flawed human decision-making.

For decades, computer science and law scholars have grappled with the challenge of converting legal language into machine-consumable form.25 This

21 Oswald similarly argues that traditional principles of English administrative law, interpreted and reframed for a new context, can guide algorithmic decision-making in the public sector: Oswald (n 20) 3. See further Part V.

22 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1 (‘GDPR’).

23 Robert French, ‘Statutory Interpretation and Rationality in Administrative Law’ (Speech, Australian Institute of Administrative Law National Administrative Law Lecture, 23 July 2015) 12.

24 Re Judiciary and Navigation Acts [1921] HCA 20; (1921) 29 CLR 257, 265 (Knox CJ, Gavan Duffy, Powers, Rich and Starke JJ).

25 See, eg, the annals of the Artificial Intelligence and Law journal.


difficulty arises as computer code tends to be more precise, with a narrower vocabulary than natural language.26 Each of the instructions that a programmer embeds in computer code carries a fixed meaning, and programmers prefer binary questions that can be easily translated into code.27 There is, therefore, a real and significant risk that the relatively constrained vocabulary of computer code will not adequately reflect the nuances of statutory provisions.

Exacerbating this challenge, the ordinary meaning of a statutory provision may differ from its legal meaning, which is affected by context and statutory presumptions.28 In contrast to the precise and constrained vocabulary of computer code, statutory provisions are often complex, requiring the weighing of multiple variables, and interpretation of ambiguous terms. Moreover, modern statutes can be hundreds or even thousands of pages in length. Potential errors in translating statutory provisions into code can arise from the substance and breadth of the legislation, its structural and semantic complexity, and determining the appropriate bounds of discretionary powers.29 Further, statutory meaning can be subject to change, especially for federal statutes that are frequently amended.30 Computer programmers commonly lack legal and policy expertise, which can limit their appreciation of the potential complexity of the task of accurately translating legislation into computer code.31 Thus, legal meaning may easily become oversimplified, lost or distorted in the encoding process.
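
By way of illustration, a simplified sketch of the kind of narrowing described above is set out below. The provision, data fields and function names are hypothetical, and the code (written in Python) is indicative only; it does not depict any actual agency system.

# Hypothetical provision: a person qualifies if they are an 'Australian resident'.
# Assume the Act gives that term an extended legal meaning (for example, it also
# covers permanent residents and certain protected visa holders), whereas the
# ordinary meaning a programmer might encode is simply 'holds Australian citizenship'.

def is_resident_as_encoded(person: dict) -> bool:
    """The ordinary-language reading embedded in code: citizenship only."""
    return person.get("citizenship") == "Australian"

def is_resident_as_enacted(person: dict) -> bool:
    """Closer to the hypothetical statutory definition: citizenship, permanent
    residency or a protected visa class all satisfy the test."""
    return (
        person.get("citizenship") == "Australian"
        or person.get("permanent_resident", False)
        or person.get("protected_visa_holder", False)
    )

applicant = {"citizenship": "New Zealand", "protected_visa_holder": True}
print(is_resident_as_encoded(applicant))   # False: the encoded rule excludes the applicant
print(is_resident_as_enacted(applicant))   # True: the hypothetical statutory definition includes them

Nothing in the encoded rule signals that an interpretive choice has been made; the narrowing only becomes visible once the code is compared against the provision's legal meaning.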

Additional translation errors can arise from the choices and constraints that shape the creation of algorithms. Developing an algorithm involves instructing a computer to follow a set of defined steps structured to produce particular outputs.32 Erroneous outcomes can arise if the problem is mistranslated, if there are mistakes in the instructions as to how an algorithm should respond to different inputs, or if there are bugs in the computer program.33 The accuracy and reliability of algorithms are also influenced by the availability and quality of input data, hardware, software, platforms and coding languages.34 The layers of policies and rules that are created as programmers first translate statutes into code, and then develop algorithms

26 James Grimmelmann, ‘Regulation by Software’ [2005] YaleLawJl 31; (2005) 114(7) Yale Law Journal 1719, 1728.

27 Danielle Keats Citron, ‘Technological Due Process’ (2008) 85(6) Washington University Law Review 1249, 1262.

28 Perry (n 6) 32; Project Blue Sky v Australian Broadcasting Authority [1998] HCA 35; (1998) 194 CLR 355, 381 [69] (McHugh, Gummow, Kirby and Hayne JJ).

29 Administrative Review Council, ‘Automated Assistance in Administrative Decision Making: Report to the Attorney-General’ (Report No 46, 12 November 2004) 34 (‘Best Practice Principles’).

30 For example, between 2013 and 2017, on average, the Social Security Act 1991 (Cth) was amended approximately once per month, and the Income Tax Assessment Act 1997 (Cth) was amended every 3.4 weeks: Lisa Burton Crawford, ‘Between a Rock and Hard Place: Executive Guidance in the Administrative State’ in Janina Boughey and Lisa Burton Crawford (eds), Interpreting Executive Power (Federation Press, 2020) 7, 9.

31 Perry (n 6) 32; Citron (n 27) 1261.

32 Rob Kitchin, ‘Thinking Critically about and Researching Algorithms’ (2017) 20(1) Information, Communication and Society 14, 14.

33 Ibid 19. See also Grimmelmann (n 26) 1732–3. Grimmelmann notes that the aim of computer programmers is to remove the most important bugs rather than to eliminate them entirely: at 1738.

34 Kitchin (n 32) 18; Friedman and Nissenbaum (n 15) 334.


based on that code,35 underscore the desirability of addressing interpretive issues during the design process.

In Australia, the difficulty of accurately translating statutory provisions into computer code has constitutional significance. Under the strict separation of judicial power in the Australian Constitution, two exclusively judicial functions are conclusively interpreting the legal meaning of a statute,36 and determining the validity of executive action by reference to the authorising statute.37 Accordingly, Australian courts do not show deference to administrators’ interpretation of ambiguous statutory provisions.38 Rather, as Cane argues, ‘both normatively and strategically, administrators should approach interpretation in precisely the way a court would, applying the same rules, principles and modes of reasoning’.39 As administrative officials should endeavour to mirror the courts’ approach to statutory interpretation in administering statutes within their remit,40 legal rules in automated systems ought to be encoded and applied in accordance with the judicially-approved construction of the enabling statute.41

In an administrative law context, the expectations of decision-makers are rendered even more complex by the ‘generalised requirement for rationality’ in executive government decision-making.42 Writing extra-curially, the Hon Robert French, who was then the Chief Justice of the High Court of Australia, noted that ‘a particular exercise of [a statutory] power [by the executive] must be supported by reasoning which complies with the logic of the statute’, including implied statutory obligations.43 According to his Honour:

The logic of a statute ...

• is a reasoning process – ie, a logical process, albeit it may involve the exercise of a value judgment, including the application of normative standards, and the exercise of discretion;

35 Kenneth A Bamberger, ‘Technologies of Compliance: Risk and Regulation in a Digital Age’ (2010) 88(4) Texas Law Review 669, 711.

36 Corporation of the City of Enfield v Development Assessment Commission [2000] HCA 5; (2000) 199 CLR 135, 153 (Gleeson CJ, Gummow, Kirby and Hayne JJ) (‘Enfield’); A-G (NSW) v Quin (1990) 170 CLR 1, 36 (Brennan J) (‘Quin’).

37 Enfield [2000] HCA 5; (2000) 199 CLR 135, 152–3 (Gleeson CJ, Gummow, Kirby and Hayne JJ).

38 In obiter dicta in Enfield, the majority of the High Court indicated that the doctrine of deference does not apply in Australia: ibid. The Full Federal Court has affirmed that ‘[i]t is clear that the Chevron doctrine is not a principle that applies in Australia’: Minister for Immigration and Citizenship v Yucesan [2008] FCAFC 110; (2008) 169 FCR 202, 207 [15] (Emmett, Stone and Edmonds JJ).

39 Peter Cane, Controlling Administrative Power: An Historical Companion (Cambridge University Press, 2016) 236 and associated footnotes.

40 It should be noted that, unlike courts, administrative decision-makers can also take into account the merits of a decision, in line with their constitutional role: Quin (1990) 170 CLR 1, 35–6 (Brennan J). This is challenging in the context of automated systems, which are not well-equipped to take into account the merits of individual cases: Carol Harlow and Richard Rawlings, Law and Administration (Cambridge University Press, 3rd ed, 2009) 221.

41 Anna Huggins, ‘Executive Power in the Digital Age: Automation, Statutory Interpretation and Administrative Law’ in Janina Boughey and Lisa Burton Crawford (eds), Interpreting Executive Power (Federation Press, 2020) 111, 119–20 (‘Executive Power in the Digital Age’).

42 French (n 23) 13.

43 Ibid 12.


• is consistent with the statutory purpose;

• is not directed to a purpose in conflict with the statutory purpose;

• is based on a correct interpretation of the statute, where that interpretation is necessary for a valid exercise of a power – error of law which does not vitiate a decision is thereby excluded;

• has regard to considerations which the statute, expressly or by implication, requires to be considered;

• disregards considerations which the statute does not permit the decision-maker to take into account;

• involves finding of fact or states of mind which are prescribed by the statute as necessary to the exercise of the relevant power;

• does not depend upon inferences which are not open for findings of fact which are not capable of being supported by the evidence or materials before the decision-maker ...

Decision making which complies with the logic of the statute will ... also

• result from the application of processes required by the statute or by implication, including the requirements of procedural fairness.44

These nuanced, multi-faceted and legally complex requirements raise important questions about whether it is technically possible to design automated systems to align with these expectations. After decades of research, scholars of artificial intelligence and law have yet to devise computational models that comprehensively implement processes of statutory interpretation.45 Considerable progress has been made, however, toward digitising and automating parts of this process.46 This suggests that disconnects between the code and algorithms used in automated systems and expectations of statutory interpretation and rational decision-making in administrative law might be difficult to eliminate entirely – particularly for statutory provisions that are discretionary, vague, syntactically and/or semantically ambiguous, and subject to legal indeterminacy.47 Further research into the prospects and challenges of building computational models of statutory reasoning, tailored for the Australian constitutional context, is warranted.48

Although departures from principles of statutory interpretation and administrative law can also arise from errors by human decision-makers, the nature of ADM significantly heightens the risks of systemic departures from these principles. Notable features of automated systems include their speed and

44 Ibid 12–13.

45 Kevin Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (Cambridge University Press, 2017) 54. Challenges also arise in developing computational models to reflect case law: at ch 3; Frank Pasquale and Glyn Cashwell, ‘Prediction, Persuasion, and the Jurisprudence of Behaviourism’ (2018) 68(Supplement 1) University of Toronto Law Journal 63.

46 Ashley (n 45) 54.

47 Ibid ch 2; Anna Huggins et al, Select Senate Committee on Financial Technology and Regulatory Technology, Parliament of Australia, Issues Paper Submission (Parliamentary Paper No 196, December 2020) <https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Financial_Technology_and_ Regulatory_Technology/FinancialRegulatoryTech/Submissions>.

48 See, eg, Huggins et al (n 47) 15–16.


scalability, which allows mass decision-making at an unprecedented scale.49 For example, after the introduction of Centrelink’s online compliance intervention (‘OCI’) system, commonly referred to as ‘robodebt’, 20,000 debt discrepancy notices per week were generated, compared to a previous average of 20,000 income data-match discrepancies per year when manual verification processes were employed.50 If there are flaws in the computer programming or data sources relied upon, automated systems create a risk of errors on a far larger scale than human decision-makers.

The Centrelink OCI controversy exemplifies how the erroneous translation of statutory provisions into computer code can result in systemic departures from the true meaning of a statute. The initial design of the OCI system departed from the correct interpretation of the relevant provisions of the Social Security Act 1991 (Cth) as the algorithm relied on a fortnightly average of Australian Tax Office (‘ATO’) income data, and thus did not respect the requirement for entitlements to be calculated based on the precise amount of income earned each fortnight.51 Whilst previously the process for raising and recovering welfare debts involved manual review by Centrelink compliance officers, the OCI system, introduced in mid-2016, relied on an automated data-matching and assessment process,52 and expected welfare recipients to provide income evidence to disprove a presumed debt. A particularly controversial feature of this system was the reliance placed on averaged ATO annual income data if recipients did not provide evidence of their income reaching back six or more years.53 After more than three years of sustained controversy and criticism,54 in November 2019, Services Australia

49 Richard M Re and Alicia Solow-Niederman, ‘Developing Artificially Intelligent Justice’ (2019) 22(2) Stanford Technology Law Review 242, 255.

50 Louise Macleod, ‘Lessons Learnt about Digital Transformation and Public Administration: Centrelink’s Online Compliance Intervention’ [2017] AIAdminLawF 21; (2017) 89 Australian Institute of Administrative Law Forum 59, 59. In October 2019, it was reported that approximately 10,000 automated debt discrepancy letters were still being generated each week: Luke Henriques-Gomes, ‘Robodebt Inquiry: How the Coalition Tried to Defend the Indefensible’, The Guardian (online, 13 October 2019) <https://www.theguardian.com/ australia-news/2019/oct/13/robodebt-inquiry-how-the-coalition-tried-to-defend-the-indefensible>.

51 Terry Carney, ‘The New Digital Future for Welfare: Debts without Legal Proofs or Moral Authority?’ [2018] (1) University of New South Wales Law Journal Forum 1, 3–8; Huggins, ‘Executive Power in the Digital Age’ (n 41).

52 Automated decision-making is explicitly authorised by section 6A of the Social Security (Administration) Act 1999 (Cth).

53 This program is described in depth in Richard Glenn, Acting Commonwealth Ombudsman, ‘Centrelink’s Automated Debt Raising and Recovery System: A Report about the Department of Human Services’ Online Compliance Intervention System for Debt Raising and Recovery’ (Report No 2/2017, April 2017) 1, 5–6 <http://www.ombudsman.gov.au/ data/assets/pdf_file/0022/43528/Report-Centrelinks- automated-debt-raising-and-recovery-system-April-2017.pdf> .

54 For example, the OCI system has been the subject of multiple Senate inquiries and Ombudsman reports: see Senate Community Affairs References Committee, Parliament of Australia, Design, Scope, Cost- benefit Analysis, Contracts Awarded and Implementation Associated with the Better Management of the Social Welfare System Initiative (Report, 21 June 2017) <http://www.aph.gov.au/Parliamentary_Business/ Committees/Senate/Community_Affairs/SocialWelfareSystem/Report> Senate Community Affairs References Committee, Parliament of Australia, Centrelink’s Compliance Program (Second Interim Report, September 2020) <https://parlinfo.aph.gov.au/parlInfo/download/committees/reportsen/024338/ toc_pdf/Centrelink’scomplianceprogram.pdf>; Glenn (n 53); Michael Manthorpe, Commonwealth


announced it would stop raising welfare debts based on sole reliance on averaged ATO income data.55 In June 2021, the Federal Court approved a settlement worth at least $1.8 billion in response to a class action brought by Gordon Legal on behalf of individuals affected by the OCI scheme.56

Importantly, the original design of the OCI system was based on a flawed interpretation of the relevant provisions of the enabling act. Sections 1222A(a) and 1223 of the Social Security Act 1991 (Cth) specify preconditions for raising a debt, which must be established by the Commonwealth as the entity asserting the existence of the debt.57 As Carney persuasively argues, raising a debt ‘has moral and practical consequences for credit worthiness standing and ratings advice’, and according to the High Court’s Briginshaw principle,58 requires ‘an “upwards variation” in the strength of [evidence] required’.59 Sections 1222A(a) and 1223 thus need to be interpreted in the light of relevant case law indicating that the strength of material required to establish a welfare debt must have high probative value. Reliance on averaged ATO income data in a context in which it is well known that many social security recipients have variable income falls well short of the required evidentiary standard.60 This was reinforced by the Amato v Commonwealth court order in late 2019, in which Davies J in the Federal Court noted that a presumed debt arising from averaged ATO income data was not based on ‘probative material’.61 The OCI system thus exemplifies an automated system that applied a decision-making logic which departed from the correct interpretation of the enabling act en masse, with far-reaching consequences for vulnerable Australians.62
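
The arithmetic effect of this interpretive error can be illustrated with a deliberately stylised sketch in Python. The payment rate, income ‘free area’ and taper used below are invented figures, and the calculation does not reproduce the actual rate rules in the Social Security Act 1991 (Cth); the sketch is intended only to show how smearing an annual ATO income figure across fortnights can manufacture an apparent overpayment for a recipient whose income was earned outside the period on payment.

# Stylised rate rule (hypothetical figures): $600 per fortnight, reduced by
# 50 cents for each dollar of income earned in that fortnight above $200.
BASE_RATE = 600.0
FREE_AREA = 200.0
TAPER = 0.5

def fortnight_entitlement(income_in_fortnight: float) -> float:
    reduction = max(0.0, income_in_fortnight - FREE_AREA) * TAPER
    return max(0.0, BASE_RATE - reduction)

# A recipient on payment for 20 fortnights (earning nothing), who then left the
# payment and worked the remaining 6 fortnights of the year at $2,000 per fortnight.
fortnights_on_payment = 20
income_while_on_payment = [0.0] * fortnights_on_payment
annual_ato_income = 2000.0 * 6          # $12,000 reported to the ATO for the year

# What the statute requires: entitlement assessed on income actually earned in
# each fortnight on payment.
amount_properly_paid = sum(fortnight_entitlement(i) for i in income_while_on_payment)

# What the OCI method did: smear the annual ATO figure across all 26 fortnights
# and apply that average to the fortnights on payment.
average_per_fortnight = annual_ato_income / 26
recalculated = fortnight_entitlement(average_per_fortnight) * fortnights_on_payment

apparent_debt = amount_properly_paid - recalculated
print(round(amount_properly_paid, 2))   # 12000.0: correctly paid at the time
print(round(recalculated, 2))           # 9384.62: entitlement under the averaging method
print(round(apparent_debt, 2))          # 2615.38: a 'debt' that was never owed

On these invented figures, the recipient was correctly paid $12,000, yet the averaging method recalculates the entitlement at approximately $9,385 and treats the difference of approximately $2,615 as a debt, even though no overpayment occurred.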

To avoid the types of systemic harms that can arise from miscoded automated systems, ideally the courts would offer an advisory jurisdiction in which pro-active judicial advice regarding the correctness of the interpretation of a statute encoded in an automated system is available before that system is implemented.

Ombudsman, ‘Centrelink’s Automated Debt Raising and Recovery System’ (Implementation Report No 1/2019, April 2019) 27 <https://www.ombudsman.gov.au/ data/assets/pdf_file/0025/98314/April-2019- Centrelinks-Automated-Debt-Raising-and-Recovery-System.pdf>.

55 Paul Farrell, ‘Government Halting Key Part of Robodebt Scheme, Will Freeze Debts for Some Welfare Recipients’, ABC News (online, 19 November 2019) <https://www.abc.net.au/news/2019-11-19/robodebt- scheme-human-services-department-halts-existing-debts/11717188>.

56 Prygodicz v Commonwealth of Australia [No 2] [2021] FCA 634. See, eg, Rebecca Turner, ‘Robodebt Condemned as a “Shameful Chapter” in Withering Assessment by Federal Court Judge’, ABC News (online, 11 June 2021) <https://www.abc.net.au/news/2021-06-11/robodebt-condemned-by-federal- court-judge-as-shameful-chapter/100207674?utm_campaign=news-article-share-control&utm_ content=twitter&utm_medium=content_shared&utm_source=abc_news_web>.

57 Senate Community Affairs References Committee (n 54) 109 [6.13].

58 Briginshaw v Briginshaw [1938] HCA 34; (1938) 60 CLR 336, 362 (Dixon J).

59 Carney (n 51) 7–8. On the relevance of this principle to allegations of welfare debt, see, eg, Re Secretary, Department of Education, Employment and Workplace Relations and Kambouris [2008] AATA 221, [30]–[32] (Deputy President Jarvis); Re Johnson and Secretary, Department of Family and Community Services [2000] AATA 424, [38] (Senior Member Bayne).

60 Carney (n 51) 7–8.

61 Order of Davies J in Amato v Commonwealth (Federal Court of Australia, VID611/2019, 27 November 2019) 6 [8.1]–[8.2] (‘Amato’).

62 From July 2016 to April 2019, more than 1,000,000 initial assessment letters were sent out under the auspices of the OCI system: Manthorpe (n 54) 13.


Advice on whether executive interpretations of statutory provisions are correct is available in other jurisdictions.63 In Australia, the provision of such guidance would conform with the courts’ constitutional function of conclusively determining the meaning of a statute. However, it would be inconsistent with Chapter III of the Australian Constitution, which only confers federal jurisdiction to hear matters.64 As confirmed in Re Judiciary and Navigation Acts, the High Court does not have jurisdiction to hear a matter under section 76 of the Constitution ‘unless there is some immediate right, duty or liability to be established by the determination of the Court’.65 The Court, therefore, cannot provide an advisory opinion on the correct interpretation of a statute to be translated into computer code ‘divorced from any attempt to administer [it]’.66 Similarly, under the Administrative Decisions (Judicial Review) Act 1977 (Cth) (‘ADJR Act’), statutory review of a government agency’s interpretation of an act it administers is only available once the interpretation has the ‘real and practical consequences’ associated with reviewable decisions.67 Thus, although judicial guidance on the interpretive choices embedded in automated systems is of critical importance during the design phase, Australian public law doctrine prevents this from occurring.68

In sum, there is significant potential for statutory meaning to be lost or distorted through the encoding process, particularly in light of the complex expectations of statutory interpretation and rationality in administrative decision-making in the Australian public law context. Individual administrators can, and do, make errors, but not at the scale and speed of ADM. As the OCI example underscores, automated systems can replicate a flawed decision-making logic across a large data set, creating a heightened risk of widespread errors. However, in the absence of a matter before the courts, there is no legally reliable means of clarifying the meaning of ambiguous statutory provisions prior to the implementation of an automated system. This disconnect undesirably impedes the identification and correction of errors in automated systems before decisions affecting citizens are made at scale.

63 Courts in the United Kingdom (‘UK’) have been willing to make such declarations where there is a clear public interest in clarifying the content of the law: see, eg, Gillick v West Norfolk and Wisbech Area Health Authority [1985] UKHL 7; [1986] AC 112, 192–4; R v Secretary of State for the Environment; Ex parte Tower Hamlets London Borough Council [1993] QB 632.

64 Australian Constitution s 76; Re Judiciary and Navigation Acts [1921] HCA 20; (1921) 29 CLR 257, 265–6 (Knox CJ, Gavan Duffy, Powers, Rich and Starke JJ); Crawford (n 30) 13–14.

65 [1921] HCA 20; (1921) 29 CLR 257, 265 (Knox CJ, Gavan Duffy, Powers, Rich and Starke JJ). See more recently CGU Insurance Pty Ltd v Blakeley [2016] HCA 2; (2016) 259 CLR 339, 350 [26] (French CJ, Kiefel, Bell and Keane JJ).

66 Re Judiciary and Navigation Acts [1921] HCA 20; (1921) 29 CLR 257, 266 (Knox CJ, Gavan Duffy, Powers, Rich and Starke JJ). See also Re McBain; Ex parte Australian Catholic Bishops Conference [2002] HCA 16; (2002) 209 CLR 372, 389 [5] (Gleeson CJ).

67 Electricity Supply Association of Australia Ltd v Australian Competition and Consumer Commission [2001] FCA 1296; (2001) 113 FCR 230, 253 [80] (Finn J); Crawford (n 30) 13–14.

68 Moreover, even if the source code and algorithmic specifications are rendered transparent, lawyers and judges may lack the technical literacy to interpret and understand the effect of this data: Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ [2016] (January– June) Big Data and Society 1, 4. See further Part IV(A).


III THE COMPLEXITY OF ADM AND THE LEGAL MEANING OF A DECISION

The courts’ multi-faceted expectations of administrative decision-making logics are by no means the only public law doctrines that sit uncomfortably with the new challenges posed by ADM. In addition, the legal meaning of a ‘decision’ operates as a jurisdictional prerequisite that can restrict the range of disputes that are amenable to judicial review. This Part analyses the case of Pintarich, and argues that the majority judgment endorsed a narrow conception of what constitutes a decision that fails to adequately account for the potential variability and complexity of ADM. Although narrow interpretations of what constitutes a decision under the ADJR Act are not new,69 the implications of such interpretations are again compounded in the case of ADM. By its very nature, automation reduces the need for human input, yet if such input is a criterion for reviewability, a growing range of administrative processes will be beyond the purview of judicial scrutiny.

A The Potential Variation and Complexity of ADM

The case of Pintarich was heard against a backdrop in which Australian government agencies’ use of automated tools is accelerating. Expert systems, in which a computer program performs a task for which the intelligence of a human expert is usually thought to be required, have been used by Australian government agencies in diverse policy settings for decades.70 In recent years, the sophistication of the underlying technologies, and the extent of reliance upon them, have shifted significantly. As noted above, the Australian Government now has a whole-of-government Digital Transformation Agenda and aims to use automation to reduce manual intervention wherever possible.71 A growing number of statutes now authorise computers to make administrative decisions on behalf of the responsible officer.72 There is thus a concerted effort to promote automation and reduce human manual input in Australian government agencies.

69 The narrow requirement that a ‘decision’ must be ‘final or operative and determinative’ articulated in Australian Broadcasting Tribunal v Bond (1990) 170 CLR 321, 337 (Mason CJ) restricted the range of cases that are amenable to judicial review under the Administrative Decisions (Judicial Review) Act 1977 (Cth) (‘ADJR Act’), and increased resort to challenges under section 39B of the Judiciary Act 1903 (Cth): Mark Aronson, ‘Is the ADJR Act Hampering the Development of Australian Administrative Law’ (2004) 15(3) Public Law Review 202, 206–7. There are also issues with reviewing fully automated decisions via section 75(v) of the Constitution, as ‘courts have read in a requirement for a formal appointment of a natural person, and a prohibition against artificial persons’: see Yee-Fui Ng and Maria O’Sullivan, ‘Deliberation and Automation: When Is a Decision a “Decision”?’ (2019) 26(1) Australian Journal of Administrative Law 21, 31–2, and the authorities cited therein.

70 In 2004, the Administrative Review Council published the results of a stocktake of the use of automated systems in Commonwealth agencies, revealing their widespread use: ‘Best Practice Principles’ (n 29) 5, 57–64 Appendix B.

71 Hamilton (n 2).

72 See, eg, Social Security (Administration) Act 1999 (Cth) s 6A; My Health Records Act 2012 (Cth) s 13A; Australian Education Act 2013 (Cth) s 124; Migration Act 1958 (Cth) s 495A; Carbon Credits (Carbon Farming Initiative) Act 2011 (Cth) s 287; Veterans’ Entitlements Act 1986 (Cth) s 4B; A New Tax System (Family Assistance) (Administration) Act 1999 (Cth) s 223; Australian Citizenship Act 2007 (Cth) s 48;


The extent to which automation shapes decision-making can be conceptualised as a spectrum of partial through to full automation. Variations along this spectrum include the use of automated decision-support tools, automated decisions made with human oversight, and automated decisions made without human input after the initial coding process.73 However, even if there is a ‘human in the loop’, their role should be examined critically. There is a risk of nominal or tokenistic human involvement, in which a human effectively ‘rubber stamps’ automated decisions. Even where automated processes are explicitly intended to act as decision-support tools only, due to humans’ trust in automated logic, lack of time and the convenience of relying on pre-processed data, automated processes ‘tend to de facto operate as wholly automated’.74 Close scrutiny of individual cases is required to determine the extent of human input into a decision involving a combination of automated and human processes.

In addition to the spectrum between partial and full automation, two important variants of automated processes need to be distinguished, which further complicates analysis of the role of the human in automated decisions. The first is pre-programmed rules-based processes, and the second is inferences or predictions based on rules a computer program has learned from patterns and correlations in historic data.75 In the context of administrative decision-making, pre-programmed processes appear to have been more commonly used in Australia to date. However, government agencies’ use of machine-learning algorithms is rapidly gaining traction in the United States,76 and Australia appears set to follow suit.77

Pre-programmed processes rely on an ‘if this then that’ logic, which is deterministic and ostensibly suited to non-discretionary decisions.78 Accordingly, a human should be involved if an automated administrative decision requires the

Superannuation (Government Co-contribution for Low Income Earners) Act 2003 (Cth) s 48; National Consumer Credit Protection Act 2009 (Cth) s 242; Paid Parental Leave Act 2010 (Cth) s 305; Australian National Registry of Emissions Units Act 2011 (Cth) s 87; Business Names Registration Act 2011 (Cth) s 66; Child Support (Assessment) Act 1989 (Cth) s 12A; Child Support (Registration and Collection) Act 1988 (Cth) s 4A; Trade Support Loans Act 2014 (Cth) s 102; Customs Act 1901 (Cth) s 126H; Biosecurity Act 2015 (Cth) ss 280(6)–(7); Export Control Act 1982 (Cth) s 23A(2)(h); Aged Care Act 1997 (Cth) s 23B.4; VET Student Loans Act 2016 (Cth) s 105; National Health Act 1953 (Cth) s 101B; Military Rehabilitation and Compensation Act 2004 (Cth) s 4A; Safety, Rehabilitation and Compensation (Defence-related Claims) Act 1988 (Cth) s 3A, cited in Simon Elvery, ‘How Algorithms Make Important Government Decisions: And How That Affects You’, ABC News (online, 21 July 2017) <http://mobile.abc.net.au/news/2017-07-21/algorithms-can-make-decisions-on-behalf-of-federal-ministers/8704858>.

73 Zalnieriute, Moses and Williams (n 1) 432.

74 Michael Veale and Lilian Edwards, ‘Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling’ (2018) 34(2) Computer Law and Security Review 398, 400 (citations omitted). See further Part IV(B).

75 Zalnieriute, Moses and Williams (n 1) 432–5.

76 Coglianese and Lehr (n 5) 1160–7.

77 Carney (n 51) 12. See also Department of Industry, Science, Energy and Resources (Cth) ‘What is the Government Doing in Digital Government?’ (Web Page, 2019) <https://www.industry.gov.au/data- and-publications/australias-tech-future/digital-government/what-is-the-government-doing-in-digital- government>.

78 Mireille Hildebrandt, ‘Algorithmic Regulation and the Rule of Law’ (2018) 376(2128) Philosophical Transactions of the Royal Society A 1, 2.


exercise of discretion or judgment.79 For example, the automated process that generated the letter that was in dispute in Pintarich, discussed further below, was presumably based on a relatively simple pre-programmed logic. If this was the case, it was arguably inappropriate to use an automated process that applies rigid criteria in connection with the discretionary power in section 8AAG of the Taxation Administration Act 1953 (Cth), which requires the decision-maker to be satisfied of certain specified matters before making a decision about remitting general interest charges.80 As a decision under section 8AAG requires consideration of discretionary elements and situation-specific factors, it cannot be reduced to an ‘if this then that’ logic.
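
The difference in kind between these two decision-making structures can be sketched in code. The following Python fragment is hypothetical: the conditions, thresholds and field names are invented, and it is not suggested that the ATO's template operated in this way. The point is only that a pre-programmed rule resolves mechanically from fixed inputs, whereas the statutory test in section 8AAG(5) of being 'satisfied' that remission would be 'fair and reasonable', or is 'otherwise appropriate', calls for an evaluative judgment that fixed conditions cannot exhaust.

# A deterministic, pre-programmed rule: the outcome follows mechanically from fixed facts.
def remit_gic_preprogrammed(paid_on_time_before: bool, debt_amount: float) -> bool:
    # 'If this then that': two hard-coded conditions determine the result.
    return paid_on_time_before and debt_amount < 10_000

# The statutory discretion is structured differently: the decision-maker must be
# satisfied that special circumstances make remission fair and reasonable, or that
# it is otherwise appropriate, on the whole of the taxpayer's circumstances.
def remit_gic_discretionary(taxpayer_circumstances: dict) -> str:
    # No finite checklist exhausts 'special circumstances' or 'otherwise appropriate';
    # at most, a system can assemble the material and refer it for human judgment.
    return ("refer to authorised officer: evaluative judgment required on "
            + ", ".join(sorted(taxpayer_circumstances)))

circumstances = {"serious illness": True, "natural disaster": False, "payment history": "mixed"}
print(remit_gic_preprogrammed(paid_on_time_before=True, debt_amount=335_000.0))  # False
print(remit_gic_discretionary(circumstances))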

In contrast to predetermined rules-based processes, some automated tools are able to learn from examples, data and experience to make inferences or predictions. Machine learning exemplifies this type of automated process.81 Although machine-learning processes are not deterministic, their probabilistic results are still shaped by the human discretionary choices made in the process of designing and training algorithms.82 Human oversight of machine-learning outputs is more difficult due to their self-learning properties and ‘black box’ nature; indeed, even programmers may not be able to explain the reasoning process underpinning a machine-learning outcome.83 This means that, without human involvement, there may be a paucity of adequate reasons to explain why an outcome was reached in a particular case.84 There is thus arguably a need for meaningful human review of certain types of pre-programmed and probabilistic automated decisions – particularly if such decisions significantly affect individuals.85
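
A minimal sketch of this second variant, using invented feature names and weights, may make the point concrete. In a real machine-learning system the weights would be produced by a training process over historical data rather than set by hand; the sketch assumes a simple logistic scoring model and does not depict any deployed government tool.

import math

# Invented weights standing in for parameters 'learned' from historical decisions.
LEARNED_WEIGHTS = {"declared_income": -0.0004, "prior_reviews": 0.9, "months_on_payment": 0.05}
BIAS_TERM = -1.2

def risk_score(case: dict) -> float:
    """Logistic scoring: returns a probability-like value between 0 and 1."""
    z = BIAS_TERM + sum(weight * case.get(feature, 0.0)
                        for feature, weight in LEARNED_WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

case = {"declared_income": 1500.0, "prior_reviews": 2, "months_on_payment": 18}
score = risk_score(case)
print(round(score, 3))  # ~0.711: a number shaped by training choices, not a statement of reasons

# Selecting the case because its score crosses a threshold explains that the model
# flagged it, not why flagging it is justified by reference to the statute.
print("select for compliance review" if score > 0.5 else "no action")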

B A Doctrinal Disconnect

The majority decision in Pintarich fails to adequately account for this rapidly changing decision-making context. This section begins by explaining both the human and automated inputs that were relevant to the judicial review proceedings. This provides important context for analysing the differing approaches adopted by the majority and minority judges with respect to what constitutes a decision, and the impact of automation on administrative decision-making.

The case of Pintarich considered whether an automated letter from the ATO communicated a decision to reduce the interest charges on a tax debt. The facts of this case were that Mr Pintarich, a taxpayer, owed the ATO outstanding

79 ‘Better Practice Guide’ (n 1) 9–10.

80 For example, subsection (5) provides that the Commissioner may remit all or a part of the charge if he or she is satisfied that ‘there are special circumstances because of which it would be fair and reasonable to remit all or a part of the charge’, or ‘it is otherwise appropriate to do so’: Taxation Administration Act 1953 (Cth) s 8AAG(5).

81 Zalnieriute, Moses and Williams (n 1) 427.

82 Hildebrandt (n 78) 3.

83 Coglianese and Lehr (n 5) 1167; Grimmelmann (n 26) 1734–5; Burrell (n 68).

84 Adequate reasons must explain why a decision was reached in the applicant’s case, and should not be expressed in ‘vague generalities’: see the authorities cited by Leighton McDonald, ‘Reasons, Reasonableness and Intelligible Justification in Judicial Review’ [2015] SydLawRw 22; (2015) 37(4) Sydney Law Review 467, 480.

85 See further Part V.


tax liabilities of $1.16 million, which comprised primary tax of approximately $821,000 and general interest charges (‘GIC’) of approximately $335,000.86 Mr Pintarich sought a full waiver of the GIC under section 8AAG of the Taxation Administration Act 1953 (Cth).87 The automated letter at the crux of the dispute in Pintarich was issued on 8 December 2014 (‘the December 2014 letter’). The Deputy Commissioner, Mr Celantano, ‘keyed in’ information into a computer-based template that automatically generated a letter from the ATO headed ‘Payment arrangements for your Income Tax Account debt’ and bore his signature block. Relevantly, the letter stated:

Thank you for your recent promise to pay your outstanding account. We agree to accept a lump sum payment of $839,115.43 on or by 30 January 2015.

This payout figure is inclusive of an estimated general interest charge (GIC) amount calculated to 30 January 2015. Amounts of GIC are tax deductible in the year in which they are incurred.88

This letter indicated, yet did not state unequivocally, that almost all of the interest that Mr Pintarich had originally owed had been waived. Mr Pintarich relied on the December 2014 letter to borrow funds from his bank and paid the ATO the requested sum on 30 January 2015.89 Eight months later, a second authorised officer sent a letter to Mr Pintarich refusing the application for waiver of GIC and stating that the December 2014 letter had been issued in error.90 Subsequently, a decision to grant partial but not full remission of the remaining GIC was made by the first authorised officer on 13 May 2016.91 Mr Pintarich sought to challenge this latter decision, which was less favourable than the ‘decision’ communicated by the automated letter in December 2014.

In the Pintarich case, a majority of the Full Federal Court identified the subjective mental process of reaching a conclusion as a key criterion of a valid decision. The primary issue to be determined on appeal92 was whether, by issuing the computer-generated letter in December 2014, the Deputy Commissioner made a decision to waive almost all of the GIC owing if the taxpayer paid the agreed sum by 30 January 2015.93 Moshinsky and Derrington JJ were persuaded by the reasoning of Finn J in Semunigus v Minister for Immigration and Multicultural Affairs (‘Semunigus’),94 indicating that a valid decision requires two elements to be satisfied: (1) there must be a mental process of reaching a conclusion, and (2) there must be an objective manifestation of that conclusion.95 Their Honours treated the statement of Finn J from Semunigus as a ‘general statement’ of what is involved

86 Pintarich [2018] FCAFC 79; (2018) 262 FCR 41, 55 [91] (Moshinsky and Derrington JJ).

87 Ibid 55 [89]–[92].

88 Ibid 58 [101] (emphasis added).

89 Ibid 58 [101]–[102].

90 Ibid 60 [110].

91 Ibid 60–1 [116].

92 This case was an appeal from an unsuccessful judicial review application in the Federal Court: see Pintarich v Deputy Commissioner of Taxation [2017] FCA 944.

93 Pintarich [2018] FCAFC 79; (2018) 262 FCR 41, 53 [81] (Moshinsky and Derrington JJ).

94 [1999] FCA 422, [19].

95 Pintarich [2018] FCAFC 79; (2018) 262 FCR 41, 67 [140].


in the making of a decision,96 which in this instance they applied to the ATO’s automated letter. The majority held that there was no decision associated with the December 2014 letter as, in all the circumstances, there was no mental process to reach a conclusion by the Deputy Commissioner.97 The appeal was, accordingly, dismissed with costs. In October 2018, the High Court refused the taxpayer’s application for special leave on the basis that the proposed appeal had insufficient prospects of success.98

Even though the case arose due to an error in an automated letter, the majority did not engage with the significance of automation for administrative decision-making. Instead, their Honours framed their judgment as narrowly addressing what constitutes a decision under section 8AAG of the Taxation Administration Act 1953 (Cth).99 However, by accepting Finn J’s views in Semunigus as accurately capturing the elements that are generally involved in the making of a decision, and applying them to a situation involving an erroneous output of an automated system, the effect of the majority judgment was to set a broader precedent with implications beyond the circumstances of the Pintarich case.100

In a strong dissenting judgment, Kerr J took into account the impact of automation on administrative decision-making, and argued that the legal conception of what constitutes a decision should not remain static in this context.101 His Honour was sceptical of the utility of applying Finn J’s ‘boiler-plate statement’ from Semunigus to decisions made using automated decision-making systems.102 Rather, Kerr J argued that a determination of whether a decision has been made should be ‘fact and context specific’.103 In this instance, the facts and context included the Deputy Commissioner being allocated to handle Mr Pintarich’s request for the remission of GIC, his conversations with Mr Pintarich and his accountant about the GIC waiver request, and his actions in inputting relevant information into a template to generate an automated letter to be sent to Mr Pintarich, which he did not check before despatching.104 His Honour therefore opined that it was not open to the Deputy Commissioner to renounce the decision conveyed in the December 2014 letter.105

The majority judgment in Pintarich perpetuates an expectation that administrative decisions will involve human mental input, which fails to adequately account for the reality of ADM. As Ng and O’Sullivan argue, ‘the majority in Pintarich, with their narrow conception of what constitutes a decision in a modern decision-making context, failed to properly balance the ability of individuals to challenge unlawful government action in a modern technological world’.106 In contrast, the minority judgment of Kerr J is preferable as it provides a doctrinal solution that recognises that the impact of automation on administrative decisions is potentially far-reaching, complex and varied.

96 Ibid 67–8 [143].

97 Ibid 67 [140].

98 Pintarich v Deputy Commissioner of Taxation [2018] HCASL 322.
99 Pintarich [2018] FCAFC 79; (2018) 262 FCR 41, 67 [140].

100 See, eg, Grass v Slattery [2018] FCA 1719; (2018) 162 ALD 276, 316–18 [197]–[206] (Bromwich J).

101 Pintarich [2018] FCAFC 79; (2018) 262 FCR 41, 48–9 [45]–[52].

102 Ibid 49 [51].

103 Ibid.

104 Ibid 50 [57]–[62].

105 Ibid 50 [63].



It is implicit in the majority’s judgment that the discretionary power to remit the interest charges on a tax debt under section 8AAG should not have been subject to a fully automated process due to the requirement for a human mental process.107 As noted above, the multi-faceted discretionary decision under section 8AAG cannot be reduced to an ‘if this then that’ logic.108 Despite this, Mr Celantano despatched an automated letter to Mr Pintarich without first checking or reviewing the letter’s contents,109 meaning that there was no meaningful human oversight of the automated output before the letter was sent. This exposes a regulatory gap as the courts expect an active mental process from a human decision-maker in this situation, yet there is no concomitant requirement for human decision-makers to in fact oversee and review automated outputs.
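To make the disconnect concrete, the following is a minimal, purely hypothetical sketch of the kind of pre-programmed ‘if this then that’ template logic described above: a payout letter issues as soon as the inputs are keyed in, with no step at which the multi-factor discretion in section 8AAG is weighed and no requirement that a human officer review the output. The field names, figures and structure are assumptions for illustration; they do not represent the ATO’s actual systems.

```python
# Hypothetical sketch only: not the ATO's system. It illustrates rule-based
# template logic with no human review step and no representation of the
# discretionary criteria in s 8AAG.
from dataclasses import dataclass
from datetime import date


@dataclass
class RemissionInput:
    primary_debt: float      # figures keyed in by an officer (hypothetical)
    gic_payable: float       # interest charge the template will quote
    payment_deadline: date


def generate_payout_letter(data: RemissionInput) -> str:
    # Pre-programmed rule: quote whatever figure results from the inputs.
    # Nothing here weighs fairness, the cause of the delay, or any other
    # consideration relevant to the s 8AAG discretion.
    payout = data.primary_debt + data.gic_payable
    return (
        f"We agree to accept a lump sum payment of ${payout:,.2f} "
        f"on or before {data.payment_deadline:%d %B %Y}."
    )


# The letter issues as soon as the template is populated; no officer is
# required to check the output before it is despatched.
print(generate_payout_letter(
    RemissionInput(primary_debt=800_000.00, gic_payable=3_000.00,
                   payment_deadline=date(2015, 1, 30))
))
```

The point of the sketch is structural: the human mental process that the majority in Pintarich treated as essential to a decision has no counterpart anywhere in such a pipeline.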

There is thus a mismatch between ADM, which, by its very nature, requires little or no human input after the initial coding decisions have been made, and the legal meaning of a decision. The majority judgment in Pintarich suggests that fully automated discretionary decisions may not be reviewable under the ADJR Act, creating a ‘perverse incentive’ for government agencies to avoid judicial review through fully automating decision-making processes.110 This disconnect creates an unsatisfactory risk that individuals adversely affected by errors in automated communications from government agencies will have limited opportunities for legal redress.111

IV ADM AND NEW BIAS CHALLENGES

Automation also creates new bias risks that are difficult to reconcile with administrative law doctrine. A putative advantage of automated systems is that they reduce opportunities for human bias, prejudices and error, particularly for non-discretionary mass transactional tasks.112 In addition to these potential benefits, there are also distinct bias risks associated with ADM. These risks can arise from (i) bias in the data or code on which automated decisions are based, and (ii) bias that arises from humans’ susceptibility to defer to automated outputs. This Part shows that these new types of bias raise evidentiary challenges and are unlikely to fit within the ambit of the rule against bias for either partially or fully automated decisions. This disconnect undesirably restricts opportunities for bringing a successful judicial review action in the context of ADM.

106 Ng and O’Sullivan (n 69) 32.

107 However, the majority did not offer an explicit opinion on the appropriateness of an automated process being used in the context of this discretionary decision.

108 See above n 80.

109 Pintarich [2018] FCAFC 79; (2018) 262 FCR 41, 58 [101] (Moshinsky and Derrington JJ).
110 Ng et al (n 9) 1057–8.

111 This situation becomes more complicated in statutes with legislative provisions deeming a decision made by a computer to be made by a legal decision-maker: see, eg, above n 72. For discussion of the legal implications of these deeming provisions in the wake of Pintarich [2018] FCAFC 79; (2018) 262 FCR 41, see ibid 1059; Bateman (n 6) 528–9.

112 Citron (n 27) 1303; Peter André Busch and Helle Zinner Henriksen, ‘Digital Discretion: A Systematic Literature Review of ICT and Street-Level Discretion’ (2018) 23(1) Information Polity 3, 21; Le Sueur (n 7) 191; Coglianese and Lehr (n 5) 1205. This, of course, assumes that non-discretionary decisions have been correctly translated into code, which is not always the case, as illustrated by the design of the OCI system discussed in Part II.



A Biases in Code or Data

Developing computer programs requires myriad design choices, which can reflect the conscious or unconscious biases of programmers.113 As Kitchin explains:

Whilst programmers might seek to maintain a high degree of mechanical objectivity – being distant, detached and impartial in how they work and thus acting independent of local customs, culture, knowledge and context – in the process of translating a task or process or calculation into an algorithm they can never fully escape these.114

The data fed into an algorithm can also contain errors or biases, which will then be replicated in decisions made by an automated system. Particularly in the case of machine-learning algorithms, the historical human biases reflected in the data sets upon which they are trained can perpetuate race-based and gender-based discrimination.115 Indeed, some scholars suggest that eliminating bias from machine learning is not possible.116 Unlike biased human decision-making processes, which are likely to affect a relatively small number of cases, automated systems have the potential to apply a flawed decision-making logic to a very high volume of decisions. Importantly, identifying bias in the underlying code or data set of automated processes is difficult due to the opacity of ADM. Of course, administrative decisions made by humans can also be opaque, yet the types of opacity that may arise in ADM are ‘distinct and more complex’ in a number of ways.117 Due to their black-box nature, the internal decision-making logic and mechanism of automated systems, and the choices made in selecting the data and programming the system, are generally hidden.118 As Burrell notes, opacity can arise from the deliberate concealment of an algorithm for reasons such as protecting trade secrets or maintaining competitive advantage.119 In addition, technical literacy challenges experienced by the vast majority of the population who cannot read and write code can render algorithms opaque and incomprehensible, even if their source code is made transparent.120 Moreover, a machine-learning algorithm that has self-learning properties may produce outcomes that cannot be intuitively explained, even by programming experts.121 The challenges of identifying bias in opaque automated systems were recognised in a 2019 report produced by the Australian Human Rights Commission and the World Economic Forum:

It is difficult to know the decision-making process adopted in an [artificial intelligence] system, because [machine learning] tends to involve opaque proprietary algorithms. Without understanding this process, it is hard to discern whether, when or how such systems are discriminating against a group or individual. This fundamentally challenges the concept of procedural fairness in administrative decision-making.122
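A deliberately simplified sketch can illustrate the replication dynamic described above. The ‘model’ below merely reproduces historical approval rates by postcode, a common proxy for protected attributes; the data, postcodes and threshold are invented, and no real agency system is represented.

```python
# Hypothetical sketch only: a 'model' that reproduces historical approval
# rates by postcode, which here stands in as a proxy for a protected attribute.
from collections import defaultdict

# Invented historical decisions: (postcode, outcome reached by a human officer)
history = [
    ("2000", "approve"), ("2000", "approve"), ("2000", "approve"), ("2000", "refuse"),
    ("2770", "refuse"), ("2770", "refuse"), ("2770", "approve"), ("2770", "refuse"),
]

# 'Training': tally the approval rate for each postcode in the historical data.
counts = defaultdict(lambda: {"approve": 0, "total": 0})
for postcode, outcome in history:
    counts[postcode]["total"] += 1
    if outcome == "approve":
        counts[postcode]["approve"] += 1


def automated_decision(postcode: str) -> str:
    # Approve whenever the historical approval rate for the postcode is >= 50%.
    stats = counts[postcode]
    return "approve" if stats["approve"] / stats["total"] >= 0.5 else "refuse"


# Applications with identical merits receive different automated outcomes
# purely because of patterns in past (possibly biased) decisions.
print(automated_decision("2000"))  # approve
print(automated_decision("2770"))  # refuse
```

Unlike an individual biased officer, the learned rule is then applied uniformly to every future application, and nothing on the face of an individual decision discloses that postcode was doing the work.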

113 Friedman and Nissenbaum (n 15) 334.

114 Kitchin (n 32) 17–18 (citations omitted).

115 See, eg, Barocas and Selbst (n 12) 695; Chander (n 12) 1025.

116 See Cobbe (n 17) 654 n 129.

117 Joe Tomlinson, Katy Sheridan and Adam Harkens, ‘Proving Public Law Error in Automated Decision-Making Systems’ (Draft Paper, PLP Annual Conference, October 2019) 10.

118 See generally Pasquale (n 12). For an attempt to empirically examine the workings of black-box ADM, see Alice Witt, Nicolas Suzor and Anna Huggins, ‘The Rule of Law on Instagram: An Evaluation of the Moderation of Images Depicting Women’s Bodies’ [2019] UNSWLawJl 20; (2019) 42(2) University of New South Wales Law Journal 557.

119 Burrell (n 68) 3–4.



The various types of opacity which can affect automated systems thus impede opportunities to identify and contest bias, and indeed other administrative errors, in ADM.

Opaque ADM processes raise new evidentiary challenges for an individual seeking judicial review.123 To comprehensively identify algorithmic biases, access to the relevant source code, algorithmic specifications and data would need to be obtained, which may prove difficult if government agencies are reluctant or unable to release this information for proprietary reasons.124 If access to this information is provided, specialist programming and mathematical knowledge will likely be required to understand how the algorithm works, and isolate programming or data errors. In many instances, the disclosure of the source code or the provision of a complex technical explanation of the underlying algorithm will not be sufficient for an individual to understand the rationale behind a decision affecting them, necessitating further explanation.125 As Cobbe notes, explanations of how algorithmic decisions are made may still fall short of fulfilling a legal obligation to provide reasons as to why a particular decision was made.126 The difficulty of understanding and explaining the way in which opaque automated systems work and produce certain outcomes limits the contestability of automated administrative decisions.

120 Ibid 4–5.

121 Coglianese and Lehr (n 5) 1167; Grimmelmann (n 26) 1734–5; Burrell (n 68) 4–12.

122 Australian Human Rights Commission and World Economic Forum, ‘Artificial Intelligence: Governance and Leadership’ (White Paper, January 2019) 10 (citations omitted).

123 For analysis of these issues in the UK context, see Tomlinson, Sheridan and Harkens (n 117) 12–14.

124 For example, freedom of information exemptions were successfully relied upon to refuse a request for the release of source code for the Australian Electoral Commission’s computer program that is used to count votes in the Senate and other elections in Cordover and Australian Electoral Commission [2015] AATA 956, [25]–[33] (Deputy President Melick and Member Taglieri). Proprietary rights also may be relevant if the design of automated systems is outsourced to private vendors. For analysis of these issues in the US context: see, eg, Robert Brauneis and Ellen P Goodman, ‘Algorithmic Transparency for the Smart City’ (2018) 20 Yale Journal of Law and Technology 103.

125 Article 29 Data Protection Working Party, ‘Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679’ (Guidelines No WP251rev.01, 6 February 2018) 25 <https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053> (‘A29WP Guidelines’).

126 Cobbe (n 17) 648.


B Doctrinal Challenges

Even if these practical, evidentiary hurdles are able to be surmounted, there are additional doctrinal impediments to applying judicial review grounds to bias in automated decisions. These challenges apply to both partially automated decisions, where a human relies on automated outputs in making a decision, and fully automated decisions, albeit in different ways.

The rule against bias seeks to prevent decision-makers exercising their power if they are actually or ostensibly biased, but will this extend to an otherwise impartial human decision-maker who has relied on an algorithmic output that is affected by bias? A claim of actual bias requires strong and clear evidence indicating that there is a ‘high probability’ that a decision-maker had a closed mind or has otherwise pre-judged the issues.127 Actual bias would be unlikely to apply to technology-assisted human decision-making unless, for example, there was cogent evidence indicating that the decision-maker knew that certain algorithmic outputs were biased yet nevertheless placed undue reliance upon them.128 Such a scenario seems improbable in practice.

A lower threshold is required to establish apprehended bias, as the court needs only to be satisfied that a fair minded and informed observer might conclude that the decision-maker did not bring an impartial and unprejudiced mind to the issues.129 This raises the question of whether biases introduced by a programmer via coding or data input choices will be sufficient to lead a reasonable observer to conclude that a human decision-maker might not have approached the issues impartially. In Hot Holdings v Creasy, the majority of the High Court emphasised that an ultimate decision is not necessarily affected if the providers of information to the decision-maker have interests in the decision.130 A similar logic is likely to apply to the early involvement of programmers, who may be sufficiently distant from the ultimate decision-maker that their actions or interests in designing and programming an automated system are unlikely to affect the impartiality of the authorised human decision-maker.131

The phenomenon of ‘automation bias’ also warrants consideration. Automation bias refers to humans’ susceptibility to defer to a computer program’s outputs due to a perception that such outputs are superior or even infallible.132 This type of bias can lead decision-makers to trust and accept the outputs of automated processes without further scrutiny, despite the above-mentioned risks of errors due to flaws in an automated system’s design, data inputs or underlying code. However, it will be difficult to prove that such influences amount to a breach of the rule against bias. As noted above, proof of pre-judgment is required to establish actual bias. For apprehended bias to be made out, there needs to be a logical connection between the source of the alleged bias, and how that factor may cause the decision-maker not to approach the task at hand impartially.133 However, predisposition is not prejudgment,134 and evidence will be required to support a reasonable apprehension that the decision-maker had a strongly held and inflexible view regarding the superiority and infallibility of automated outputs.135 Such evidence would need to involve more than simple number crunching; an informed observer would, for example, look beyond statistical evidence regarding the number of decisions a decision-maker has made that follow the outcomes suggested by an automated system.136 If a decision-maker can provide evidence that an automated output was only one of a number of factors considered,137 prejudgment is unlikely to be made out.

127 R v Australian Stevedoring Industry Board; Ex Parte Melbourne Stevedoring Co Pty Ltd [1953] HCA 22; (1953) 88 CLR 100, 116 (Dixon CJ, Williams, Webb and Fullagar JJ).

128 This requirement is typically difficult to satisfy unless there is, for example, a clear and public statement of bias, or an admission of guilt from the decision-maker: see, eg, Sun Zhan Qui v Minister for Immigration and Ethnic Affairs (1997) 81 FCR 71, 112–13 (Wilcox J); Gamaethige v Minister for Immigration and Multicultural Affairs [2001] FCA 565; (2001) 109 FCR 424, 442–3 (Stone J); Mark Aronson, Matthew Groves and Greg Weeks, Judicial Review of Administrative Action and Government Liability (Thomson Reuters, 6th ed, 2017) 653.

129 Ebner v Official Trustee in Bankruptcy [2000] HCA 63; (2000) 205 CLR 337, 344–5 (Gleeson CJ, McHugh, Gummow and Hayne JJ) (‘Ebner’).

130 [2002] HCA 51; (2002) 210 CLR 438, 455 (Gaudron, Gummow and Hayne JJ, Callinan J agreeing at 489).

131 Sarah Lim, ‘Re-thinking Bias in the Age of Automation’ (2019) 26(1) Australian Journal of Administrative Law 35, 40 n 62.

132 See, eg, Skitka, Mosier and Burdick (n 15); Nicholas Carr, The Glass Cage: Where Automation Is Taking Us (Random House, 2015); Citron (n 27) 1271–2.



Additional doctrinal obstacles arise in situations in which there is no meaningful human review of an automated output before a decision is made. Establishing actual bias requires evidence about the decision-maker’s state of mind, which would presumably only apply to a human decision-maker, rather than a computerised process without consciousness.138 As noted above, the decision-maker’s state of mind, and particularly whether a fair-minded lay person might apprehend that the decision-maker’s mind was not impartial, is also relevant to apprehended bias. Again, the reference to the decision-maker’s state of mind is difficult to reconcile with ADM systems, which do not exercise judgment or make decisions independently of the coding and data parameters set by their programmers.139 Moreover, would the reasonable observer be imputed with enough technical knowledge of potentially complex ADM systems to be aware of any bias introduced through data, coding and algorithmic design choices?140

Writing extra-curially, Perry J has opined that if pre-programmed processes producing predetermined outputs in response to particular inputs141 are used for discretionary decisions without human oversight, this could constitute a constructive failure to exercise the discretion, as well as raising questions of prejudgment or bias.142 However, the criteria for a reviewable decision endorsed by the majority in Pintarich require evidence of a human mental process, which will, of course, be lacking for fully automated processes. On balance, therefore, the rule against bias is unlikely to provide a promising avenue of review for either partially or fully automated decisions affected by bias.

133 Ebner [2000] HCA 63; (2000) 205 CLR 337, 345 (Gleeson CJ, McHugh, Gummow and Hayne JJ).

134 Minister for Immigration and Multicultural Affairs; Ex parte Jia (2001) 205 CLR 507, 539 (Gleeson CJ and Gummow J).

135 Ebner [2000] HCA 63; (2000) 205 CLR 337, 345 (Gleeson CJ, McHugh, Gummow and Hayne JJ).

136 An informed observer would look beyond statistical analysis of past decisions: ALA15 v Minister for Immigration and Border Protection [2016] FCAFC 30, [38] (Allsop, Kenny and Griffiths JJ).

137 In a related vein, in the US case of Wisconsin v Loomis, 881 NW 2d 749 (Wis, 2016), the Wisconsin Supreme Court found that reliance on an automated tool that produced recommended scores affecting the non-parole period of a sentence did not breach due process rights so long as the output of the machine- learning software was not the only factor considered: at 755.

138 Lim (n 131) 38 n 39.

139 Michael L Rich, ‘Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment’ (2016) 164(4) University of Pennsylvania Law Review 871, 897; Harry Surden, ‘Machine Learning and Law’ (2014) 89(1) Washington Law Review 87, 105–6.

140 Lim (n 131) 41–2.

141 See, eg, Grimmelmann (n 26) 1732.



More broadly, a judicial review challenge of bias in an individual decision is arguably an inadequate mechanism for addressing systemic biases in the code or data of automated systems. A finding of bias will typically result in the decision being remade without the participation of the decision-maker who may be biased, which would presumably mean that the decision would be remade by a human decision-maker without reliance on the automated system. However, there is no follow-up mechanism to ensure that an automated system exhibiting bias will be reprogrammed or provided with new data to prevent future incidences of biased decisions. Judicial review has limited utility in achieving administrative law ideals at the systemic level,143 which is particularly concerning in the case of automated systems given their potential to replicate errors at an ‘enormous scale if undetected’.144

Another potentially more fruitful ground of review for challenging opaque automated decisions is unreasonableness. Although there is no common law duty to give reasons in Australia,145 a conclusion of unreasonableness may apply to a decision which ‘lacks an evident and intelligible justification’.146 In Minister for Primary Industries and Energy v Austral Fisheries Pty Ltd, a decision based on delegated legislation containing a ‘statistical fallacy’ was declared to be unreasonable and therefore invalid.147 Automated decisions may be similarly irrational where miscoding, bugs or other flaws in the system’s algorithmic model or data result in spurious correlations between factors that are not logically connected. In addition, as mentioned above, automated systems may be opaque for a range of reasons, including secret proprietary algorithms, technical illiteracy and complexity.148 Each of these factors may restrict the intelligibility and comprehensibility of the outcome reached, or the reasoning process utilised, in a particular decision. The relevance of this ground of review was illustrated in the Amato court order in which an OCI decision based solely on income averaging was considered to be ‘irrational’ and thus unlawful.149 However, despite an expansion of the scope of unreasonableness review in recent years,150 unreasonableness remains difficult to establish and is a ground of ‘last resort’.151 It is therefore unlikely to be regularly utilised in successfully challenging ADM.
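The income-averaging problem referred to in Amato can be illustrated with a short, hypothetical calculation: annual employer-reported income is smoothed evenly across 26 fortnights and compared with the income actually reported in each fortnight. All figures below are invented, and the sketch does not represent the real OCI system.

```python
# Hypothetical sketch only: invented figures; the real OCI system is not represented.
FORTNIGHTS_PER_YEAR = 26


def averaged_fortnightly_income(annual_income: float) -> float:
    # The contested assumption: that annual income was earned evenly across the year.
    return annual_income / FORTNIGHTS_PER_YEAR


def assumed_extra_income(annual_income: float, reported: list[float]) -> float:
    # Income the averaging assumption attributes to fortnights in which the
    # person reported earning less than the smoothed average.
    average = averaged_fortnightly_income(annual_income)
    return sum(max(0.0, average - actual) for actual in reported)


# A casual worker who earned $26,000, but only in the second half of the year:
# 13 fortnights at $0 (while receiving benefits), then 13 fortnights at $2,000.
reported_fortnightly = [0.0] * 13 + [2000.0] * 13

print(averaged_fortnightly_income(26_000))                 # 1000.0
print(assumed_extra_income(26_000, reported_fortnightly))  # 13000.0
```

On these assumptions the system attributes $1,000 of income to every fortnight, including those in which nothing was earned; any purported overpayment calculated on that basis rests on the averaging assumption rather than on evidence of what was actually earned in a particular fortnight.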

142 Perry (n 6) 33.

143 Aronson, Groves and Weeks (n 128) 5–6.

144 Perry (n 6) 30.

145 Public Service Board of New South Wales v Osmond (1986) 159 CLR 656, 662 (Gibbs CJ); Wingfoot Australia Partners Pty Ltd v Kocak [2013] HCA 43; (2013) 252 CLR 480, 497–8 (French CJ, Crennan, Bell, Gageler and Keane JJ).

146 Minister for Immigration and Citizenship v Li [2012] HCA 61; (2013) 249 CLR 332, 367 [76] (Hayne, Kiefel and Bell JJ); Minister for Primary Industries and Energy v Austral Fisheries Pty Ltd (1993) 40 FCR 381, 400–1 (Beaumont and Hill JJ).

147 (1993) 40 FCR 381, 401 (Beaumont and Hill JJ). See also at 384 (Lockhart J).

148 Burrell (n 68).



In sum, there are a range of practical and doctrinal obstacles which create a risk that biases embedded in opaque automated systems will remain undetected and uncorrected, potentially impacting a large number of citizens. Once again, there is a gap between the novel challenges that are likely to arise in relation to ADM and the doctrines of administrative law developed for a human-centric decision-making context.

V THE NEED FOR DOCTRINAL AND REGULATORY EVOLUTION

The analysis in the previous Parts has underscored the tensions between key distinctive features of automation and traditional public law rules. Of course, automation is not the only disruptive situation that the law must deal with; other examples include environmental degradation and climate change,152 wars, forced migrations and pandemic outbreaks.153 Because such situations do not fit neatly within existing legal doctrines and frameworks, they generate pressure for changes in the law. As a result, ‘legal frameworks must evolve or new authoritative legal frames must be developed’.154 In the context of automation, legal evolution is evident in, for example, the adoption of legislative frameworks that, inter alia, regulate the use of ADM in the European Union (‘EU’) and the United Kingdom (‘UK’),155 and a proposal for legislative intervention in the United States (‘US’).156 There is also a growing number of court cases in Australia and overseas seeking to clarify the interests, rights and responsibilities that apply in the context of ADM in the public sector.157 This Part provides suggestions as to how Australian law might evolve to address the new challenges posed by automated government decision-making.

149 Order of Davies J in Amato (Federal Court of Australia, VID611/2019, 27 November 2019) [9].

150 See, eg, McDonald (n 84).

151 Transcript of Proceedings, Pilbara Infrastructure Pty Ltd v Australian Competition Tribunal [2012] HCATrans 52, 1337 (Gummow J).

152 See, eg, Elizabeth Fisher, ‘Environmental Law as “Hot” Law’ (2013) 25(3) Journal of Environmental Law 347, 347–8; Elizabeth Fisher, Eloise Scotford and Emily Barritt, ‘The Legally Disruptive Nature of Climate Change’ (2017) 80(2) Modern Law Review 173; Anna Huggins, ‘The Evolution of Differential Treatment in International Climate Law: Innovation, Experimentation, and “Hot” Law’ (2018) 8(3–4) Climate Law 195.

153 Fleur Johns, Richard Joyce and Sundhya Pahuja, ‘Introduction’ in Fleur Johns, Richard Joyce and Sundhya Pahuja (eds), Events: The Force of International Law (Routledge, 2011) 1.

154 Fisher, Scotford and Barritt (n 152) 178. See also Brownsword, Rights (n 8) ch 6; Roger Brownsword, Law, Technology and Society: Re-imagining the Regulatory Environment (Routledge, 2019) ch 8 (‘Law, Technology and Society’).

155 GDPR (n 22); Data Protection Act 2018 (UK). In April 2021, the European Commission presented a proposal for a new regulation for artificial intelligence: see European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’ (Document No 52021PC0206, 21 April 2021).

156 A Bill for an Algorithmic Accountability Act has been proposed by congressional Democrats in the US, which would require certain companies that use ‘high-risk automated decision systems’ to conduct algorithmic impact assessments: Algorithmic Accountability Act, S 1108, 116th Congress (2019).



Despite its relatively static nature, Australian administrative law is capable of developing in novel ways in response to disruptive phenomena. This is evidenced by how it has evolved, and continues to evolve, in response to environmental issues, for example.158 In a similar vein, some administrative law doctrines will need to be refined, recrafted and applied flexibly to address the issues raised by ADM. For instance, the dissenting judgment of Kerr J in Pintarich, discussed in Part III(B) above, exemplifies how existing doctrines, such as the legal meaning of a decision, can be adapted to respond to the changes in administrative decision-making practices precipitated by automation. The fact and context-specific approach to what constitutes a decision proposed by Kerr J is desirable to accommodate the increasing scale, complexity and variation of automated administrative processes.159 Further, in response to the challenges of relying on the bias rule in the context of ADM, Lim proposes modifications to the relevancy and unreasonableness grounds of review, which she argues are more feasible than a substantial revision of the bias rule.160

It must be appreciated, however, that there is a delicate balance to be struck between responding to phenomena such as automation, and maintaining the coherence of legal reasoning and the stability of legal orders.161 As Latour observes, ‘law has a homeostatic quality which is produced by the obligation to keep the fragile tissue of rules and texts intact’,162 underscoring the importance of balancing doctrinal development with the need for legal predictability. In the administrative law context, this homeostatic quality is likely to be intensified by the ‘Australian preference to work within existing historic or doctrinal categories’.163 This is compounded by the existing limitations of judicial review arising from, for example, its narrow statutory jurisdictional prerequisites,164 Australia’s ‘especially rigid’ separation of judicial power, the complicated jurisprudence surrounding jurisdictional error,165 and the technical, remedially-focused judicial review jurisdiction of the High Court.166 If the internal coherence of the law is to be maintained, there are inevitable limits on interpretive flexibility in this context.

157 For an overview of the legal challenges to automated government decision-making in the US, see, eg, Hannah Bloch-Wehba, ‘Access to Algorithms’ (2020) 88(4) Fordham Law Review 1265, 1273–9. In Australia, see, eg, Pintarich [2018] FCAFC 79; (2018) 262 FCR 41; Order of Davies J in Amato (Federal Court of Australia, VID611/2019, 27 November 2019); Masterton v Secretary, Department of Human Services (Cth) (Federal Court of Australia, VID73/2019, commenced 4 February 2019).

158 See, eg, Elizabeth Fisher, ‘“Jurisdictional” Facts and “Hot” Facts: Legal Formalism, Legal Pluralism, and the Nature of Australian Administrative Law’ [2015] MelbULawRw 7; (2015) 38(3) Melbourne University Law Review 968.

159 See above Parts III(A) and III(B).
160 Lim (n 131) 43–4.

161 Fisher, Scotford and Barritt (n 152) 200. Roger Brownsword has written extensively about a ‘coherentist’ approach to addressing the gaps between law and technology. For a notable recent contribution, see, eg, Brownsword, Law, Technology and Society (n 154) chs 6, 8.

162 Bruno Latour, The Making of Law: An Ethnography of the Conseil D’etat (Polity Press, 2010) 242–3.

163 Michael Taggart, ‘“Australian Exceptionalism” in Judicial Review’ [2008] FedLawRw 1; (2008) 36(1) Federal Law Review 1, 9.
164 Aronson (n 69) 204–9.

165 Taggart (n 163) 5, 8.



Moreover, the myriad challenges associated with ADM cannot be adequately addressed through judicial challenges alone. As previously mentioned, judicial consideration of individual cases has limited utility in addressing the systemic concerns associated with the use of automated systems. By its very nature, judicial review facilitates reactive, piecemeal and ad hoc judicial scrutiny of the administrative law implications of automated decisions. It is thus inherently limited in its ability to achieve system-wide reform.167

This raises the question of whether legislative reform is warranted to complement doctrinal evolution.168 This possibility has been recognised by the Australian Law Reform Commission (‘ALRC’), which proposed a reference to explore law reform options on the topic of ADM and administrative law in December 2019.169 To date, the Australian Government’s primary response to ADM in the public sector has been to issue non-binding policy guidance, including the Administrative Review Council’s Automated Assistance in Administrative Decision Making170 report in 2004, and the Australian Government’s ‘Automated Assistance in Administrative Decision-Making: Better Practice Guide’171 in 2007, which was revised and updated in 2020.172 The Australian Government also released the Artificial Intelligence Ethics Principles in November 2019,173 which provide high level, aspirational principles that apply to both public and private bodies. However, non-binding guidance is not, in isolation, a sufficiently robust response to ensure the appropriate design and deployment of automated systems.174

166 Peter Cane, ‘The Making of Australian Administrative Law’ (2003) 24(2) Australian Bar Review 114, 119–22, 131–4.

167 Aronson, Groves and Weeks (n 128) 5–6.

168 As Brownsword notes, a ‘regulatory-instrumental’ approach to addressing disconnections between law and technology emphasises, inter alia, new regulatory measures to narrow this gap: see, eg, Roger Brownsword, ‘Law and Technology: Two Modes of Disruption, Three Legal Mind-Sets, and the Big Picture of Regulatory Responsibilities’ (2018) 14(1) Indian Journal of Law and Technology 1, 19. See also Paterson’s recent analysis, which argues that legislative reform is warranted to ensure that applicable public law frameworks remain ‘fit for purpose’ in an increasingly automated public sector environment: Paterson (n 9) 665.

169 Australian Law Reform Commission, The Future of Law Reform: A Suggested Program of Work 2020–25 (Report, December 2019) 24–30 (‘Future of Law Reform’). An update on this report, including stakeholder feedback and a summary of recent relevant developments in the law, was provided in October 2020: Australian Law Reform Commission, The Future of Law Reform Update (Report, October 2020). In a related vein, regulatory reform to ensure that the use of artificial intelligence to make administrative decisions complies with human rights has also been recommended by the Australian Human Rights Commission (‘AHRC’): see Australian Human Rights Commission, ‘Human Rights and Technology’ (Final Report, 1 March 2021) ch 5 (‘AHRC Final Report’).

170 ‘Best Practice Principles’ (n 29).

171 Commonwealth Ombudsman, ‘Automated Assistance in Administrative Decision-Making: Better Practice Guide’ (Report, February 2007), archived at <https://web.archive.org/web/20070829185431/http://www.comb.gov.au/commonwealth/publish.nsf/AttachmentsByTitle/aaadm_guide/$FILE/aaadm_guide.pdf>.

172 ‘Better Practice Guide’ (n 1).

173 Department of Industry, Science, Energy and Resources (Cth), ‘AI Ethics Principles’ (Web Page, 2019) <https://www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework/ai-ethics-principles>.



Although the Australian Government does not support legislative intervention in all instances, its position is that it is an appropriate regulatory response ‘where there is high perceived risk or public interest and achieving compliance is seen as critically important’.175 As has been illustrated by the Centrelink OCI controversy, the stakes of poorly designed automated systems can be high due to their widespread impact,176 their disproportionate adverse impacts on vulnerable individuals,177 and their propensity to erode trust in government decision-making processes.178 This reinforces the merit of exploring new legal frameworks to help manage and address the distinctive features of ADM.

A Regulatory Reform Options

This section considers options for regulatory reform in response to the three doctrinal disconnects analysed in the preceding Parts. Given the risk of interpretive errors being embedded in the code of automated systems, reform is warranted to facilitate review of such systems, both before they are implemented and in any subsequent judicial challenges. As discussed in Part II, there are constitutional and doctrinal hurdles to providing proactive judicial advice on the interpretation of statutory provisions encoded in automated systems during the design phase. Given the potentially high stakes of miscoded automated systems, a worthwhile alternative might be to authorise an independent oversight body to scrutinise and audit such systems before they are implemented.179 In a similar vein, under the GDPR, public and private bodies relying on ADM that is ‘likely to result in a high risk to the rights and freedoms of natural persons’ are required to submit a Data Protection Impact Assessment to the relevant Member State regulatory body.180 In certain circumstances, this body has the power to temporarily or permanently ban the use of the system.181 In addition, the developers of automated systems should be required to include a comprehensive audit trail of the coding and design decisions made, including reference to relevant statutes, regulations, policies and case law.182 Such reforms are desirable to minimise the risk that miscoded automated systems will adversely impact citizens, and to facilitate subsequent review of decisions made by these systems. They would also help to address the ALRC’s call for law reform regarding ‘appropriate processes for correction, substitution, audit, and review of automated decisions’.183
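As a sketch of what the audit-trail requirement mentioned above might look like in practice, each coding or design decision could be recorded against its legal source so that it can be produced for pre-deployment scrutiny or in any later review. The record structure and example entry below are illustrative assumptions only; they are not a format prescribed by the Better Practice Guide, the GDPR or any statute.

```python
# Hypothetical sketch only: one possible shape for an audit-trail record.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class DesignDecision:
    rule_id: str               # identifier for the encoded business rule
    description: str           # what the rule does, in plain language
    legal_source: str          # statute, regulation, policy or case relied upon
    interpretation_notes: str  # how any ambiguity was resolved, and by whom
    approved_by: str           # officer who signed off on the translation
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


audit_trail = [
    DesignDecision(
        rule_id="GIC-REMIT-01",
        description="Route requests for full remission to manual assessment",
        legal_source="Taxation Administration Act 1953 (Cth) s 8AAG",
        interpretation_notes="Discretionary criteria not encoded; referred to a human officer",
        approved_by="Authorised delegate (hypothetical)",
    )
]

# The trail can be exported alongside the source code for an independent
# pre-deployment audit, or produced in a later review of a decision.
print(json.dumps([asdict(d) for d in audit_trail], indent=2))
```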

174 For example, Carney notes that principles 4 and 7 of the ‘Best Practice Principles’ (n 29) report were ‘ignored’ in the design of the OCI system: Terry Carney, ‘Bringing Robo-Debts before the Law: Why it’s Time to Right a Legal Wrong’, Law Society Journal (online, 1 August 2019) <https://lsj.com.au/articles/why-robo-debt-bringing-robo-debts-before-the-law-why-its-time-to-right-a-legal-wrong/>. In a similar vein, Mittelstadt argues that principles alone cannot guarantee ethical artificial intelligence due to, inter alia, the lack of robust legal and professional accountability mechanisms: Brett Mittelstadt, ‘Principles Alone Cannot Guarantee Ethical AI’ (2019) 1 Nature Machine Intelligence 501.

175 Department of the Prime Minister and Cabinet (Cth), ‘The Australian Government Guide to Regulation’ (Guide, March 2014) 29.

176 See above n 62.

177 Data released by the then Department of Human Services indicates that, as of February 2019, 2030 people had died since receiving a Centrelink debt notice, 663 of whom were classified as ‘vulnerable’ by the Department: Shalailah Medhora, ‘Over 2000 People Died after Receiving Centrelink Robo-Debt Notice, Figures Reveal’, TripleJ Hack (online, 18 February 2019) <https://www.abc.net.au/triplej/programs/hack/2030-people-have-died-after-receiving-centrelink-robodebt-notice/10821272>.

178 Paul Henman, ‘Here’s How Centrelink Can Win Back Australians’ Trust after the Robo-Debt Debacle’, ABC News (online, 21 March 2017) <https://www.abc.net.au/news/2017-03-21/how-centrelink-can-win-back-trust-after-the-robo-debt-debacle/8372788>.

179 Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7(2) International Data Privacy Law 76, 98.



Moreover, as was discussed in Part III, the majority decision in Pintarich suggests that administrative decision-making in Australia is ‘still regarded as an inherently human process’,184 yet there are currently no legislative safeguards to ensure that human decision-makers do in fact oversee and review automated outputs. This raises the question of whether there should be a requirement for human involvement for certain types of automated administrative processes.185 This too has been raised by the ALRC, which has identified exploring ‘the degree of human involvement, if any, that should be required for particular types of decisions’ as a potential option for law reform.186

In this regard, valuable lessons can be learned from the EU’s GDPR, which provides a range of protections for individuals affected by ADM. Specifically, article 22 of the GDPR provides a prohibition (with exceptions) against solely automated decision-making that affects individual rights and interests.187 In order to avoid the prohibition on solely automated decision-making, ‘meaningful’ human involvement, including consideration of all the relevant data, is required.188 In situations where exceptions to this general prohibition apply, the interests of affected individuals are protected by, at a minimum, the right to obtain human intervention, to express their views or to contest an automated decision.189
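A minimal sketch of an article 22-style safeguard might route any automated output with legal or similarly significant effect to mandatory human review before it issues, along the following lines. The field names and routing rule are assumptions for illustration; they are not drawn from the GDPR text or from any agency’s system.

```python
# Hypothetical sketch only: an article 22-style routing rule.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AutomatedOutput:
    subject_id: str
    recommendation: str   # e.g. "issue debt notice", "refuse remission"
    legal_effect: bool    # does the output affect the person's rights or entitlements?


@dataclass
class Decision:
    outcome: str
    decided_by: str       # "system" or the identifier of the reviewing officer


def route(output: AutomatedOutput, reviewing_officer: Optional[str] = None) -> Decision:
    # Low-stakes outputs may issue on a solely automated basis.
    if not output.legal_effect:
        return Decision(outcome=output.recommendation, decided_by="system")
    # Anything with legal or similarly significant effect must be confirmed
    # (or varied) by a human before it issues.
    if reviewing_officer is None:
        raise RuntimeError("Output queued for human review; it cannot issue automatically.")
    # For the involvement to be 'meaningful', the officer must consider all the
    # relevant data, not merely click through a confirmation screen.
    return Decision(outcome=output.recommendation, decided_by=reviewing_officer)


print(route(AutomatedOutput("A-1", "send information request", legal_effect=False)))
print(route(AutomatedOutput("A-2", "issue debt notice", legal_effect=True),
            reviewing_officer="officer-42"))
```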

180 GDPR (n 22) art 35(1). In a related vein, the AHRC has recommended that the Australian Government be required to undertake a human rights impact assessment before AI-informed decision-making systems are used to make administrative decisions: ‘AHRC Final Report’ (n 169) 58–9.

181 GDPR (n 22) arts 36(1), 58(2).

182 ‘Better Practice Guide’ (n 1) 24, 26; Miller (n 6) 32.

183 Future of Law Reform (n 169) 24.

184 Monika Zalnieriute et al, ‘From Rule of Law to Statute Drafting: Legal Issues for Algorithms in Government Decision-Making’ in Woodrow Barfield (ed), The Cambridge Handbook on the Law of Algorithms (Cambridge University Press, 2021) 251, 261.

185 The Better Practice Guide indicates that there should be human input for all automated administrative decisions involving the exercise of discretion or judgement: ‘Better Practice Guide’ (n 1) 29.

186 Future of Law Reform (n 169) 24. The AHRC has similarly recommended regulatory reform to ensure ‘rights of [human] review are available for people affected by AI-informed administrative decisions’: ‘AHRC Final Report’ (n 169) 71.

187 GDPR (n 22) arts 22(2)(a)–(c). The prohibition applies to decisions that ‘produces legal effects’ for the data subject or ‘similarly significantly affects him or her’: at art 22(1). For analysis of the scope of this protection in article 22, see, eg, ‘A29WP Guidelines’ (n 125) 20–2.

188 ‘A29WP Guidelines’ (n 125) 20–1.

189 GDPR (n 22) art 22(3). Under articles 22(2)(a)–(c), the general prohibition does not apply if the decision is required to enter or fulfil a contract, is authorised by an EU or member state law which provides suitable safeguards, or if the data subject has provided explicit consent.


If article 22-type protections applied in Australia, it would not be permissible for the ATO to despatch an automated letter communicating a binding payment plan for a tax debt to a taxpayer without meaningful review by a human decision-maker. Similarly, under the Centrelink OCI system, such protections would require a human officer to manually review automated debt discrepancy notices before they were sent to welfare recipients. Although the limitations of human decision-making must also be acknowledged, involving humans in ADM can help to identify errors in automated outputs and humanise automated processes which are incapable of taking into account individual circumstances or other relevant context.190

Furthermore, legislative safeguards are desirable to address the types of opacity and bias challenges pertaining to ADM discussed in Part IV. In line with transparency ideals, government agencies ought to be required to make the source code of automated systems publicly available.191 However, transparent source code may be insufficient for achieving algorithmic accountability,192 particularly in relation to individual decisions, necessitating additional measures. Again, the GDPR provides an exemplar in this regard. Specifically, articles 13(2)(f), 14(2)(g) and 15(1)(h) of the GDPR create a suite of notification, access and explanation rights for individuals subject to automated decisions. Government agencies and other organisations using automated processes must proactively notify affected individuals of the existence of solely automated decision-making,193 and provide meaningful information about the logic of the decision-making process, and the significance and envisaged consequences of such processing for an individual.194 An individual also has a right to request access to these types of information.195 These provisions recognise that an individual can only challenge a particular decision or express their view if they understand ‘how it has been made and on what basis’.196 Similar safeguards to achieve transparency in relation to automated systems in their entirety, as well as individual decisions made by these systems, are warranted in Australia.197
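By way of illustration only, a notification of the kind contemplated by articles 13(2)(f), 14(2)(g) and 15(1)(h) might be assembled along the following lines. The GDPR prescribes the categories of information (the existence of automated decision-making, meaningful information about its logic, and its significance and envisaged consequences), not this wording or structure, which is assumed for the sketch.

```python
# Hypothetical sketch only: the wording, fields and example values are invented.
def build_adm_notice(decision_type: str, main_factors: list[str],
                     consequence: str, contact: str) -> str:
    factors = ", ".join(main_factors)
    return (
        f"This {decision_type} was made using an automated process. "
        f"The main factors the system takes into account are: {factors}. "
        f"The likely consequence for you is: {consequence}. "
        f"You may request human review, express your views or contest the "
        f"outcome by contacting {contact}."
    )


print(build_adm_notice(
    decision_type="debt assessment",
    main_factors=["employer-reported annual income", "the fortnightly income you reported"],
    consequence="a requirement to repay the amount stated unless further information is provided",
    contact="the review officer named in this notice",
))
```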

190 Tim Wu, ‘Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems’ (2019) 119(7) Columbia Law Review 2001, 2004–5. In a related vein, in the context of judges, Sourdin argues that ‘[p]roponents of the view that judges can be replaced by AI are arguably missing the point in relation to what judges contribute to society which extends beyond adjudication and includes important and often unexamined issues relating to compliance and acceptance of the rule of law’: Tania Sourdin, ‘Judge v Robot? Artificial Intelligence and Judicial Decision-Making’ [2018] UNSWLawJl 38; (2018) 41(4) University of New South Wales Law Journal 1114, 1124.

191 This would codify the Australian Government’s non-binding guidance that all new source code should be open as a default: Digital Transformation Authority (Cth), ‘Make Source Code Open’, Digital Service Standard Criteria (Web Page, 2019) <https://www.dta.gov.au/standard/8-make-source-code-open/>.

192 See, eg, Mike Ananny and Kate Crawford, ‘Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability’ (2018) 20(3) New Media and Society 973; Deven R Desai and Joshua A Kroll, ‘Trust but Verify: A Guide to Algorithms and the Law’ (2017) 31(1) Harvard Journal of Law and Technology 1; Joshua Kroll et al, ‘Accountable Algorithms’ (2017) 165(3) University of Pennsylvania Law Review 633, 638–9.

193 See also ‘Better Practice Guide’ (n 1) 29.
194 GDPR (n 22) arts 13(2)(f), 14(2)(g).

195 Ibid art 15(1)(h).

196 ‘A29WP Guidelines’ (n 125) 27.

197 For analysis of these issues from a human rights perspective, see ‘AHRC Final Report’ (n 169) 60–1.


These types of legislative reforms are not a panacea, and concerns have been raised about the potential limitations and loopholes associated with the GDPR protections.198 Nevertheless, in light of the government’s commitment to digital transformation, and the current doctrinal and evidentiary impediments to successful judicial review of automated administrative decisions, similar regulatory reforms that are appropriately tailored to the Australian public law context199 ought to be prioritised. In this way, the transformative potential of ADM can be at least partially counterbalanced by an appropriate evolution in legal frameworks.

VI CONCLUSION

This article has analysed three distinctive features of ADM that are difficult to reconcile with Australian administrative law doctrine, and which, therefore, generate pressure for new legal responses. First, there is a potential dissonance between the courts’ complex expectations of statutory interpretation and rationality in administrative decision-making, and the way that automated systems are programmed and operate. The risk of miscoded automated systems is compounded by the requirement for a matter before the courts can offer interpretive guidance, which impedes proactive judicial advice on statutory construction to inform coding decisions for automated systems. Secondly, there is a disjuncture between the variation and complexity of ADM in practice and the legal meaning of a decision. Whilst the Pintarich case suggests that discretionary administrative decision-making requires a human mental process, a regulatory gap is evident as there is currently no concomitant requirement for human decision-makers to oversee and review automated outputs for discretionary or high stakes government decisions. Thirdly, the new types of bias risks posed by opaque automated systems are not easily addressed through the bias rule for either partially or fully automated decisions affected by biased code or data, or automation bias. These mismatches between the law and technology impede opportunities for meaningful legal accountability for errors in automated administrative decisions.

The nature and extent of the disconnection between ADM and administrative law needs to be understood to inform appropriate legal and regulatory solutions. Although they have not been a focus of the legal analysis in this article, it should be acknowledged that technological solutions can also play an important role in helping to narrow this gap.200 As automated systems continue to evolve, their capacity to produce more sophisticated, nuanced and situation-specific outputs based on complex data sources will no doubt continue to improve. Further interdisciplinary research between public lawyers and computer scientists is needed to enhance the congruence between ADM and administrative law expectations.

198 See, eg, Wachter, Mittelstadt and Floridi (n 179); Veale and Edwards (n 74); Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16(1) Duke Law and Technology Review 18, 44–51.

199 Note that the Court of Justice of the European Union’s preliminary ruling in the ‘Schrems II’ case indicates that the GDPR should be interpreted in light of, inter alia, the European Union’s Charter of Fundamental Rights: Data Protection Commissioner v Facebook Ireland Ltd (Court of Justice of the European Union, C-311/18, ECLI:EU:C:2020:559, 16 July 2020). In the absence of a comparable human rights framework in Australia, the question of how to best enact equivalent protections with similar effect to those in the GDPR warrants further inquiry. I thank Associate Professor Mark Burdon for this insight. See also Ng et al (n 9) Part IV; ‘AHRC Final Report’ (n 169) ch 5.



In terms of legal solutions, as was alluded to by Kerr J in his dissenting judgment in Pintarich, administrative law doctrine should not remain static in the light of changes in government decision-making processes.201 A preferable outcome is for legal conceptions of what constitutes a decision, and other relevant administrative law doctrines, to be reframed and recrafted to accommodate the reality of how decisions are made in a contemporary context. Any such evolution in legal doctrine will of course need to be balanced with the need for legal stability, predictability and the ‘integrity of the legal edifice’.202 Furthermore, the limits of context-specific judicial responses to the structural and systemic challenges posed by ADM need to be acknowledged.

Regulatory reform is also warranted to address the new and distinctive challenges that automation poses for administrative law. Part V outlined reform options to facilitate systemic oversight of the design of automated systems, promote meaningful human involvement in decisions significantly affecting individuals, and empower affected individuals to contest errors in automated decisions. These suggested reforms ought to be considered as part of a broader suite of measures to address the gaps in public law frameworks arising from the increasing automation of government decision-making.203 Ultimately, doctrinal and regulatory evolution are both needed to ensure that administrative law values and protections remain meaningful in the digital age.

200 As Brownsword argues, it is important to critically examine whether legal rules are ‘fit for purpose’, as well as the extent to which technological solutions can appropriately address legal and regulatory purposes: Roger Brownsword, Law 3.0: Rules, Regulation and Technology (Routledge, 2021) 115; Brownsword, Law, Technology and Society (n 154) 197–8.

201 Pintarich [2018] FCAFC 79; (2018) 262 FCR 41, 49 [49].

202 Latour (n 162) 243; Fisher, Scotford and Barritt (n 152) 200.

203 See also, eg, the proposed reforms to, inter alia, the Privacy Act 1988 (Cth), the ADJR Act 1977 (Cth), and the Freedom of Information Act 1982 (Cth), cited in Ng et al (n 9) 1073–6. See also the suggested reforms to the ADJR Act 1977 (Cth), anti-discrimination laws, and information laws in Paterson (n 9) 664–5.

