University of New South Wales Law Journal Student Series
ROBODEBT AND BEYOND: EXPLORING THE COMPATIBILITY OF AUTOMATED DECISION MAKING WITH ADMINISTRATIVE LAW GOALS
INES CELIC[1]
I INTRODUCTION
Government reliance on automated decision-making (‘ADM’) systems has been steadily increasing in Australia, reflecting a broader societal trend of embracing technology for public benefit.[2] Since the 1980s,[3] the Australian government has used automated systems in diverse areas including social welfare, healthcare, law enforcement and national security.[4] Despite its noble aspirations, the use of ADM in government decision making has attracted significant controversy.[5] This controversy has grown notably since the Robodebt scandal, which demonstrated “the potential dangers presented by automated decision-making systems”.[6] In light of government departments’ drive for further automation, this essay will examine whether ADM is compatible with the Australian administrative law system and the values it upholds. It ultimately argues that transparency and accountability issues are the most critical challenges to ADM and that future reforms must overcome these challenges for ADM to be compatible with our administrative law values.
This essay will firstly define ADM and outline the administrative law values according to which ADM’s compatibility will be assessed. Secondly, this essay will consider the ways in which ADM is compatible with administrative law and the values that it strengthens, namely efficiency. Thirdly, this essay will argue that the most critical challenges posed by ADM compromise the administrative law values of transparency, accountability and to a certain extent, fairness. Lastly, through examining both Australian recommendations and international examples, this essay will consider certain safeguards and reforms that can be implemented to effectively address these challenges and maintain a level of optimism towards ADM.
A Defining ADM
Broadly speaking, ADM is a term that refers to “a computerised process that either assists or replaces the judgement of human decision-makers”.[7] Many authors stipulate that ADM is a broad term that includes ‘two waves’ of AI. The first wave typically refers to rules-based systems that follow a series of pre-programmed rules mirroring the responses of a human decision-maker.[8] The second wave comprises constructed knowledge systems that ‘learn’ from data to draw inferences about new situations while mimicking human thought processes.[9] Terry Carney’s classification also distinguishes ‘supportive’ ADM systems from ‘replacement’ ADM systems.[10] This reflects the diversity of automation systems, which may aid human decision-making, make partial decisions or replace human decision-making entirely.[11] While most ADM systems used by Australian agencies are rules-based systems constrained to simple determinations such as social security entitlements,[12] more advanced machine learning is increasingly being experimented with in other jurisdictions and may be implemented in Australia in the near future.[13]
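The first-wave category can be made concrete with a short sketch. The eligibility criteria and thresholds below are invented for illustration only and are not drawn from any actual statute; the point is that a rules-based system simply encodes a fixed checklist as conditional logic, with no discretion.

```python
# Toy first-wave ('rules-based') ADM sketch: pre-programmed rules mirror
# the checklist a human officer would apply. All criteria and thresholds
# here are hypothetical, not taken from any real social security law.

def eligible_for_benefit(age: int, fortnightly_income: float, is_resident: bool) -> bool:
    """Apply rigid, binary criteria with no discretion."""
    if not is_resident:
        return False          # residency requirement (hypothetical)
    if age < 22:
        return False          # minimum age (hypothetical)
    if fortnightly_income > 1400:
        return False          # income cut-off (hypothetical)
    return True
```

Because every case satisfying the same conditions yields the same outcome, such a system is fast and consistent, but it can only handle the "simple, straightforward questions that require binary choices" described above; anything requiring judgement falls outside it.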
B Identifying Administrative Law Values
It is argued that the major themes of administrative law have not changed since the monumental Kerr reforms that conceived the foundations of our current administrative law system.[14] The Kerr reforms established a system which, at its core, is rooted in external scrutiny of administrative decisions, primarily by courts and tribunals.[15] According to the Administrative Review Council (‘ARC’), the values and objectives underlying these reforms and our current administrative law landscape include lawfulness, fairness, openness, rationality and efficiency.[16] These values predominantly protect the individual, stemming from the Kerr committee’s strong emphasis on individual rights.[17] However, these values are not given the same significance in merits and judicial review. In Quin, Brennan J expressed the view that the priority of courts is not to protect individual rights but rather to assess the legality of government decision-making.[18] On this view, the court does not advance competing priorities, such as efficiency, alongside its primary task of assessing the lawfulness of government decisions.[19] In merits review, by contrast, genuinely competing values arise, as a pertinent tension exists[20] between the tribunal’s legislative objectives of providing both efficiency and justice.[21]
This essay does not intend to address all administrative law values in equal detail. This essay affirms that external scrutiny is at the heart of our administrative law system, as government decisions must be scrutinised to ensure that they are within the bounds of lawfulness and thus uphold the rule of law. This essay considers lawfulness to be inextricably intertwined with the values of transparency and accountability because scrutiny as to lawfulness is simply not possible without accountability and transparency mechanisms. This essay therefore argues that the primary values of accountability and transparency are indispensable to the overarching goal of our administrative law system. As this essay will continue to argue, these are the values most seriously and directly compromised by ADM. This essay will also discuss fairness issues, but argues that unfairness is not wholly inherent to ADM, as it ultimately stems from human error rather than from the system per se. Lastly, while efficiency is seen to be upheld by ADM and is significant in merits review, this essay treats it as a secondary value that should not unreasonably compromise the primary values.
II ADVANTAGES OF ADM
ADM is considered to be compatible with administrative law values to the extent that it promotes the efficiency of primary government decision-making by reducing time and cost. Governments have favoured ADM as it is cheaper and offers quicker decision-making, especially in light of the current unprecedented quantity and complexity of legislation, which poses significant challenges to human decision-makers and their capabilities.[22] The efficiency of ADM is especially valued in government departments involved in high-volume decision-making.[23] Automated systems have also been seen as especially useful in areas of the law that include “heavily rule-based content”, such as social security.[24] This is because rigid rules and criteria, typically involving simple, straightforward questions that require binary choices with limited or no discretion, can be applied more quickly by machines than by humans.[25] The importance of efficiency should not be understated, as people’s lives depend on government decisions and undue delay can therefore have serious practical consequences. Furthermore, although not listed as a fundamental value by the ARC, ADM has also been praised for creating greater consistency in primary government decisions.[26] The ‘objective’ nature of ADM systems and the rigidity of algorithmic codes mean that ADM systems are constructed to apply rules in the same way in every case and are thus thought to remove the risk of error caused by inherent human frailties.
III CHALLENGES OF ADM
Despite these advantages, ADM has been widely criticised for being incompatible with our administrative law system to the extent that it lacks transparency, prevents accountability, and compromises fairness. More attention should be given to transparency and accountability challenges for two reasons. Firstly, these two values go to the very essence of our administrative law system based on external scrutiny. Secondly, they are values that, unlike fairness, are directly and inherently affected by ADM systems.
A Transparency Issues
Transparency is considered to be a critical value in administrative law because it is a precondition, or the first step,[27] of the accountability and scrutinising process. Indeed, without decision-making being visible, decision-makers cannot be held accountable if they have acted unlawfully. Additionally, without transparency, individuals seeking review will not know on what grounds they may be able to challenge a government decision. Transparency challenges associated with ADM are multifaceted.[28] This section will explore challenges associated with two forms of transparency: algorithmic disclosure and reasons. Both forms of transparency should be upheld as they are closely interrelated, because any availability of reasons depends to some extent on the disclosure of the relevant algorithmic rules.[29] This section will also explore how this transparency may be compromised by technical literacy barriers.
1 Algorithms and Outsourcing
Firstly, algorithms represent the “sequence of instructions or set of rules designed to complete (the) task” of decision-making.[30] It has been argued that for automated decisions to be transparent, algorithms which demonstrate how the automated system functions need to be disclosed.[31]
As government agencies typically lack the technical expertise to create ADM systems independently, they have engaged the private sector to build these systems. While outsourcing may be necessary, this section argues that it has obstructed the transparency of algorithms. Outsourcing agreements often vest ownership of algorithms in the private sector, preventing their disclosure. The Freedom of Information Act 1982 (Cth) (‘FOI Act’)[32] and its state counterpart, the Government Information (Public Access) Act 2009 (NSW) (‘GIPA Act’),[33] fail to effectively compel the disclosure of ADM system algorithms. The FOI Act generally requires the agency or Minister to give access to requested ‘documents’,[34] which typically extends to algorithms given the broad definition of ‘document’.[35] However, the legislative exemptions to disclosure have significantly nullified this general obligation to provide access to documents such as algorithms where contractors are involved.[36] The most relevant exemptions cover documents disclosing trade secrets or commercially valuable information[37] and documents containing material obtained in confidence.[38] The s 47 exemption was successfully relied upon by the Australian Electoral Commission to deny an FOI request concerning its vote-counting software, as the AAT found that the software’s release would diminish its commercial value and disclose a trade secret.[39] Additionally, the s 47 exemption can easily be relied upon by governments to withhold algorithms where the outsourcing contract contains an in-confidence clause,[40] which is regularly included in government contracts.[41]
Furthermore, s 6C of the FOI Act requires agencies to take ‘contractual measures’ to ensure that they receive a requested document relating to the performance of a Commonwealth contract that is in the possession of a contractor,[42] thereby attempting to oblige agencies to enter into contracts that enable disclosure of contractors’ documents. However, the effect of s 6C has been recognised as ‘limited’,[43] since document requests may be refused if an agency or Minister has taken all reasonable steps to obtain the document from a contractor without success.[44] At the NSW level, the challenges posed by legislative exemptions are similar. While s 121(1) of the GIPA Act requires outsourcing contracts to provide the agency with an immediate right of access to the contractor’s information,[45] government agencies have relied upon the ‘substantial commercial disadvantage’ exemption[46] to refuse the disclosure of software source codes.[47]
Overall, the ‘secrecy’ of algorithms stems from the fact that governments must contract with the private sector to build automated systems. The reality is that private-sector and contractual interests are antithetical to transparency. As Seddon argues, there is an ‘almost perfect’ contradiction between transparency and contracts, which are “traditionally about secrecy” and “private parties”.[48] As non-parties to the government contract, individuals are unable to enforce the contract against contractors and demand disclosure of algorithms. They must instead depend on the government to obtain this information. However, the aforementioned legislative exemptions ultimately prioritise private and contractual rights at the cost of transparency. This creates significant transparency barriers, as without access to algorithms, scrutiny of the steps taken to reach a decision is not possible.
2 Reasons and Technical Literacy
Besides algorithms, which involve transparency of ADM systems at a general level, reasons for particular decisions need to be accessible for there to be transparency at an individual level. In other words, disclosure of an ADM system’s algorithm may not fully satisfy the obligation to provide reasons as to why a particular decision was made.[49] In contrast to algorithms, reasons are the “thread which weave the evidence and the findings together... to the statutory criteria, explaining all the steps in the reasoning process which led to the decision”.[50] Although there is no general common law right to reasons,[51] under both the Administrative Appeals Tribunal Act 1975 (Cth) (‘AAT Act’)[52] and the Administrative Decisions (Judicial Review) Act 1977 (Cth) (‘ADJR Act’),[53] decision-makers are expressly required to supply reasons to those ‘aggrieved’ by their decisions, with limited exceptions.
Importantly, it has been argued that the ‘black-box’ nature of ADM systems poses a barrier to meeting the legal requirement for reasons.[54] ADM systems are allegedly ‘black-boxes’ because they reach outcomes while ‘evading intuitive explanation’ as to how they got there in a way that can be readily understood.[55] This means that even the ADM system’s programmers might be unable to explain the reasoning processes underpinning a specific machine-learning outcome.[56] Moreover, even where ADM systems may at least provide some form of ‘reasons’, this may not satisfy the requirements of reasons set out in case law. While the level of adequacy required by the reasons depends on the relevant statute, there must be a ‘minimum level of explanatory value’.[57] This means that reasons must contain sufficient detail to enable a person to understand why a decision went against them.[58] Bennett Moses and Collyer demonstrate these issues while discussing the relatively simple machine-learning method of probabilistic ‘decision trees’.[59] While documents containing decision tree diagrams may attempt to provide some transparency about the reasoning used by the ADM system in a specific case, they are unlikely to satisfy the ‘minimum explanatory value’ requirement as it is challenging to decipher from the document which factors, or ‘input variables’,[60] were determinative in reaching the outcome.[61]
Furthermore, if the ‘reasons’ are provided in the form of complex diagrams or raw data sheets, the minimal requirements of intelligibility of reasons may not be satisfied.[62] For example, decision-tree diagrams, which often involve hundreds or thousands of ‘trees’,[63] would not be meaningful to an individual affected by an adverse decision[64] if they are too technically complex to be comprehensible. Indeed, the courts have emphasised that reasons should be expressed in clear language so that they are capable of being understood.[65] It is thus clear that the ‘intelligibility’ of reasons is strongly linked to the technical literacy required to interpret these forms of ‘reasons’, often described as a ‘specialist skill’.[66] This is what Burrell describes as the second form of ‘opacity’,[67] as the majority of the public will be unable to extract useful knowledge from these documents for the purposes of contesting the decision.[68] Without access to expert technical assistance, most individuals are unable to understand and properly scrutinise highly technical forms of reasons for automated decisions, and such transparency cannot be considered meaningful.
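The gap between disclosure and explanation can be sketched with a deliberately tiny, invented decision tree. Even here, dumping the raw structure does not, by itself, tell an affected person which input variable was determinative for their case; real systems may combine hundreds or thousands of such trees, compounding the problem.

```python
# Hypothetical sketch of a single decision tree used by an ADM system.
# The variables, thresholds and outcomes are invented for illustration.

TREE = {
    "split": ("declared_income", 1200),        # (input variable, threshold)
    "low":  {"split": ("weeks_employed", 20),
             "low":  {"leaf": "eligible"},
             "high": {"leaf": "ineligible"}},
    "high": {"leaf": "ineligible"},
}

def classify(node: dict, case: dict) -> str:
    """Walk the tree and return the leaf outcome for one case."""
    if "leaf" in node:
        return node["leaf"]
    variable, threshold = node["split"]
    branch = "low" if case[variable] <= threshold else "high"
    return classify(node[branch], case)

# Two cases with the same declared income reach opposite outcomes:
case_a = {"declared_income": 900, "weeks_employed": 30}   # -> 'ineligible'
case_b = {"declared_income": 900, "weeks_employed": 10}   # -> 'eligible'
```

Disclosing `TREE` is ‘transparent’ in one sense, yet it is a structural dump, not a statement of reasons: it does not itself identify which factor was determinative for a given individual, which is precisely the ‘minimum explanatory value’ problem described above.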
Challenges of intelligibility and technical literacy are also relevant to tribunals, which review the original automated decision to arrive at a ‘correct and preferable decision’.[69] Automated decisions that have been subject to merits review have involved tribunals considering primary decisions in the form of unintelligible ‘screen dumps’.[70] Carney notes that in the context of AAT1 social security hearings, the documentation tendered includes large quantities of raw, unformatted input information in addition to electronic file notes where pertinent information was ‘buried away’ in the myriad of documents.[71] Thus, if ‘reasons’ are provided in the form of complex technical documents, the issues and legal reasoning pertaining to a decision will not be sufficiently transparent for tribunals to assess primary decisions in merits review.
B Accountability Issues
Even if ADM system algorithms and reasons are sufficiently transparent, there are accountability challenges that prevent review and scrutiny of automated decisions. While multiple accountability concerns have been expressed with respect to judicial review, this section will focus on judicial review under the ADJR Act. It will concentrate on one particular jurisdictional prerequisite to judicial review which has recently attracted controversy following Pintarich.[72] Namely, this prerequisite demands that an automated system’s ‘decision’ constitute a decision in the relevant legal sense[73] for judicial review to be available.
In Pintarich, Mr Pintarich received a computer-generated letter in December 2014 from the Australian Taxation Office (‘ATO’) outlining that most of the general interest charge (‘GIC’) on his tax debt would be remitted.[74] Specifically, this letter was automatically generated after ATO officer Mr Celantano keyed information into an ATO template for bulk-issue letters. However, in May 2016 the ATO communicated to Mr Pintarich that the first letter had erroneously remitted the GIC. Notably, Mr Celantano alleged that he had never decided to remit the GIC and that the December 2014 letter did not reflect his intention.[75] Mr Pintarich’s review application under the ADJR Act asserted that the May 2016 decision was ultra vires and should be set aside on the basis that, by issuing the initial December 2014 letter, the ATO had already exercised its power to make a decision to remit the GIC.[76] The key issue in this case was thus whether, by issuing the relevant letter, the authorised ATO officer had made a ‘decision’ to exercise the discretionary power in s 8AAG of the Taxation Administration Act 1953 (Cth)[77] to remit GIC.[78] The majority held that the first letter did not constitute a ‘decision’, as its issuing did not involve Mr Celantano engaging in a ‘mental process’[79] of deliberating on the issue of the GIC and reaching a conclusion. Mr Pintarich was therefore unable to hold the ATO accountable for the alleged ultra vires decision-making.
The reasoning in Pintarich poses a significant barrier to external scrutiny of automated decisions, which, as aforementioned, goes to the heart of our administrative law system and the values it aims to uphold. On the majority’s logic, fully automated decisions would seldom be open to scrutiny in the form of judicial review,[80] since removing the need for human mental processes is precisely the modus operandi of such systems. This argument was conveyed in Kerr J’s dissent, which observed that automated decisions occurring independently of human input are ‘rapidly becoming unexceptional’ and that the legal conception of a decision should be broadened so as not to enable decision-makers to go back on their decisions.[81] There thus seems to be a misalignment between government objectives for increased ADM and courts that fail to view automated decisions as decisions in the relevant legal sense. On one hand, an overly broad conceptualisation of a ‘decision’ may undermine the efficiency of the administrative process,[82] and the accountability of decision-making therefore needs to be counterbalanced, to some extent, with the efficiency of government administration. However, as already mentioned, efficiency is a secondary value to accountability. Efficiency should not justify decreased accountability where, in light of the prevalence of ADM, it would deprive so many individuals of the ability to hold decision-makers accountable.
On a related note, there are currently at least 29 Commonwealth Acts and instruments that specifically authorise automated decision-making[83] by containing deeming provisions that deem any decisions made through automation to be decisions of the legal, human decision-maker. This means that the human decision-maker is expected to retain a level of control over the automated system and is held accountable for its decisions.[84] It is argued that the legislature has used deeming provisions to establish a type of legal fiction, as it deems the actions of the computer to be a decision, including by inference the necessary mental element.[85] Notably, the legislation in Pintarich did not contain a provision deeming the computer program’s output to be a decision of the decision-maker. Thus, Mr Celantano was not given ultimate responsibility for the decision. Additionally, Mr Celantano did not retain a meaningful level of control over the ATO’s automated system, because his role was limited to simply keying in the information and he was unable to review the letter before its dispatch.[86] It is possible that had the ATO’s ADM system included genuine human intervention requiring the officer to confirm the contents of the letter before dispatch, there would have been a ‘mental process’ qualifying this as a ‘decision’ capable of judicial review. Either way, the consequences of Pintarich are significant, as the court’s reasoning deprives many individuals subject to automated decisions of scrutiny and accountability through judicial review. As a broader consequence, missing accountability mechanisms can seriously compromise public trust and confidence in government decision making.[87]
C Fairness Issues
Fairness issues implicated in ADM systems have undoubtedly been significant. As this essay argues however, unlike transparency and accountability concerns, fairness is not compromised by ADM systems per se. Rather, fairness and lawfulness issues that have arisen in ADM contexts ultimately stem from human errors and design, as demonstrated by Robodebt and the Dutch childcare benefits scandal.
In accordance with Australian AI Ethics principles, the Report of the Royal Commission into the Robodebt Scheme (‘the Report’) outlined that to be fair, ADM systems should be accessible and “not involve or result in unfair discrimination against individuals, communities or groups”.[88] Although the Report discussed multiple aspects of unfairness that were also related to unlawfulness, this section will consider the principal arguments that were effectively made out by applicants at the AAT1 level of review[89] as well as the court’s findings of illegality in Amato v Commonwealth.[90] Firstly, by calculating averaged income rather than actual income, the Robodebt system found that individuals had received income at times when they had not; entitlements were thus not calculated on the basis of actual fortnightly income, as is legally required for Youth Allowance (‘YA’) and Newstart Allowance (‘NSA’) entitlements.[91] This was also unfair because many individuals earned money on a casual basis that fluctuated over a year-long period, making the ‘average income’ an inaccurate determination of their income. Secondly, ss 1222A(a) and 1223 of the Social Security Act 1991 (Cth)[92] were also not complied with. There was no relevant provision automatically creating a debt on the basis that data-matching indicated a discrepancy.[93] Additionally, the onus was on Centrelink to provide sufficient material of high probative value[94] in order to prove a difference[95] between the welfare payment given to an individual and the amount that the person was entitled to. The Report described this shift of onus, obliging recipients to establish their earnings by obtaining payslips for periods as long as five years, as a ‘fundamental unfairness’[96] given the extreme impracticality of obtaining this evidence.
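The averaging flaw can be illustrated arithmetically. The figures, rates and taper rule below are entirely hypothetical, not the actual Social Security Act formulas; the sketch only shows the mechanism by which smearing annual income evenly across fortnights attributes income to fortnights in which none was earned, manufacturing a phantom ‘overpayment’.

```python
# Hypothetical illustration of the income-averaging flaw.
# Rates, free area and taper are invented for demonstration only.

FORTNIGHTS = 26

def entitlement(fortnightly_income: float, base_rate: float = 550.0,
                free_area: float = 150.0, taper: float = 0.5) -> float:
    """Toy entitlement rule: full base rate, reduced by 50c per dollar
    earned above a 'free area' in that fortnight (hypothetical values)."""
    reduction = max(0.0, fortnightly_income - free_area) * taper
    return max(0.0, base_rate - reduction)

# Casual worker: high earnings in 4 fortnights, nothing in the other 22.
actual = [6500.0] * 4 + [0.0] * 22

# Lawful approach: assess each fortnight on actual earnings.
# (Payment is zero in the working fortnights, full in the rest.)
amount_paid = sum(entitlement(i) for i in actual)            # 12100.0

# Averaging approach: annual income smeared evenly over all fortnights.
averaged = [sum(actual) / FORTNIGHTS] * FORTNIGHTS           # 1000.0 each
recalculated = sum(entitlement(i) for i in averaged)         # 3250.0

# Averaging 'finds' income in the 22 zero-income fortnights, so the
# recalculated entitlement falls far below what was lawfully paid,
# and the difference is raised as a debt that never existed.
phantom_debt = amount_paid - recalculated                    # 8850.0
```

Under these invented numbers, a person who was paid entirely correctly on their actual fortnightly income would be told they owe $8,850, purely as an artefact of the averaging method.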
Although there were multiple aspects of unfairness, the Report found that they stemmed from one essential factor: the implicit biases that motivated the system.[97] The Robodebt system and its unfairness were ultimately motivated by the biases of policies[98] that viewed welfare recipients as ‘potential cheats’.[99] This finding reflects the view that, while ADM systems have been praised for their alleged mechanical objectivity, free of human frailties, an algorithm can never be fully detached from humans and their biases.[100] While Robodebt has certainly tarnished ADM’s reputation, this undeniable link between ADM systems and the humans who construct them demonstrates that the unfairness of decisions was not caused by the automated system per se. Indeed, the ADM system in Robodebt was not a complex machine-learning system that developed a ‘mind of its own’. Rather, Robodebt involved human-authored formulae[101] whose design incorporated the biases of the people who conceived it. The issue here is ultimately human, as automated systems rely on the humans who built them having respected legal and fairness requirements.[102]
This argument is further demonstrated by the Dutch childcare benefits scandal. The Dutch tax administration was fined for an unlawful and unfair system that it applied for the purposes of ending fraud in childcare benefits. The government used a self-learning algorithm to create ‘risk profiles’ and penalised families over mere suspicions of fraud based on these risk indicators.[103] This system was both illegal and discriminatory. Considering the system’s ‘risk indicators’, which included having dual nationality and low income, the European Parliament concluded that this ADM system possessed, at its very heart, entrenched bias[104] and ‘institutional racism’.[105] Like Robodebt, this ADM system carried the biases of the tax authority that created the criteria for ‘risk profiles’ and of the relevant ministers who at least partially initiated the tax authority’s unfair, ‘tough and illegal approach’.[106]
While these two scandals certainly demonstrate that human design and programming were at the root of unlawfulness and unfairness, it is important to acknowledge that ADM systems exacerbate the scale of errors due to the sheer number of decisions made. The reality is that ADM systems multiply errors far more than humans do because of the speed of automated decision-making.[107] It should also be acknowledged that ADM systems that apply unfair or unlawful algorithms at such a high scale have particularly negative impacts on vulnerable clients, especially considering that ADM systems have commonly been applied in social security contexts because of the rigidity of questions of law. As Carney argues, fairness issues are accentuated by ADM systems when the affected citizens are already disadvantaged.[108] In both Australia and the Netherlands, being wrongfully labelled as ‘fraudsters’ and liable to pay large amounts of debt had dire consequences for affected individuals as it pushed already vulnerable people into poverty and caused extreme mental health issues.[109]
IV PROPOSALS
This section of the essay does not seek to explore the litany of proposals aimed at improving ADM. Instead, it will consider a few pertinent Australian and international responses to the aforementioned transparency and accountability challenges.
A A Right to ‘Meaningful Information’ and Reasons: General Data Protection Regulation (‘GDPR’)[110] and Transparency Safeguards
Firstly, the European GDPR promotes several transparency safeguards that have been implemented to address issues relating to algorithms, reasons and technical literacy. Importantly, articles 13, 14 and 15[111] require ‘meaningful information’ about the relevant ADM logic to be provided to citizens subject to the decision. As argued by Malgieri, ‘meaningful information’ suggests more than providing an algorithm or mathematical functionality.[112] Rather, it is a qualified adjective signifying that the information should also be intelligible and comprehensible.[113] This connotation of comprehensibility, as also stipulated in the Working Party Guidelines,[114] suggests that this obligation directly addresses the technical literacy challenges which often make the inner workings of an ADM system opaque. The GDPR thus promotes algorithmic transparency in a meaningful way. However, the GDPR does not address the second form of transparency: reasons addressing why a system came to an outcome in a particular decision. As already demonstrated, both forms of transparency are required. The French government has attempted to incorporate both levels of transparency in its legislation. In implementing the GDPR, French legislation firstly enables the affected individual to access, on request, the algorithmic rules and ‘main characteristics’ of the ADM system’s implementation.[115] Secondly, French legislation allows for the ‘weightings of factors’ in a system to be disclosed,[116] thereby suggesting that an explanation of a particular decision must be provided.[117] This demonstrates a significant step in attempting to extract reasoning from ADM systems so that individuals can understand which factors were determinative in a decision that went against them.
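What a ‘weightings of factors’ disclosure might add can be sketched with an invented weighted-score model. The factor names, weights and threshold below are hypothetical and do not represent any actual French or Australian system; the point is that an itemised breakdown lets an affected person see which factor drove their particular outcome.

```python
# Hypothetical weighted-score ADM sketch with an individualised breakdown.
# All factors, weights and the threshold are invented for illustration.

WEIGHTS = {"fortnightly_income": -0.4, "dependants": 0.3, "weeks_employed": -0.1}
THRESHOLD = -150.0   # scores below this are refused (hypothetical)

def score_with_breakdown(case: dict) -> tuple:
    """Return the total score and the per-factor contributions."""
    contributions = {f: WEIGHTS[f] * case[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

case = {"fortnightly_income": 800, "dependants": 2, "weeks_employed": 40}
total, breakdown = score_with_breakdown(case)
decision = "granted" if total >= THRESHOLD else "refused"
```

Here the breakdown shows that `fortnightly_income` contributes roughly -320 of a roughly -323 total, so the applicant can see at a glance which factor was determinative in the refusal. This per-decision itemisation is the kind of explanation the French ‘weightings of factors’ provisions appear to contemplate, and it is exactly what a bare disclosure of the model would not supply.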
B GDPR Accountability Safeguards: A ‘Human in the Loop’
Furthermore, the GDPR also enables citizens to obtain human intervention for solely automated decisions through article 22(3).[118] Genuine human intervention, or a ‘human in the loop’ who ‘checks’ the decision before it is made, may in itself impose a form of accountability and also create the ‘mental process’ required for a decision to be a decision for the purposes of judicial review. In this way, the missing ‘mental process’ in Pintarich might have been supplied by such a provision mandating human intervention.[119] While there was a human involved in Pintarich, namely Mr Celantano, he was not able to actually ‘intervene’ or review the contents of the letter before dispatch. Furthermore, Carney warns that if human intervention is implemented by legislation, the human decision-maker needs to be a ‘genuinely independent sceptic’ of the automated decision, and not a mere ‘rubber stamper’.[120] Finck, writing in a European context, similarly warns that human intervention must be genuine to be a real form of accountability, since human decision-makers can suffer from ‘automation bias’, which makes them inclined to believe that machine-made decisions are superior to human decisions.[121]
C Further European Developments: Safeguards for ADM systems Used in ‘High Risk’ Settings
Moreover, the European Union recently finalised the Artificial Intelligence Act (‘AI Act’) which imposes additional safeguards for ADM systems based on the Act’s nuanced categorisation of risk.[122] The Act considers an ADM system to be of ‘high-risk’ if it is applied in a setting where the system poses “a significant risk of harm to the health, safety or fundamental rights of persons”.[123] As outlined in Annex III of the Act, such settings include government provision of essential public services and benefits. If a system is considered ‘high-risk’, it must comply with a broad range of requirements related to transparency[124] and human oversight.[125]
While article 22(3) of the GDPR introduces a safeguard in response to the risk posed by solely automated systems, the AI Act introduces safeguards where ADM systems are being used in a setting that is considered ‘high-risk’ because of the impacts of the decision on access to essential services. This nuanced approach is insightful as it recognises the serious consequences that poor ADM systems can have on the lives of individuals who are impacted by these decisions. As aforementioned, these grave consequences were acutely demonstrated by the Robodebt scandal where the unlawful system used in the high-risk setting of social services meant that many individuals were pushed into poverty. Importantly, this risk-focused approach has been endorsed by the Australian government in their recently published National Framework for the assurance of artificial intelligence in government.[126]
D Reforms to Current FOI Exemptions
Lastly, the obstacle posed by the FOI Act’s trade secret exemption[127] should not be underestimated. Despite the advantages of the French approach to ADM, the legislative safeguards are not required where disclosure of the ADM system’s functioning would infringe trade secrets.[128] Since this exemption also exists in Australian law, it is somewhat surprising that this obstacle to transparency has received relatively little attention in recent Australian law reform discussions. In 1996, the Industry Commission argued that the ss 45 and 47 exemptions should involve public interest considerations and that the non-disclosure of government information on intellectual property grounds should only be allowed in “some cases”.[129] Since this report was released, relatively little law reform consideration has been given to this issue, especially with respect to disclosure of algorithms. However, Ray et al have more recently argued that government reliance on the ss 45 and 47 exemptions should be restricted through a public interest test which would require the Minister refusing the FOI request to justify the need for secrecy against the public interest in releasing the algorithm.[130] There are already several exemptions that are labelled as ‘conditional’ because they are subject to this public interest test.[131] While more research into reforms is required, moving ss 45 and 47 into the ‘conditional exemptions’ category may be a good step towards restricting reliance on these exemptions to deny disclosure.
V CONCLUSION
ADM undoubtedly offers great benefits as it minimises delays in government decision-making. However, efficiency is just one administrative law value. If a form of decision-making is efficient but severely compromises values like transparency and accountability, it cannot be considered genuinely compatible with our administrative law system. This essay has argued that transparency and accountability issues directly related to ADM should be our main concern because they are central to our ability to scrutinise government decisions and test their legality. Transparency of ADM is compromised by the typical secrecy of ADM system algorithms as well as the challenges of obtaining adequate and comprehensible reasons for particular decisions. Additionally, the judicial reluctance to perceive certain automated decisions as ‘decisions’ seriously obstructs accountability in the form of judicial review. While ADM systems have undoubtedly been implicated in fairness issues, these issues ultimately derive from flawed human design and not the ADM system per se. It is clear that ADM will not be abandoned in the future. However, to make ADM more compatible with our administrative system, transparency and accountability reforms which enable proper scrutiny of government decision-making must be considered.
[1] Final year LLB student at UNSW. I would like to thank Linda Pearson for her feedback on this article. All opinions and errors are solely attributable to the author.
[2] Monika Zalnieriute, Lisa Burton Crawford, Janina Boughey, Lyria Bennett Moses & Sarah Logan, ‘From Rule of Law to Statute Drafting: Legal Issues for Algorithms in Government Decision-Making’ in Woodrow Barfield (ed), Cambridge Handbook on the Law of Algorithms (Cambridge University Press, 2019) 251, 251.
[3] Janina Boughey, ‘Outsourcing Automation: Locking the Black Box inside a Safe’ in Janina Boughey and Kate Miller (eds), The Automated State: Implications, Challenges and Opportunities for Public Law (Federation Press, 2021) 136, 136.
[4] Zalnieriute, Crawford, Boughey, Moses, & Logan (n 2).
[5] Simon Elvery, ‘How Algorithms Make Important Government Decisions - and How That Affects You’ (Web Page, 21 July 2017) <https://www.abc.net.au/news/2017-07-21/algorithms-can-make-decisions-on-behalf-of-federal-ministers/8704858>.
[6] Toby Murray, Marc Cheong & Jeannie Paterson, ‘The Flawed Algorithm at the Heart of Robodebt’, The University of Melbourne: Pursuit (10 July 2023) <https://pursuit.unimelb.edu.au/articles/the-flawed-algorithm-at-the-heart-of-robodebt>.
[7] Information and Privacy Commission New South Wales, ‘Fact Sheet - Automated decision-making, digital government and preserving information access rights - for agencies’ (Web Page, August 2022) <https://www.ipc.nsw.gov.au/fact-sheet-automated-decision-making-digital-government-and-preserving-information-access-rights-agencies#_ftn5>.
[8] Monika Zalnieriute and Felicity Bell, ‘Technology and Judicial Role’ in Gabrielle Appleby and Andrew Lynch (eds), The Judge, the Judiciary and the Court: Individual, Collegial and Institutional Judicial Dynamics in Australia (Cambridge University Press, forthcoming, 2020) 1, 6.
[9] Ibid.
[10] Terry Carney, ‘Automation in Social Security: Implications for Merits Review?’ (2020) The University of Sydney Law School Legal Studies Research Paper Series No. 20/16 1, 2.
[11] Information and Privacy Commission New South Wales (n 7).
[12] Andrew Ray, ‘Implications of the Future Use of Machine Learning in Complex-Decision Making in Australia’ (2020) 1(1) Australian National University Journal of Law and Technology 4, 6.
[13] Ibid.
[14] Matthew Groves & Janina Boughey, ‘Part 1: Administrative Law in the Australian Environment’ in Modern Administrative Law in Australia: Concepts and Context’ (Cambridge University Press, 2014) 1, 5.
[15] John McMillan, ‘Ten challenges for administrative justice’ [2010] AIAdminLawF 5; (2010) 61 AIAL Forum 23, 23.
[16] ARC submission to the Senate Legal and Constitutional Legislation Committee 1996, paragraph 15.
[17] Robin Creyke, ‘Administrative Justice - Towards Integrity in Government’ [2007] MelbULawRw 30; (2007) 31(3) Melbourne University Law Review 705, 732.
[18] Quin v Commonwealth (1990) 170 CLR 1, 35 (Brennan J).
[19] Ibid.
[20] Creyke (n 17) 708.
[21] Administrative Appeals Tribunal Act 1975 (Cth) (‘AAT Act’) s 2A(b).
[22] Lyria Bennett Moses, Janina Boughey and Lisa Burton Crawford, ‘Laws for Machines and Machine-made Laws’ in J Boughey and K Miller (eds), The Automated State: Implications, Challenges and Opportunities for Public Law (Federation Press, 2021) 232.
[23] The Hon Justice Melissa Perry, ‘iDecide: the Legal Implications of Automated Decision-making’ (Paper presented at Cambridge Centre for Public Law Conference 2014, University of Cambridge, 15-17 September 2014), 1.
[24] Terry Carney, ‘Artificial Intelligence in Welfare: Striking the Vulnerability Balance?’ (2020) 46(2) Monash University Law Review 23, 49.
[25] Bernard McCabe, ‘Automated decision-making in (good) government’ [2020] AIAdminLawF 24; (2020) 100 AIAL 106, 119.
[26] Ray (n 12) 7.
[27] Lilian Edwards & Michael Veale, ‘Slave to the Algorithm? Why a ‘Right to an Explanation’ is probably not the remedy you are looking for’ (2017) 16 Duke Law & Technology Review 18, 41.
[28] Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3 Big Data & Society 1, 1.
[29] Cary Coglianese & David Lehr, ‘Transparency and Algorithmic Governance’ (2019) 71(1) Administrative Law Review 1, 21.
[30] Information Commissioner’s Office (UK) ‘What is automated individual decision-making and profiling’ (Web Page) <https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/individual-rights/automated-decision-making-and-profiling/what-is-automated-individual-decision-making-and-profiling/>.
[31] Coglianese & Lehr (n 29).
[32] Freedom of Information Act 1982 (Cth) (‘FOI Act’).
[33] Government Information (Public Access) Act 2009 (NSW) (‘GIPA Act’).
[34] FOI Act (n 32) s 11A(3).
[35] Katie Miller, ‘The Application of Administrative Law Principles to Technology-Assisted Decision-Making’ [2016] AIAdminLawF 26; (2016) 86 AIAL 20, 28.
[36] Boughey (n 3) 143.
[37] FOI Act (n 32) s 47.
[38] Ibid s 45.
[39] Cordover and Australian Electoral Commission (Freedom of Information) [2015] AATA 956 (11 December 2015).
[40] Andrew Ray, Bridie Adams & Dilan Thampapillai, ‘Access to algorithms post-Robodebt: Do Freedom of Information laws extend to automated systems?’ (2022) 47(1) Alternative Law Journal 10, 15.
[41] Nicholas Seddon, Government Contracts: Federal State and Local (Federation Press, 6th ed, 2018) 491-508.
[42] FOI Act (n 32) s 6C(2).
[43] Public Interest Advocacy Centre, ‘Putting public interest at the heart of FOI: Submission in response to the Commonwealth Government’s exposure draft of the Freedom of Information Amendment (Reform) Bill 2009 and the Information Commissioner Bill 2009’ (19 May 2009) <https://www.piac.asn.au/wp-content/uploads/09.05.19-PIAC-FedFOISub.pdf> 18.
[44] FOI Act (n 32) s 24A(2)(c).
[45] GIPA Act (n 33).
[46] Ibid s 121(2)(c).
[47] Information and Privacy Commission New South Wales, ‘Case Summary on Automated decision making and access to information under the GIPA Act’ (Web Page) <https://www.ipc.nsw.gov.au/information-access/gipa-case-studies>.
[48] Nicholas Seddon, ‘The Interaction of Contract and Executive Power’ [2003] FedLawRw 21; (2003) 31(3) Federal Law Review 541.
[49] Anna Huggins, ‘Addressing Disconnection: Automated Decision-Making, Administrative Law and Regulatory Reform’ (2021) 44(3) UNSW Law Journal 1048, 1066.
[50] ARC, Decision Making: Evidence, Facts and Findings (Best-Practice Guide No 3, 2007) 8.
[51] Public Service Board of NSW v Osmond (1986) 159 CLR 656.
[52] AAT Act (n 21) s 28.
[53] Administrative Decisions (Judicial Review) Act 1977 (Cth) (‘ADJR Act’) s 13.
[54] Coglianese & Lehr (n 29) 17.
[55] Ibid 14.
[56] Huggins (n 49) 1061.
[57] Leighton McDonald, ‘Reasons, Reasonableness and Intelligible Justification in Judicial Review’ [2015] SydLawRw 22; (2015) 37(4) Sydney Law Review 467, 480; Kocak v Wingfoot Australia Partners Pty Ltd [2012] VSCA 259; (2012) 35 VR 324, 343 [55].
[58] Our Town FM Pty Ltd v Australian Broadcasting Tribunal [1987] FCA 301; (1987) 16 FCR 465, 484.
[59] Lyria Bennett Moses & Anna Collyer, ‘Accountability in the Age of Artificial Intelligence: A Right to Reasons’ (2020) 94 Australian Law Journal 829, 830.
[60] Coglianese & Lehr (n 29) 16.
[61] Bennett Moses & Collyer (n 59).
[62] Minister for Immigration and Citizenship v Li [2012] HCA 61; (2013) 249 CLR 332, 367 [76] (Hayne, Kiefel and Bell JJ).
[63] Coglianese & Lehr (n 60).
[64] Bennett Moses & Collyer (n 59) 831.
[65] Comcare Australia v Lees [1997] FCA 1415; (1997) 151 ALR 647, 656.
[66] Tarek Besold & Sara Uckelman, ‘The What, the Why, and the How of Artificial Explanations in Automated Decision-Making’ (2018) Cornell University 1, 13.
[67] Burrell (n 28).
[68] Monika Zalnieriute, Lyria Bennett Moses & George Williams, ‘The Rule of Law and Automation of Government Decision-Making’ (2019) 82(3) Modern Law Review 1-26, 15.
[69] Drake v Minister for Immigration and Ethnic Affairs (1979) 2 ALD 60, 68 (Bowen CJ and Deane J).
[70] Carney (n 24) 28.
[71] Carney (n 10) 5.
[72] Pintarich v Federal Commissioner of Taxation [2018] FCAFC 79 (‘Pintarich’).
[73] The Hon Justice Melissa Perry, ‘iDecide: Digital Pathways to Decision’ (Speech, CPD Immigration Law Conference Canberra, 21-23 March 2019), 1.
[74] Anna Huggins, ‘Automated Processes and Administrative Law: The Case of Pintarich’, AUSPUBLAW (14 November 2018) <https://www.auspublaw.org/blog/2018/11/the-case-of-pintarich>.
[75] Robin Woellner, ‘It is a bad look’ (2020) 18(2) eJournal of Tax Research 508, 512.
[76] Pintarich (n 72) [120].
[77] Taxation Administration Act 1953 (Cth).
[78] Huggins (n 74).
[79] Semuningus v Minister for Immigration and Multicultural Affairs [1999] FCA 422.
[80] Huggins (n 49) 1064.
[81] Pintarich (n 72) [46]–[47], [49] (Kerr J).
[82] Australian Broadcasting Tribunal v Bond (1990) 170 CLR 321, 336–337 in Yee-Fui Ng & Maria O’Sullivan, ‘Deliberation and Automation - When is a Decision a “Decision”?’ (2019) 26 AJ Admin L 21, 32.
[83] Zalnieriute, Burton Crawford, Boughey, Bennett Moses & Logan (n 2) 270.
[84] Yee-Fui Ng & Maria O’Sullivan, ‘Deliberation and Automation: When is a Decision a “Decision”?’ (2019) 26 AJ Admin L 21, 31.
[85] Ibid.
[86] Kalmin Datt & Robin Woellner, ‘The Pintarich Saga: Technical and Ethical Issues’ (2021) 27(6) James Cook University Law Review 87, 87.
[87] Huggins (n 74).
[88] Royal Commission into the Robodebt Scheme (Final Report, July 2023) vol 1, 479.
[89] Ibid Appendix 9.
[90] Amato v Commonwealth (Federal Court of Australia, VID611/2019, 27 November 2019) 6 [8.1] - [8.2].
[91] Terry Carney, ‘Robo-Debt Illegality: A Failure of Rule of Law Protections?’, Australian Public Law (Web Page, 30 April 2018) <https://collie-dinosaur-8jag.squarespace.com/blog/2018/04/robo-debt-illegality>.
[92] Social Security Act 1991 (Cth).
[93] Ibid s 1222A(a).
[94] McDonald v Director-General of Social Security [1984] FCA 59.
[95] Social Security Act (n 92) s 1223.
[96] Royal Commission into the Robodebt Scheme (n 88) xxvi.
[97] Ibid 28.
[98] Murray, Cheong, & Paterson (n 6).
[99] Royal Commission into the Robodebt Scheme (n 88) 330-1.
[100] Huggins (n 49) 1065.
[101] Zalnieriute, Burton Crawford, Boughey, Bennett Moses & Logan (n 2) 254.
[102] Bennett Moses, Boughey & Burton Crawford (n 22) 232.
[103] Melissa Heikkila, ‘Dutch scandal serves as a warning for Europe over risks of using algorithms’, Politico (Web Page, 29 March 2022) <https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/>.
[104] European Parliament, ‘The Dutch childcare benefit scandal, institutional racism and algorithms’ (Web Page, 26 February 2022) <https://www.europarl.europa.eu/doceo/document/O-9-2022-000028_EN.html>.
[105] Ibid.
[106] Dutch Parliamentary Report, ‘Final report on childcare allowance research submitted’ (Web Page, 17 December 2020) <https://tweedekamer.nl/nieuws/kamernieuws/eindverslag-onderzoek-kinderopvangtoeslag-overhandigd>.
[107] Huggins (n 49) 1052.
[108] Terry Carney, ‘The New Digital Future for Welfare: Debts Without Legal Proofs or Moral Authority’ (2018) UNSW Law Journal Forum 1, 12.
[109] Heikkila (n 103).
[110] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1 (‘GDPR’).
[111] Ibid arts 13(2)(f), 14(2)(g), 15(1)(h).
[112] Gianclaudio Malgieri & Giovanni Comande, ‘Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation’ (2017) 7(3) International Data Privacy Law 1, 22-3.
[113] Ibid.
[114] Article 29 Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 [2018] WP251, 27 <https://ec.europa.eu/newsroom/article29/items/612053>.
[115] Loi pour une République numérique (Loi no. 2016-1321) (France) JO, (7 October 2016) L.311-3-1.
[116] Décret n° 2017-330 du 14 mars 2017 relatif aux droits des personnes faisant l’objet de décisions individuelles prises sur le fondement d’un traitement algorithmique (France) JO, 14 March 2017, R311-3-1-1.
[117] Lilian Edwards & Michael Veale, ‘Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”?’ (2018) 16(3) IEEE Security and Privacy 46, 50.
[118] GDPR (n 110).
[119] Huggins (n 49) 1075-6.
[120] Carney (n 24) 25, 30.
[121] Michele Finck, ‘Automated Decision-Making and Administrative Law’ (2019) Max Planck Institute for Innovation and Competition Research Paper No. 19-10, 19.
[122] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) [2024] OJ L 2024/1689, art 6.
[123] Ibid.
[124] Ibid art 13.
[125] Ibid art 14.
[126] Australian Government, ‘National framework for the assurance of artificial intelligence in government’ (21 June 2024).
[127] FOI Act (n 32) s 45.
[128] Gianclaudio Malgieri, ‘Automated decision-making in the EU Member States: The right to explanation and other “suitable safeguards” in the national legislations’ (2019) 35 Computer Law & Security Review 1, 25.
[129] Industry Commission, Competitive Tendering and Contracting by Public Sector Agencies Report No. 4 (24 January 1996) 6.
[130] Ray, Adams & Thampapillai (n 40) 16.
[131] FOI Act (n 32) ss 47B, 47C, 47D, 47E, 47F, 47G, 47H, 47J.
URL: http://www.austlii.edu.au/au/journals/UNSWLawJlStuS/2024/24.html