Law, Technology and Humans
Digital Governance and Neoliberalism: The Evolution of Machine Learning in Australian Public Policy
Harry Jobberns*
Australia
Michael Guihot**
Queensland University of Technology, Australia
Abstract
Keywords: Automated decision-making; neoliberalism; legislation; review.
1. Introduction
Governments have always been record-keepers and derive power from the information they hold.[1] Since the introduction of computers for record-keeping in the 1970s, the amount of data collected by governments on, about or affecting their citizens has grown, and in the last 20 years this growth has been exponential.[2] Since at least the 1980s, statistical interpretations of data trajectories have informed decision-making processes, replacing or assisting human decision-makers.[3] Galloway notes that ‘big data exponentially expands the possibilities or the scope of government operations’,[4] pushing them beyond the confines of existing legal frameworks. In an era of Big Data,[5] governments now maintain extensive repositories of information, encompassing citizen details such as names, ages, addresses, phone numbers, tax records, social security data, health records, travel histories, marital status and even photographic identification. Government departments and agencies now employ computers to make critical decisions on social security entitlements,[6] migration,[7] customs[8] and taxation[9] matters.[10] The advent of computerised record-keeping has not only increased the volume and diversity of data held but has also transformed the ways in which this data is accessed and utilised. In this landscape, the neoliberal approach to Australian governance has exerted a defining influence, shaping policies and initiatives aimed at streamlining government operations and reducing public expenditure.
Neoliberalism, characterised by its emphasis on privatisation, deregulation and market-driven policies, has necessitated a fundamental re-evaluation of how government services are delivered. At the heart of this lies the pursuit of cost savings – a cornerstone of neoliberal philosophy that seeks to enhance the efficiency and effectiveness of government services while minimising financial outlay. This quest for cost savings has precipitated a significant transformation in the way government institutions operate, with a particular focus on technology integration. Successive Australian governments, leveraging artificial intelligence, have harnessed their vast stores of Big Data to expand their decision-making capabilities. Given the impact of the decisions these systems make, the increasing use of machine learning by government departments and agencies should raise concerns, or at least give some pause for thought.
This article highlights the impact of neoliberalism on the adoption of technology in public administration. Part 2 lays out the blunt legislative framework, spread across multiple pieces of legislation, that permits computers to make decisions. Part 3 highlights the practical administrative law implications – for transparency, lawfulness and fairness – of applying these legislative provisions to computer-generated decisions, and discusses Robodebt as a case in point. Part 4 then discusses the findings from a review of parliamentary debates and explanatory memoranda related to legislation authorising computer-based decision-making. It reveals the minimal level of discussion that accompanies such significant legislative changes and critiques the lack of debate and consideration of the consequences of such laws. Part 5 sets out some case law on the legitimacy of automated decision-making using machines. Part 6 outlines some possible strategies to ensure the responsible use of AI in government decision-making that should form part of discussions at every level, including some of the recommendations made in the Robodebt Royal Commission. Part 7 concludes the article. The aim is to contribute to the discourse on the role of technology in government, advocating for a cautious approach to the adoption of machine learning in decision-making processes.[11]
2. The Government Authorises Computers to Make Decisions
Our research showed that the legislative provisions that authorise computers to make decisions in government agencies are broad and, on their face, unfettered. We identified 35 pieces of legislation that allow the use of computers to make decisions for the decision-maker.[12] These powers apply in a wide range of policy areas including social security,[13] intellectual property,[14] citizenship[15] and migration,[16] health[17] and customs.[18] The vast majority of these provisions are simply drafted and provide for the broad power of decision-making by computers. They take the form of a single section with two parts. The first part of each section authorises government departments to use computers to make decisions on behalf of the ultimate decision-maker. The second part of the section then deems the decision-maker to have made the decision that the computer has made.
These provisions are often legislated in blocks. They are included as part of omnibus Bills that contain a raft of other amending legislation. As such, they often escape full consideration and discussion in parliament. For example, the Family and Community Services and Veterans’ Affairs Legislation Amendment (Debt Recovery) Bill 2001 was an omnibus Bill that included various amendments to several different pieces of legislation. A small part of that omnibus Bill inserted a new section 6A into the Social Security (Administration) Act 1999 (Cth) to allow computers to be used in recovering overpayment debts.[19] The same Bill inserted section 223 into the A New Tax System (Family Assistance) (Administration) Act 1999 (Cth), again to make it easier to recover overpayments of family assistance. These sections not only make it more efficient for computers to recover debts; they are also so broadly drafted that they give the relevant agency sweeping power to have computers make decisions on its behalf. They apply to all decisions – discretionary and non-discretionary, ultimate or intermediate and so on – and once the computer makes the decision, the ultimate decision-maker is deemed to have made it. Section 6A of the Social Security (Administration) Act 1999 (Cth) is as follows:
6A Secretary may arrange for use of computer programs to make decisions
(1) The Secretary may arrange for the use, under the Secretary’s control, of computer programs for any purposes for which the Secretary may make decisions under the social security law.
(2) A decision made by the operation of a computer program under an arrangement made under subsection (1) is taken to be a decision made by the Secretary.
A new section 223 of the A New Tax System (Family Assistance) (Administration) Act 1999, inserted under the same omnibus Bill, is in essentially the same form:
223 Secretary may arrange for use of computer programs to make decisions
(1) The Secretary may arrange for the use, under the Secretary’s control, of computer programs for any purposes for which the Secretary or any other officer may make decisions under the family assistance law.
Note: The definition of decision in subsection 3(1) covers the doing of any act or thing. This means, for example, that the doing of things under subsection 162(1) or (2) are decisions for the purposes of this section.
(2) A decision made by the operation of a computer program under an arrangement made under subsection (1) is taken to be a decision made by the Secretary.
Only two years later, in 2003, the Superannuation (Government Co-contribution for Low Income Earners) Act 2003 was amended to include an almost identical section allowing computers to make decisions in relation to superannuation. Six years after that, in 2009, the National Consumer Credit Protection Act 2009 was enacted and included section 242, which authorises the use of computers to make decisions in terms identical to those previously listed.
In 2012, in another omnibus Bill, new section 12A was inserted in the Child Support (Assessment) Act 1989:
12A Use of computer programs to make decisions
(1) The Secretary of the Department of which the Registrar is an employee may arrange for the use, under the Registrar’s control, of computer programs for any purposes for which the Registrar may make decisions under this Act.
(2) A decision made by the operation of a computer program under an arrangement made under subsection (1) is taken to be a decision made by the Registrar.
and new section 4A was inserted in the Child Support (Registration and Collection) Act 1988:
4A Use of computer programs to make decisions
(1) The Secretary of the Department of which the Registrar is an employee may arrange for the use, under the Registrar’s control, of computer programs for any purposes for which the Registrar may make decisions under this Act.
(2) A decision made by the operation of a computer program under an arrangement made under subsection (1) is taken to be a decision made by the Registrar.
The apparent effect of these sections is unsettling, but it is consistent with a neoliberal approach to welfare payments. What is equally concerning is that our research, set out in Part 4 below, shows that the process by which the legislation has been enacted to authorise these changes often takes place without adequate (or often any) consideration or even discussion about the possible adverse impacts of automated decision-making. First, though, it is important to consider what impacts computer decision-making might have.
3. What are the Problems with Such a Process?
There are growing concerns about the use of, and outcomes achieved by, some automated decision-making processes. These concerns centre on the lack of transparency of these systems and the potential for errors and biases to which they are susceptible.[20] The complex applications that use decision-making algorithms can be ‘black boxes’;[21] as such, the decisions they make are often opaque and cannot be examined or easily explained. Further, the data used to train the algorithms is sometimes itself flawed or incomplete, or may even reflect existing biases.[22] The issues surrounding the use of automated decision-making by government in Australia were most recently outlined in the Commonwealth Ombudsman’s Automated Decision-making Better Practice Guide.[23] However, these issues have been apparent to successive governments for around 20 years. The 2020 Better Practice Guide replaces a 2007 Guide[24] that was based on a 2004 report by the now defunct Administrative Review Council to the Attorney-General on Automated Assistance in Administrative Decision-Making (ARC Report).[25] The ARC Report and the 2007 Better Practice Guidelines concentrated on expert systems. Even in 2004, the ARC recognised the existence of neural networks and the possibility that they might be used in decision-making. The ARC Report recommended that computer systems should only be able to make final decisions if the decision involved no element of discretion.[26] In 2004, the ARC recognised that even expert systems ‘do not easily provide reasons for their decisions, which means they are not generally suitable for administrative decision making’.[27] The 2020 Better Practice Guide is a timely replacement for the 2007 Guide and recognises that, in the 16 years since the ARC Report was issued, there have been exponential advances in the capability of computers and the complexity of computer decision-making systems, which are no longer limited to expert systems.
Over the past several years, many other national and international bodies have prepared reports highlighting concerns about the use of AI or machine learning, including in automated decision-making.[28] Australian examples include the 2019 Office of the Victorian Information Commissioner’s Closer to the Machine report,[29] the Australian Human Rights Commission’s Technical Paper, ‘Using Artificial Intelligence to Make Decisions: Addressing the Problem of Algorithmic Bias’, issued in 2020,[30] and the Office of the Australian Information Commissioner’s Guide to Data Analytics and Australian Privacy Principles, issued in 2018.[31] Each of these reports discusses real concerns about AI or machine learning systems making decisions that affect humans. Several themes recur across these reports, centring on fairness and equity, beneficence, sustainability, transparency and accountability, robustness and resilience, and privacy and data protection.[32]
Machine learning systems that use neural networks to process data are often impenetrable to attempts at review. Given the state of knowledge about the problems with computer decision-making, each time the government is asked to legislatively authorise their use, there should be some reasoned consideration and discussion of the possible effects of computer decision-making in individual government agencies. For example, decisions made on immigration issues, social security benefits or debt, or tax liabilities will arguably have a more fundamental human impact than decisions about the registration of business names. Yet it is on these very areas that a neoliberal ideology preys. Computer decision-making using machine learning, especially in these areas, poses risks to fundamental principles of transparency, lawfulness and fairness in administrative decision-making.[33]
3.1 Transparency
Chief among concerns regarding automated administrative decision-making is the lack of transparency in the automated processes.[34] A challenge presented by automated systems, particularly those incorporating machine learning, is their notoriety as ‘black boxes’.[35] This means it is difficult to query and understand exactly how a result was reached or a decision was made due to the complex nature of the neural networks that are contained in the machine learning model, as well as assertions of proprietary interests by the third-party provider of the system based on trade secrets.[36] Some of the more advanced automated computer systems with machine learning often deliver results that ‘cannot support causal explanations of the kind that underlie the reasons traditionally offered to justify governmental action’.[37] Such systems are also described as opaque and operating outside the scope of meaningful scrutiny and accountability.[38]
Transparency in government decision-making is seen as central to strengthening public confidence in the integrity of government decisions.[39] Over the last 30 years, a trend towards open government has seen the principle of transparency emerge as one of the most prominent in the Australian administrative law landscape.[40] This is now reinforced by statutory requirements for administrative decision-makers to provide reasons,[41] and freedom of information laws.[42] In an administrative decision-making context, Danaher labels this the ‘threat of algocracy’, whereby ‘algorithm-based systems [limit] the opportunities for human participation in, and comprehension of, public decision-making’.[43] Algocracy refers to a situation where algorithmic tools, such as machine learning systems, render the conclusions drawn by the system ‘incapable of reduction to explanation in human language terms’.[44] This may become increasingly acute over time as human operators cede data processing functions to computers, losing the skills and understanding required to meaningfully check the computer’s recommendations.[45] Our capacity to review decisions made by government, including their agencies, is central to upholding the rule of law[46] and relies on our ability to examine and understand the systems used for making decisions. The opacity of machine learning systems using neural networks and complex algorithms poses a serious risk to transparent decision-making.
3.2 Lawfulness
Another key concern surrounds the legality of automated administrative decisions. Under the Administrative Decisions (Judicial Review) Act 1977, a person who is the subject of a decision may seek review of that decision if ‘the decision was not authorised by the enactment in pursuance of which it was purported to be made’[47] or the making of the decision was ‘an improper exercise of the power conferred by the enactment’.[48] Equally, if an automated system is being used to make part or all of that decision, it must be authorised.[49] The 2020 Better Practice Guide also notes the need for legislative authority for automated decisions made ‘without the use of human judgement at the point of decision’. It sets out that:
The authority for making such decisions will only be beyond doubt if specifically enabled by legislation. The construction of such an authorisation should nominate a position or title of a person with ultimate responsibility for the decision, such as the Secretary of the relevant department.[50]
This is indeed the nature of the sections we have studied and set out in Part 2 above, thus putting questions of lawfulness ‘beyond doubt’.
We entrust our government to make laws in accordance with its powers under the Constitution and for the better governance of its citizens. Parliament is the proper place to decide these issues. However, as Perry notes, authorisations such as those set out in Part 2 above leave legal uncertainties.[51] These include whether a computer can, in fact, make a decision. Can a computer program act independently from its programmer, or the relevant government agency, to meet the requirement of independence in delegated decision-making? Perry asks: if a computer cannot make a decision, who would be deemed to have been the decision-maker – the secretary, the computer or the programmer? On a strict reading of the sections discussed in Part 2, the ultimate decision-maker in the agency is deemed to have made the decision of the computer. Bateman argues that there is a disconnect between the types of clauses set out in Part 2 and public law doctrine. He maintains that ‘orthodox principles of statutory interpretation would require’ an interpretation that gives primacy to these computer-attribution provisions.[52]
3.3 Fairness
In an administrative law sense, fairness generally refers to principles of natural justice or procedural fairness.[53] Procedural fairness connotes a ‘flexible obligation to adopt fair procedures which are appropriate and adapted to the circumstances of the particular case’.[54] General principles afforded under procedural fairness include the right to argue one’s case (hearing rule),[55] the right to have decisions made based on probative evidence (evidence rule)[56] and the right to have decisions unaffected by bias (bias rule).[57] Computer decision-making using machine learning has the potential to compromise these fairness principles if considered thought is not given to system development and how that system is used.[58]
Bias in machine learning systems is another major concern raised in an administrative law context and in the many reports referred to above. Algorithms are often championed for their capacity to circumvent human biases by rationally analysing data. However, there are concerns that sexism, racism and other forms of discrimination are being built into the models used to make decisions.[59] These biases are difficult to detect in the models[60] and there is a risk that they can ‘become part of the logic of everyday algorithmic systems’.[61] For example, in United States law enforcement, machine learning systems that generate recidivism risk scores for defendants have given disproportionately higher risk scores to Black defendants, at almost double the rate of white defendants.[62] These failures appear to be due to issues with both content and oversight. Lehr and Ohm take a detailed look at eight stages of machine learning model development before the model is applied to the data, including issues in defining the problem, collecting the data, cleaning the data, reviewing summary statistics, partitioning the data for training, selecting the model and training the model.[63] It is in these stages that biases, inaccuracies and errors can be introduced into the system. For example, where the data used to train the model is the product of an already biased system, the model will share those biases,[64] as the sketch below illustrates. Once introduced, such biases are hard to eradicate and, more importantly, hard to argue against.
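To make the mechanism concrete, the following minimal sketch (in Python, using entirely synthetic data and invented field names – it is an illustration of the general point, not any system discussed in this article) shows how a model that simply learns from historically skewed decisions reproduces that skew:

# A minimal sketch, assuming synthetic data: any model that minimises error
# against skewed historical labels will reproduce the skew in its outputs.
from collections import defaultdict

# Hypothetical past decisions: applicants of equal merit, skewed outcomes.
history = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": True}, {"group": "A", "approved": False},
    {"group": "B", "approved": True}, {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def train(records):
    # Learn per-group approval rates - a stand-in for any statistical model
    # fitted to historical labels.
    counts, approvals = defaultdict(int), defaultdict(int)
    for r in records:
        counts[r["group"]] += 1
        approvals[r["group"]] += int(r["approved"])
    return {g: approvals[g] / counts[g] for g in counts}

def decide(model, group, threshold=0.5):
    # Approve whenever the learned approval rate clears the threshold.
    return model[group] >= threshold

model = train(history)
print(model)               # {'A': 0.75, 'B': 0.25}
print(decide(model, "A"))  # True: group A applicants are approved
print(decide(model, "B"))  # False: group B inherits the historical skew

Nothing in the code discriminates deliberately; the disparity enters solely through the training data, which is precisely what makes it so difficult to detect and to argue against.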
Another bias arises when we use machines to make decisions. Machines are often assumed to be right.[65] It is difficult for the person affected by an erroneous decision to prove otherwise without knowledge of how each of the steps set out above was carried out. It is equally difficult for a human in the loop to reject a decision made by a machine without undertaking a thorough critical analysis and review of that decision.[66] It is unlikely that this will be done without a requirement to do so under legislation. Calo and Citron ask whether we have just ‘traded the possibility of human bias for the guarantee of systemic bias’.[67] This reinforces the need for vigilance in designing and implementing these systems in some of our most important areas of public decision-making.[68]
3.4 Case Example: Robodebt
A prominent example of problems involving automated decision-making in the pursuit of efficiency is the controversy surrounding Centrelink’s Online Compliance Initiative (OCI), commonly known as ‘Robodebt’. Although it did not involve one of the sections discussed in this article, it is a good example of what can go wrong when a government with a neoliberal bent uses technology to enforce its will on vulnerable citizens. The OCI system was introduced in July 2016 with the aim of automating the process of raising and recovering social security overpayment debts. It operated by data-matching a fortnightly average income figure, calculated from ATO information on a welfare recipient’s annual income, against the income reported to Centrelink by the recipient.[69] The initiative’s intention was to identify discrepancies in recipients’ income reporting more efficiently and to recover any overpayment.[70] The problem with the system was that the model used flawed statistical logic to make its decisions. The problem was exacerbated when that flawed decision was used as evidence of a reporting discrepancy. Averaging a person’s income is only likely to be accurate for someone with a consistent income across the year – a scenario that is particularly unlikely for welfare recipients, as the sketch below illustrates. Once a debt was raised, the onus was placed on the recipient to disprove the discrepancy.
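The following minimal sketch (in Python, with all payment rates, free areas and income figures hypothetical rather than drawn from the actual social security rules) illustrates the arithmetic behind the flaw. Entitlement is assessed fortnightly and cannot fall below zero, so smearing annual income evenly across 26 fortnights can manufacture a debt for a person whose every payment was correct when it was made:

# A minimal sketch of the averaging flaw; all figures are hypothetical.
FORTNIGHTS = 26
BASE_PAYMENT = 500.0   # illustrative full fortnightly payment
FREE_AREA = 150.0      # illustrative income disregarded each fortnight
TAPER = 0.5            # illustrative reduction per dollar above the free area

def entitlement(fortnightly_income):
    # Fortnightly entitlement under a simple taper, floored at zero.
    reduction = max(0.0, fortnightly_income - FREE_AREA) * TAPER
    return max(0.0, BASE_PAYMENT - reduction)

# Unemployed for half the year (full payment correctly made), then working
# at $2,000 per fortnight (entitlement is zero, so no payment made).
actual_income = [0.0] * 13 + [2000.0] * 13
correctly_paid = sum(entitlement(i) for i in actual_income)   # 6500.0

# The OCI view: annual income averaged evenly over every fortnight.
averaged_income = sum(actual_income) / FORTNIGHTS             # 1000.0
oci_entitlement = entitlement(averaged_income) * FORTNIGHTS   # 1950.0

phantom_debt = correctly_paid - oci_entitlement
print(f"Phantom debt raised: ${phantom_debt:,.2f}")           # $4,550.00

On these hypothetical figures, a recipient who was never overpaid is presented with a $4,550 ‘debt’, purely because the averaging assumption does not hold for intermittent earners.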
The Royal Commission ultimately found that the human-initiated instructions behind the system were unlawful. Prior to automation, the debt-recovery process would trigger a manual review of approximately 20,000 cases per year that would be investigated and, if the evidence supported it, acted upon.[71] After the OCI was implemented, an estimated 783,000 cases were raised in 2016–17.[72] While this certainly represented an increase in the number of cases able to be processed, there were real concerns about the procedural fairness implications, with the system placing an undue burden on some of our society’s most vulnerable members.[73] In what is perhaps a hallmark of neoliberal pushes for economic efficiency, this error affected the least powerful and most vulnerable citizens.[74] The response from some affected citizens was to ‘simply throw up their hands, and assume Centrelink knows that there really is a debt’.[75] Further, in many cases Centrelink was unable to provide sufficient evidence establishing the debt when challenged, revising down or wiping at least 20,000 debts.[76] In November 2020, the Australian government agreed to pay $1.2 billion to settle a class action brought in relation to the claimed debts.[77] The outcome of this program provides a clear example of the need for oversight as more government decisions become automated. A Royal Commission into the Robodebt scheme was established in 2022, and its findings and recommendations were published in a report in 2023 (Robodebt Report).[78] The recommendations of the Robodebt Report are discussed further in Part 6.
Given the concerns raised here in relation to the use of machine learning in computer decision-making, the government should pay close attention to its ramifications when authorising its use. However, our research shows that little to no consideration has been given to these issues before the provisions were legislated.
4. Parliamentary Discussion
In the 1990s, when governments began to approve the use of computers for decision-making in their operations, computers were comparatively basic. Since then, computers have evolved significantly, becoming much more sophisticated and powerful. As a result, the initial permission granted by the legislature to use computers in government decision-making now carries much more complicated consequences due to the advanced functionalities of modern computers. This raises the question of why successive Australian governments would legislate sweeping powers that authorise computers to make decisions on behalf of agency heads without considering the possible implications of such powers.
The process of government law-making in Australia follows a timeworn path. Bills are introduced into the House of Representatives and then read a second time. Members have some time both to seek public input and to study the Bill before debating and voting on it.[79] The second reading debate allows members to debate the Bill, and each clause can then be considered in detail; during this stage, members can again speak to the proposal clause by clause. Each discussion is recorded in the parliamentary Hansard. When interpreting a provision of a statute, the interpretation that best achieves its purpose is preferred.[80] The materials that can assist that interpretation include the text of the document, Royal Commission or Law Reform reports, parliamentary committee reports considered before the section was enacted, treaties, explanatory memoranda and second reading material.[81] In this process, ‘detailed debate is considered unnecessary for many bills which are supported by all parties or, in a technical or drafting sense, are very limited in scope’.[82] However, this article argues that, even though these amending Bills are slight and the changes seemingly innocuous, their reach and impact will be multiplied as government agencies increasingly shift to automated decisions based on machine learning systems processing citizen data.
We reviewed the explanatory memoranda and the Hansard of all second reading speeches for legislation that has authorised this use of computers by government departments and agencies since the process began in 2001.[83] In many cases, discussion and debate about the use of computers to make decisions were either non-existent or extremely limited. Our research identified two things: the breadth and unfettered nature of the automated decision-making powers provided for by legislation in Australia (as discussed in Part 2 above); and the limited extent to which parliament has debated or considered the risks associated with automated decisions when enacting these powers. We recognise that there are other avenues for parliamentary discussion on such issues, including various standing committees; however, these fell outside the scope of the research undertaken. Our research concentrated on discussions in the explanatory memoranda and second reading material; we did not discover any Royal Commission reports, treaties or parliamentary committee reports that could have been included. Therefore, for the purposes of this study, we focused on the materials that might best illuminate the intention of the provisions within the interpretative framework set by parliament.
We first searched for legislative provisions that authorised the use of computers to make government decisions. This was done using the advanced search function on the Federal Register of Legislation.[84] We searched the register for phrases including ‘use of computer programs’ and ‘computerised decisions’, as this was common wording identified from previous section examples. The search covered Commonwealth Acts in all portfolios, registered at any time and presently in force. After identifying legislative provisions that allowed for computerised decision-making, we identified the Bill introducing the section on the Parliament of Australia website.[85] We then accessed and reviewed the second reading speeches and explanatory memoranda to determine the level of parliamentary discussion surrounding the introduction of the relevant section. A score from 1 to 3 was given to indicate the level of debate on the provision, with 1 meaning no discussion, 2 meaning no discussion beyond a restatement or description of the section and 3 indicating that some substantive discussion occurred beyond mere description, as sketched below.
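Expressed as a minimal sketch (in Python, with invented function and field names; the coding described above was applied manually by the authors, not by software), the scheme reduces to:

def discussion_level(section_mentioned, substantive_debate):
    # 1 = no discussion; 2 = restatement or description of the section only;
    # 3 = some substantive discussion beyond mere description.
    if substantive_debate:
        return 3
    if section_mentioned:
        return 2
    return 1

# Illustrative applications of the scheme (not the study's actual data):
print(discussion_level(section_mentioned=False, substantive_debate=False))  # 1
print(discussion_level(section_mentioned=True, substantive_debate=False))   # 2
print(discussion_level(section_mentioned=True, substantive_debate=True))    # 3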
Our research shows the extent to which parliament has engaged with the risks and concerns surrounding computer-automated decisions in the customary process of our parliamentary system – through parliamentary debate – before these provisions were enacted. The results are set out in the Schedule to this article.
Discussion Level 1 (No Discussion)
The current method of legislative authorisation for computers to make decisions in government agencies began around 2001, when authorising sections were inserted into the Social Security (Administration) Act 1999 (Cth) and the A New Tax System (Family Assistance) (Administration) Act 1999 (Cth) in identical terms in one omnibus Bill, the Family and Community Services and Veterans’ Affairs Legislation Amendment (Debt Recovery) Bill 2001.[86] At that time, the government wanted to pursue perceived overpayments of social security and family assistance payments and sought authorisation to use computers to do this.[87] The amending legislation authorised the use of computers to make decisions. The explanatory memorandum for the Family and Community Services and Veterans’ Affairs Legislation Amendment (Debt Recovery) Bill 2001 noted that:
Many of the powers and functions provided for by the social security law can be effectively exercised by the operation of a computer program. Schedule 2 provides for decisions to be made by computer program where the decision is one which the Secretary could have made pursuant to the social security law.
... The amendments made by Schedule 2 mean that, where the Secretary could make a decision in accordance with the social security law and that decision is made as a result of the operation of a computer program, that decision is taken to be a decision made by the Secretary.[88]
Given the reach and power of these authorisations for computers to make decisions for government agencies, it might have been expected that these amendments allowing computers to make decisions on behalf of the Secretary of the Department would be circumscribed in some manner or would cause some debate in parliament. However, apart from the brief descriptions in the explanatory memorandum, there was no debate in the second reading speeches or in the Senate relating to this broad permissive amendment.
In 2003, the Superannuation (Government Co-contribution for Low Income Earners) Act 2003 was amended to include an almost identical section allowing computers to make decisions on behalf of the Commissioner and for any such decision to be deemed the decision of the Commissioner. The only reference in the explanatory memorandum to the amending Bill was that ‘Computer programs may be used to make these decisions, which are taken to be decisions of the Commissioner. A decision includes a decision not to make a determination under sections 13, 15, or 19.’[89] There was no discussion in the second reading speech or in the Senate.
Six years later, in 2009, the National Consumer Credit Protection Act 2009 was enacted, including section 242, which authorises the use of computers to make decisions in terms identical to those previously listed. The explanatory memorandum noted at paragraph 5.74 that ‘ASIC may arrange for the use of computer programs which are under their control for any purposes relating to making decisions under the Credit Bill [Part 5-5, Division 2, subsection 242(1)]. A decision made by such a computer program is taken to be a decision of ASIC.’ Again, there was no discussion in the second reading speech or in the Senate.
Nor was there any discussion in relation to section 305 of the Paid Parental Leave Act 2010, which was set out in identical terms.[90] There was no discussion in the second reading speech, although the explanatory memorandum claimed that the second part of the section ensured ‘that a computer-generated decision is subject to review by deeming it to be a decision by the Secretary’.[91] This does not represent the full effect of the deeming section and in some respects may be misleading. The 2020 Better Practice Guide sets out the need for legislative authority for automated decisions.[92]
In 2011, a decade after the amendments to the Social Security (Administration) Act 1999 (Cth) were inserted, the Business Names Registration Act 2011 (Cth) was enacted. Its section 66, in the now established form, gave ASIC the power to arrange for computers to make decisions for any purposes and, again, for the decision to be taken as a decision of ASIC. The explanatory memorandum noted that the section did not apply to the review of any decisions but otherwise confirmed the broad nature of the power under the section.[93] There was no discussion in the second reading speech or in the Senate.
Similarly minimal levels of discussion accompanied the amendments introducing section 7 of the Therapeutic Goods Act 1989, inserted by the Therapeutic Goods Amendment (2009 Measures No. 1) Bill 2009; section 87 of the Australian National Registry of Emissions Units Act 2011, introduced by the Clean Energy (Consequential Amendments) Bill 2011; the Carbon Credits (Carbon Farming Initiative) Act 2011 (original Act); and five other Acts between 2011 and 2016.
Discussion Level 2 (Low-level Discussion)
Our review shows that 13 of the Bills amending legislation to allow computers to make decisions were introduced as omnibus Bills that also included a raft of other legislative changes unrelated to automated decision-making. As an example, the Social Security and Other Legislation Amendment (2012 Budget and Other Measures) Bill 2012[94] was subtitled ‘A Bill for an Act to amend the Social Security Act 1991 and other legislation, and for related purposes’.[95] The main purpose of the Bill was to implement changes brought about by the 2012 Budget. This entailed amending multiple Acts, including the Social Security Act 1991, the Veterans’ Entitlements Act 1986, the A New Tax System (Family Assistance) Act 1999 and the Social Security (Administration) Act 1999. The purposes of the amendments in Schedules 1 to 6 were to amend what was ‘Excluded Income’ in the Social Security Act 1991,[96] to make ‘Adjustments to portability and other periods’,[97] to clarify ‘Age/study rules for children for family assistance payments’,[98] to confine the application of the ‘Family Tax Benefit and reasonable maintenance action’[99] and to amend the percentage of care rules for children under the A New Tax System (Family Assistance) Act 1999,[100] and to limit the low income supplement under the Clean Energy (Household Assistance Amendments) Act 2011.[101] Schedule 7 contained ‘Other Amendments’,[102] which included amendments to the A New Tax System (Family Assistance) Act 1999 and the A New Tax System (Family Assistance) (Administration) Act 1999. The amendments that inserted the contentious sections relevant to this article into both the Child Support (Assessment) Act 1989 and the Child Support (Registration and Collection) Act 1988 were set out at the very end of the Bill, on pages 38 and 39. This way of introducing important changes with such broad effect does not promote the level of discussion that we suggest is necessary.
The Explanatory Memorandum accompanying the Social Security and Other Legislation Amendment (2012 Budget and Other Measures) Bill 2012 explained in relation to the contentious sections that:
This provision is intended only to confirm the validity of fully automated decisions. It is not intended to apply to computer-assisted decisions where the Registrar, or a delegate, makes the final decision on the basis of information supplied by a departmental computer system. Computer-assisted decision-making in the child support context is consistent with general administrative law principles and does not require statutory authorisation.[103]
Even with this prompt in the Explanatory Memorandum, the following was the extent of the discussion during the second reading debate on the Bill:
Senator FIFIELD: This bill also makes some small amendments, including amending the child support legislation, to clarify the authority for the practice of automated decision-making using computer programs. I may well ask the minister at the next estimates if he can take us through a little more of what that involves.
Senator Kim Carr: Decision making.
Senator FIFIELD: Decision making; that is right. But there is a gap in my knowledge there that I am sure the minister will be able to fill in the next Senate estimates.[104]
Later in the same second reading speech discussion, Senator McKenzie contributed this:
Senator MCKENZIE: I come to schedule 7. It covers other amendments, including the authority for the practice of automated decision making using computer programs. Like Senator Fifield, it is with bated breath that I await estimates so we can pursue that topic. I would hate to interrupt the minister and his conversation on the other side of the chamber to see what he thinks about that, so we will wait for estimates.[105]
The Bill passed into legislation without amendment. There was no record of discussion in Senate estimates. Even though this was a very low level of discussion, we still allocated it to Level 2 because there was at least some discussion of the sections. This legislation passed under the then Labor government; other Bills have received the same treatment under Coalition governments. The discussion was framed as authorising ‘the practice’ of automated decision-making, intimating that it was legislative approval of current practices. Yet no member of parliament seemed to know what that practice was. Further, in the pieces of legislation for which some debate could be found, there was no discussion of the amendment inserting the automated decision-making section into the Act beyond a rereading of the relevant section.
Discussion Level 3 (Substantive Discussion)
The only (albeit limited) debate on some of the issues that can arise when computers make decisions occurred in relation to Bills introduced after Centrelink’s ‘Robodebt’ controversy.[106] In 2017 the government sought to introduce computer decision-making amendments to the Military Rehabilitation and Compensation Act 2004, the Safety, Rehabilitation and Compensation (Defence-related Claims) Act 1988 and the Veterans’ Entitlements Act 1986 through the Veterans’ Affairs Legislation Amendment (Digital Readiness and Other Measures) Bill 2017. There was some debate in the second reading speeches, over several sittings in March 2017, about the implications of using computers to make decisions in these sensitive areas; it stretched to nine pages of text, ranging from limited[107] to more detailed discussion.[108] Some Members raised concerns about avoiding the issues that arose in the Centrelink ‘Robodebt’ controversy. This included Amanda Rishworth (Kingston), who said:
Australian Defence Force Association were particularly concerned about the ability of the computer programs to handle the nuances of claims processes and discussed the recent issues with the Centrelink automation process. The department has addressed some of these concerns in its submission to the committee and emphasised that the only decisions which would be suitable for computerised decision-making are those that can be converted into an algorithm and generated based on information that is not open to interpretation.[109]
It is difficult to understand what is meant by decisions that can be ‘converted into an algorithm and generated’. This perhaps betrays a failure to completely understand the computer decision-making process.
Mr Hart (Bass) said:
In some cases it can be fairly said that automation of business processes within a department might have been used as an aid to decision-making, whilst not displacing the matrix required to be assessed as part of an administrative decision. It is perfectly appropriate, where automation of decision-making is to be elevated beyond an aid to decision-making, that there is legislative sanction for the use of computer programs to make decisions or exercise other functions.[110]
The administrative law issues with computers making decisions raise concerns about whether the decision is a final or operative decision, as discussed in Australian Broadcasting Tribunal v Bond.[111] The legislation draws no distinction between outputs that merely assist the decision-maker and decisions made wholly by the computer, and it would appear to authorise both kinds of decisions.
Presumably based on concerns raised after the Robodebt controversy, the supplementary explanatory memorandum explained some attempts at limiting the breadth of application of the sections. It noted that decisions related to whether an injury or disease was contributed to by a person’s employment would not be authorised under the legislation, and that ‘a computer program cannot be authorised to reject a claim for liability’.[112] The result of this debate was notable, as these three pieces of legislation were the only ones of the 35 reviewed that provided for some limitation on the use of computers for automated decision-making – that is, by excluding a limited list of decisions requiring discretionary judgement.
However, even amid this more considered debate, the discussion was littered with content that betrayed some members’ lack of understanding of the problems associated with machine learning processes in automated decision-making. For example, in his discussion about amendments to the National Health Act 1953, Senator Canavan outlined only the advantages of computer decision-making and did not canvass the problems associated with it, including those outlined in Part 3 above. In a textbook display of bias toward technological solutionism, Senator Canavan said that ‘Automated processing reduces errors, increases accountability and generates easily auditable transaction records. Users can be confident that decisions are uniform and fair.’[113] These were the same words used by Minister Ley (Minister for Sport and Minister for Health and Aged Care) the day before in the House of Representatives.[114] These types of misunderstandings about the power and infallibility of computers should be discussed more broadly, and the dangers of over-trusting and promoting technology as a solution debated, if governments are to come to some reasoned conclusions about computer decision-making.
In Australia, the bicameral parliamentary process for making legislation should ensure that there is detailed debate and oversight before laws are made. However, our research indicates a lack of review and debate in relation to allowing computers to make decisions in our government agencies. Our research found that 13 of the 35 provisions authorising the use of computers to make decisions were introduced through omnibus Bills that also included a raft of other, unrelated legislative changes. We do not argue that these amendments were included in large and complex omnibus Bills to purposefully obfuscate or avoid discussion of this important issue, but such a process inevitably has that limiting effect. Hildebrandt notes that it should be a ‘precondition’ to discuss these issues openly and fully in parliament before enacting this type of legislation.[115]
On one view, it is arguable that the 35 instances of legislation authorising computers to make decisions detailed in this article were legislated only to ensure that decisions made using a computer would survive review on a claim that the use of a computer was ultra vires. But the opportunity to review and debate these issues should have been used to ensure that such a system was appropriate for the particular decision-making process at issue and to which the legislation applied, while also allowing debate about the administrative law principles that might be affected.
We recognise that parliamentary debate is not the only government-level consideration of automated decision-making. Senate estimates committees review the potential cost of implementing government legislation.[116] However, our research could not locate any discussion of the relevant legislation in Senate estimates records either. As discussed, too, the 2020 Better Practice Guide offers some limited but non-enforceable guidance to administrative agencies implementing computer decision-making systems and processes. Despite these other avenues for political discussion, parliament’s obligation to properly consider and authorise automated decision-making in government agencies should not be reduced to a tick-the-box exercise for such an important change in our social development. This raises the question of why governments would give themselves such unfettered power to make decisions using computers despite the many obvious concerns raised over the last 20 years.
So what is the effect of this legislative imprimatur on computer decision-making in government agencies?
5. Legitimacy of Automated Decision-making
As discussed in Part 3, when authorised under one of these provisions, the relevant government agency can use computer systems to make final decisions, or the computer may be used to assist human decision-makers. One argument in response to claims of over-reach is that computer decisions will be acceptable if they are only used to assist the decision-maker – that is, there will always be a human in the loop; the computer processes the data and produces a decision, but the human is the ultimate decision-maker and will decide whether to agree with or ignore the computer output. There is a danger in this process of automation bias,[117] a tendency to trust the outcome of algorithmic decision-making aids.[118] Mosier and Skitka define automation bias as ‘the tendency to use automated cues as a heuristic replacement for vigilant information seeking and processing’.[119]
This means that even where errors are occurring, those using the decision-making tools will not promptly identify the issues and will be slow to undo the damage. Studies have shown this to be common across a variety of situations, including aviation,[120] medical diagnosis,[121] process control[122] and military operations.[123] While this bias has little effect where automated systems are functioning properly, where errors do occur, it has been shown to increase the detrimental effect of that error.[124] Pinder and Lloyd argue that there is a risk of an apprehension of bias if there has been a ‘significant amount of “pre-digestion” by a computer program – and especially where a specific recommendation has been offered and a decision-maker has accepted that recommendation’.[125] The effect of this can be extreme, with one study blaming computer error and automation bias for US forces inadvertently killing friendly air crews in the Iraq war.[126] The question, then, is how much the machine influences the human when they make their final decision. Uncertainty remains concerning whether ‘regulating by robot’ can be accommodated within existing legal norms.[127] There is some case law on the legitimacy of decisions made by computers under the sections discussed in this article. These cases reinforce the need for governments to take care when introducing sweeping authorisations without implementing some guardrails on their ultimate use.
In Re Swinburne v ASIC,[128] the Administrative Appeals Tribunal discussed the operation of section 66 of the Business Names Registration Act 2011 (Cth) – again, a section in the standard form set out in Part 2 above. The Deputy President of the Tribunal in Swinburne appeared to baulk at the breadth of these sections authorising computers to make decisions.[129] However, he conceded that the intention of the legislation was that ASIC had the power to decide ‘when and to what extent computer programs are to be used in making decisions under the Act’.[130] While the discretion to which the Tribunal member referred is a discretion to use computers to make decisions or not, there is no such discretion in the legislation once the decision is made to apply the computer to the act of making a decision. Once the decision is made, it is deemed to be the decision of the ultimate decision-maker in that agency. The Tribunal in Swinburne set aside the decision under review, but it did so on an analysis of the meaning of ‘nearly identical’ in the Act, rather than as a rejection of the ability of computers to make the decision.
Similarly, in B & L Whittaker Pty Ltd and ASIC and Anor,[131] the Administrative Appeals Tribunal considered a computer decision that a proposed business name, ‘Cairnscrete Pumping’, was not ‘nearly identical’ to an already registered business name, ‘Cairns Concrete Pumping’, for the purposes of the Business Names Registration Act 2011 (Cth).[132] The Tribunal found that ASIC decisions of this nature are ‘made by the application of a computer programme rather than by the intervention of human agency’.[133] Deputy President Hack noted that the relevant section permitted the use of computers to make decisions, but he was compelled by the legislation to affirm the decision even though he considered it ‘quite absurd’.[134] Justice Kerr has questioned the purpose of merits review if the human decision-maker is powerless to do anything about such an ‘absurd decision’.[135]
In 2013, in Kampf and Secretary, Department of Families, Housing, Community Services and Indigenous Affairs,[136] the Tribunal held that a decision made by a computer that the applicant was not an Australian resident at the time the decision was made was a valid decision pursuant to section 6A of the Social Security (Administration) Act 1999 (Cth).[137]
In Re Paoli and Secretary, Department of Social Services (Social services second review),[138] the applicant’s pension was automatically cancelled when she did not reply to computer-generated notices requiring relevant information about her eligibility for the pension. The Tribunal found that the applicant was given notice by electronic means of the notices and that her failure to respond triggered the automatic cancellation of her pension. However, the Tribunal found that special circumstances existed that allowed it to use its discretion to hold that the automatic cancellation provision under the Act did not apply to the applicant. This is the administrative law working as it should to ameliorate harsh effects of the administrative state. However, how many other automatic decisions like this one go unchallenged because those affected by the decisions are the most vulnerable, least powerful and most accepting of the state and its technologies?
Pintarich v Deputy Commissioner of Taxation[139] was not a decision based on one of the sections set out in this article; however, it demonstrates the complexity involved in making decisions with the involvement of computers. In this case, Mr Pintarich received a computer-generated letter from the Deputy Commissioner of Taxation appearing to waive the general interest charge (GIC) on a tax debt owed to the Australian Taxation Office (ATO). The letter had been generated using a computer-automated template and sent without human review. Relying on it, the taxpayer sought finance from a third-party bank to repay his outstanding tax debt minus the GIC. When the Deputy Commissioner later issued a claim that included the GIC, the taxpayer sought judicial review of that decision under the Administrative Decisions (Judicial Review) Act 1977 (Cth). The application was dismissed. On appeal, the Full Federal Court held by a two-to-one majority that the letter issued by the automated system was not a ‘decision’ for the purposes of the Taxation Administration Act 1953 (Cth) as it did not constitute ‘the reaching of a conclusion as a result of a mental process’.[140] The result was that the Deputy Commissioner could reverse the original letter and require payment of the GIC. The most pertinent part of this case for the argument laid out in this article is the dissent of Kerr J, who warned that ‘what was once inconceivable, that a complex decision might be made without any requirement of human mental processes is, for better or worse, rapidly becoming unexceptional’,[141] and that increasing automation in government decision-making posed real challenges to maintaining traditional administrative law principles. The use of a computer to make the decision in Pintarich highlights both the failure of the Deputy Commissioner of Taxation to properly oversee the computer decision-making process used in that case, and the pitfalls of this process when computer decision-making is not legislatively sanctioned.
Pinder and Lloyd suggest that it is the role of the courts to ‘appropriately’ develop and apply administrative law principles.[142] However, given the requirement for legislative authorisation of these types of decisions, it should not be left to the courts to police this issue alone ex post. Further scrutiny of legislation before it is put into law is a role that parliamentarians should take on. Despite this onus, our study – as outlined in Part 4 – suggests that parliament is not equipped to consider the ramifications of increasing automated decision-making in government. Once the provision is in place in the form set out in this article, it is a matter for the individual agency exactly how, and under what controls, it implements the decision-making process.
6. Strategies to Ensure the Responsible Use of AI in Government Decision-making
There is a raft of guides, ethical practice statements and reports that, if heeded, can contribute to the responsible use of computers in making automated decisions. The 2018 AI Now Institute report warned that these systems risk introducing and reinforcing ‘unfair and arbitrary practices’ for which governments will be held accountable. The report called on governments to be cautious and to ensure proper oversight, transparency and accountability.[143] This article investigates the extent to which our legislators are scrutinising the increased use of automated decision-making in government agencies and argues that, at the very least, more oversight is needed.
The 2020 Better Practice Guide provides sensible guidance to agencies about the use of computers to make decisions. However, it is only a guide, provided by the Commonwealth Ombudsman, with no real enforceability and no repercussions if it is not followed.[144] Different agencies are responsible for following the Guide (or not) as they see fit. This will be more or less of a problem depending on the scope of the agency’s remit. For example, the Department of Home Affairs is responsible for the following functions: national security and law enforcement policy; emergency management, including crisis management and disaster recovery; countering terrorism policy and coordination; cyber security policy and coordination; countering foreign interference; critical infrastructure protection; multicultural affairs; countering violent extremism programs; and transport security.[145] A large number of decisions can be made in those areas that have potentially far-reaching effects on citizens and migrants.
By way of example, in an interview in 2018, the secretary of the Department of Home Affairs, Michael Pezzullo, discussed what he called the ‘golden rule’ of the Department in relation to automated decision-making. He said:
Your freedom to move, your ability to move between jurisdictions, your ability to travel, your ability to open a bank account, your ability to drive on the road, ultimately is going to be impacted by officials of the state saying you either can or can’t do something.[146]
The golden rule, the secretary said, is that:
No robot or no artificial intelligence system should ever take away someone’s right, privilege or entitlement in a way that can’t ultimately be linked back to an accountable human decision-maker.[147]
Secretary Pezzullo also said that AI systems will not always make final decisions but can be used to assist decision-making. The golden rule implies that negative decisions always pass through a human.[148] He said the golden rule would mean that human officers ‘might be prompted by an AI, they might be assisted by AI, but it’s a human that will deny your visa’.[149] Tellingly, however, the golden rule is no more than the secretary’s own policy. This betrays the discretion given to departments such as the Department of Home Affairs to allow computers to make whatever decisions the secretary of the day considers appropriate. In the future, the decision to apply the golden rule might be reversed or simply not applied. It also raises the problem of automation bias referred to in Part 5 above. The authority granted by the sections of the amending legislation is not conditioned on any such rule; it is limited only by the agency’s or department’s own choice of whether to use computers in decision-making. Enforcing the golden rule, then, is entirely at the discretion of the agency.
A further example of reliance on computers in decisions affecting people is the Department of Immigration and Border Protection’s Technology Strategy 2020, which emphasised the increasing use of ICT systems in the department. The stated rationale was to protect Australia from ‘terrorism, illicit materials, illegal migration and organised crime by utilising real-time data matching, intelligence, identity and biometrics, operational capability technologies, and automated decision-making systems’.[150] This emotive language further entrenches the notion that greater use of, and trust in, technology is the answer.
This illustrates that not all government decisions are appropriate for automation. As the 2020 Better Practice Guide explains, automation is not appropriate for decisions that require ‘the exercise of judgement or discretion, to ensure that elements of decision-making involving the use of discretion or judgement uphold the administrative law values of legality and fairness’.[151]
However, if such decisions are authorised by legislation – and, as we argue, this can be done without much consideration – then, on an ordinary interpretation of the statute, those decisions would be given primacy.[152] Each of the sections highlighted in this article contains this authority. Further, the sections make no reference to discretion. Each deems a decision made by the computer to be a decision of the head of the relevant agency.
There is some argument that discretionary decisions should not be automated and that any technological assistance surrounding such decisions must be carefully designed. Despite this, just three of the 35 legislative instruments authorising automated decision-making provided any such limitation. As discussed in Part 4, the three sections that do include such a limitation were introduced by the Veterans’ Affairs Legislation Amendment (Digital Readiness and Other Measures) Act 2017 (Cth), which was subject to notable debate on the risks of applying automated decision-making to claims that require discretion and determine issues of liability.[153] This provided the first example of the parliament clearly evaluating the risks associated with automation. The result was the inclusion of an amendment preventing the automation of decisions determining issues of liability. Unfortunately, this process was not engaged in for any earlier provisions, which remain unfettered.
6.1 Possible Fetters
Legislation
The 2020 Better Practice Guide was referred to throughout the Robodebt Report as representing best practice in the regulation of automated decision-making in government. The report noted that:
While the fallout from the Robodebt scheme was described as a ‘massive failure of public administration,’ the prospect of future programs, using increasingly complex and more sophisticated AI and automation, having even more disastrous effects will be magnified by the ‘speed and scale at which AI can be deployed’ and the increased difficulty of understanding where and how the failures have arisen.
Recommendation 17 of the Robodebt Report urged the government to ‘consider legislative reform to introduce a consistent legal framework in which automation in government services can operate’.[154] The report recognised that numerous laws had been amended to allow automated decision-making but lacked ‘the necessary further amendments establishing standards for which decisions should be automated and which should not’. On the content of such reform, the Robodebt Report stated:
A cohesive and accessible legislative legal framework, aimed at ensuring that algorithms and automated critical decision systems are fit for purpose, lawful, fair, and do not adversely affect human and legal rights, is particularly important where the interests of vulnerable people are concerned. Such legislative reform would involve amendment to existing legislation, and could involve the introduction of new legislation.
Aspects of the Australian AI Ethics principles could be included in legislative reform by way of a requirement that where automated decision making is used by a government agency, this is documented in a publicly-accessible format (for example, on the agency’s website).[155]
As discussed, unless requirements such as these are enshrined in legislation, guides and ethics statements will have varying and unpredictable effect. Agencies and departments such as the Department of Home Affairs will remain free to apply their own interpretations of the guides as they see fit.
Audit
The Robodebt Report also recommended that the government ‘consider establishing a body, or expanding an existing body, with the power to monitor and audit automated decision-making processes with regard to their technical aspects and their impact in respect of fairness, the avoiding of bias, and client usability’.[156] Scrutiny is essential to ensuring that the powers vested in departments to automate decisions do not result in a derogation of the administrative law principles of lawfulness, fairness, rationality, openness and efficiency. It was exactly this concern that, in 2004, led the ARC to recommend the establishment of an independent scrutiny panel to identify errors as soon as possible.[157] Currently, this power appears to rest with the Commonwealth Ombudsman. However, it is largely exercised only in reaction to public complaints.[158]
Other jurisdictions have considered this approach – for example, the House of Commons Science and Technology Committee in the United Kingdom called for such an oversight body to be established.[159] The argument in support is that such a body could ensure that the use and outcomes of automated decisions consistently adhere to the administrative law principles of lawfulness, fairness, rationality, openness and efficiency.[160] However, there is growing mistrust of audits as a means of holding algorithms to account. Sloane argues that:
Audits, which on their face sound rigorous, can end up as toothless reputation polishers, or even worse: They can legitimize technologies that shouldn’t even exist because they are based on dangerous pseudoscience.[161]
Despite the concerns raised by Sloane, we argue that, at the very least, there must be some oversight of these processes in government agencies before problems arise. A more proactive auditing process is required so that errors and inappropriately designed systems are identified before they cause detriment on the scale seen in Centrelink’s Robodebt controversy. This could be achieved by giving an automation oversight body an audit role, whereby it systematically audits the operation of existing automated decision processes to ensure they adhere to administrative law principles and the guidelines set out in the 2020 Better Practice Guide.
Increased Transparency
Public confidence will depend on transparency around the implementation and use of automated decisions in government. This could be better managed by increasing transparency not just for a particular decision but at a departmental level. Such a model would require agencies to note the use of automation in the reasons provided for a particular decision, as well as to maintain a general register of departmental algorithms that have significant public impacts. As Miller notes, publishing such records would:
allow public debate and scrutiny of technology-assisted decision-making to ensure that it is being used in a way that enhances, rather than undermines, the quality and integrity of administrative decision-making. Failing to do so will breed resentment and suspicion about ‘black boxes’ being created by government.[162]
To address the machinations of power in a neoliberal state, we must shine a light on the way that power is attained and used.
7. Conclusion
Our research investigated the level of parliamentary discussion of the legislative authorisation of computer decision-making in government and found it to be lacking. This article shows that, despite serious concerns about the power and reach of automated decision-making, there is little to no debate or even consideration of the potential mischief in these processes. The result – whether deliberate or inadvertent – has been a quiet accrual of power and an erosion of transparency in the government agencies that administer the fundamental rights of citizens. This should be a concern because it is likely to lead to further cases of serious administrative injustice. The unrelenting pursuit of government efficiency, coupled with the ever-growing capacity of AI computer systems, should be subject to careful consideration, debate and oversight. Examples such as the Centrelink ‘Robodebt’ controversy and Pintarich[163] serve as warnings to our policy-makers that a review of government automation is needed to ensure the administrative law principles of lawfulness, fairness, rationality and openness[164] are observed before disputes about automation reach the courts. To respond, parliamentarians must come to terms with the concerns raised in the many reports on automated decision-making before passing such sweeping powers into legislation. They should also heed the calls for more proactive oversight and scrutiny of these powers and the tools they authorise, to ensure they are used ‘in a way that enhances, rather than undermines, the quality and integrity of administrative decision-making’.[165]
Bibliography
AI Now Institute. Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems. New York: AI Now Institute, 2018.
Angwin, Julia, Jeff Larson, Surya Mattu and Lauren Kirchner. “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks.” ProPublica (2016). https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Australian Government. Automated Assistance in Administrative Decision Making: Better Practice Guide. Canberra: Australian Government, 2007.
Australian Government. “Bills and Legislation.” (2024).
Australian Government. “Federal Register of Legislation ‘Advanced Search’.” https://www.legislation.gov.au/AdvancedSearch.
Australian Human Rights Commission. Using Artificial Intelligence to Make Decisions: Addressing the Problem of Algorithmic Bias. Canberra: Australian Human Rights Commission, 2020.
Bahner, J Elin, Anke-Dorothea Hüper and Dietrich Manzey. “Misuse of Automated Decision Aids: Complacency, Automation Bias and the Impact of Training Experience.” International Journal of Human-Computer Studies 66, no 9 (2008): 688–699. https://doi.org/10.1016/j.ijhcs.2008.06.001.
Barocas, Solon and Andrew D Selbst. “Big Data’s Disparate Impact.” California Law Review 104 (2016): 671–732.
Bateman, Will. “Algorithmic Decision-Making and Legality: Public Law Dimensions.” Australian Law Journal 94 (2020): 520–530.
Bayamlioglu, E. “Contesting Automated Decisions.” European Data Protection Law Review 4, no 4 (2018): 433–446. https://doi.org/10.21552/edpl/2018/4/6.
Calo, Ryan and Danielle Keats Citron. The Automated Administrative State: A Crisis of Legitimacy. Social Science Research Network, 9 March 2020. https://papers.ssrn.com/abstract=3553590.
Carney, Terry. “The New Digital Future for Welfare: Debts Without Legal Proofs or Moral Authority?” UNSW Law Journal Forum 1 (2018). https://www.unswlawjournal.unsw.edu.au/wp-content/uploads/2018/03/006-Carney.pdf.
Citron, Danielle Keats. “Technological Due Process.” Washington University Law Review 85, no 6 (2008): 1249–1313.
COAG. Intergovernmental Agreement on Identity Matching Services. COAG, 2017. https://www.coag.gov.au/about-coag/agreements/intergovernmental-agreement-identity-matching-services.
Cobbe, Jennifer. “Administrative Law and the Machines of Government: Judicial Review of Automated Public-Sector Decision-Making.” Legal Studies 39, no 4 (2019): 636–655. https://doi.org/10.1017/lst.2019.9.
Coglianese, Cary and David Lehr. “Regulating by Robot: Administrative Decision Making in the Machine-Learning Era.” The Georgetown Law Journal 105 (2017): 1147–1223.
Administrative Review Council. Automated Assistance in Administrative Decision Making: Issues Paper. Canberra: Administrative Review Council, 2003.
Administrative Review Council. Automated Assistance in Administrative Decision Making: Report to the Attorney-General. Canberra: Administrative Review Council, 2004.
Commonwealth Government. “About | Data.” https://data.gov.au/page/about.
Commonwealth Government. Royal Commission into the Robodebt Scheme. Canberra: Commonwealth Government, 2023. https://robodebt.royalcommission.gov.au/publications/report.
Commonwealth Ombudsman. Automated Decision-Making Better Practice Guide. https://www.ombudsman.gov.au/publications/better-practice-guides/automated-decision-guide#sec-4.
Crawford, Kate. “Artificial Intelligence’s White Guy Problem.” The New York Times, June 25, 2016. https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html.
Cummings, Mary. “Automation Bias in Intelligent Time Critical Decision Support Systems.” AIAA 1st Intelligent Systems Technical Conference (2004). https://doi.org/10.2514/6.2004-6313
Danaher, John. “The Threat of Algocracy: Reality, Resistance and Accommodation.” Philosophy & Technology 29, no 3 (2016): 245–268. https://doi.org/10.1007/s13347-015-0211-1.
Department of Home Affairs. “Directory.” https://www.directory.gov.au/portfolios/home-affairs/department-home-affairs.
Department of Immigration and Border Protection. Technology Strategy 2020. Canberra: Department of Immigration and Border Protection, 2020. https://www.homeaffairs.gov.au/commitments/files/technology-strategy-2020.pdf.
Doran, Matthew. “Federal Government Ends Robodebt Class Action with Settlement Worth $1.2 Billion.” ABC News, November 16, 2020. https://www.abc.net.au/news/2020-11-16/government-response-robodebt-class-action/12886784.
Ensign, Danielle, Sorelle A Friedler, Scott Neville, Carlos Scheidegger and Suresh Venkatasubramanian. “Runaway Feedback Loops in Predictive Policing.” arXiv:1706.09847. http://arxiv.org/abs/1706.09847.
Fourcade, Marion and Jeffrey Gordon. “Learning Like a State: Statecraft in the Digital Age.” Journal of Law and Political Economy 1, no 1 (2020): 1–108. https://doi.org/10.5070/LP61150258.
Galloway, Kate. “Big Data: A Case Study of Disruption and Government Power.” Alternative Law Journal 42, no 2 (2017): 89–95. https://doi.org/10.1177/1037969X17710612.
Glenn, Richard. Centrelink’s Automated Debt Raising and Recovery System. Canberra: Commonwealth Ombudsman, 2017.
Goddard, Kate, Abdul Roudsari and Jeremy C Wyatt. “Automation Bias: Empirical Results Assessing Influencing Factors.” International Journal of Medical Informatics 83, no 5 (2014): 368–375. https://doi.org/10.1016/j.ijmedinf.2014.01.001.
Guihot, Michael and Lyria Bennett Moses. Artificial Intelligence, Robots and the Law. Sydney: LexisNexis, 2020.
Hildebrandt, Mireille. “The Magic of Data Driven Regulation: An Evening with Mireille Hildebrandt.” Sydney: UNSW, 13 December 2018.
House of Commons Science and Technology Committee (UK). Algorithms in Decision-Making. London: UK Parliament, 2018.
Karp, Paul. “Ombudsman Failed to Check Legality of Robo-Debt, Former Tribunal Member Says.” The Guardian, April 6, 2018. https://www.theguardian.com/australia-news/2018/apr/07/ombudsman-failed-to-check-legality-of-robo-debt-former-tribunal-member-says.
Kerr, The Hon Justice Duncan. “Foreword.” In The Automated State: Implications, Challenges and Opportunities for Public Law, edited by Janina Boughey and Katie Miller. Sydney: Federation Press, 2021.
Kitchin, Rob. The Data Revolution: Big Data, Open Data, Data Infrastructures & Their Consequences. Thousand Oaks, CA: Sage, 2014.
Lehr, David and Paul Ohm. “Playing with the Data: What Legal Scholars Should Learn About Machine Learning.” UC Davis Law Review 51 (2017): 653–717.
Miller, Katie. “The Application of Administrative Law Principles to Technology-Assisted Decision-Making.” AIAL Forum 86 (2016): 20–34.
Mosier, Kathleen L and Linda J Skitka. “Human Decision Makers and Automated Decision Aids: Made for Each Other?” In Automation and Human Performance: Theory and Applications, edited by Raja Parasuraman and Mustapha Mouloua. Boca Raton, FL: CRC Press, 1996.
Ng, Yee-Fui and Maria O’Sullivan. “Deliberation and Automation – When is a Decision a ‘Decision’?” Australian Journal of Administrative Law 26, no 1 (2019): 21–34.
Office of the Australian Information Commissioner. Guide to Data Analytics and the Australian Privacy Principles. Canberra: Office of the Australian Information Commissioner, 2018. https://www.oaic.gov.au/privacy/guidance-and-advice/guide-to-data-analytics-and-the-australian-privacy-principles.
Parasuraman, Raja and Dietrich H Manzey. “Complacency and Bias in Human Use of Automation: An Attentional Integration.” Human Factors: The Journal of the Human Factors and Ergonomics Society 52, no 3 (2010): 381–410. https://doi.org/10.1177/0018720810376055.
Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press, 2015.
Perry, Justice Melissa. “iDecide: Administrative Decision-Making in the Digital World.” Australian Law Journal 91 (2017): 29–34.
Pinder, Julian and Sophie Lloyd. “Computer Says No: Automated Decision Making and Administrative Law.” Law Society of NSW Journal (2015). https://lsj.com.au/articles/computer-says-no-automated-decision-making-and-administrative-law.
Reisman, Dillon, Jason Schultz, Kate Crawford and Meredith Whittaker. Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability. New York: AI Now Institute, April 2018. https://ainowinstitute.org/publication/algorithmic-impact-assessments-report-2.
Rieke, Aaron, Miranda Bogen and David G Robinson. Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods. Upturn, February 26, 2018. https://www.upturn.org/work/public-scrutiny-of-automated-decisions.
Sarter, NB and B Schroeder. “Supporting Decision Making and Action Selection Under Time Pressure and Uncertainty: The Case of in-Flight Icing.” Human Factors 43, no 4 (2001): 573–583. https://doi.org/10.1518/001872001775870403.
Sloane, Mona. “The Algorithmic Auditing Trap.” Medium, March 17, 2021. https://onezero.medium.com/the-algorithmic-auditing-trap-9a6f2d4d461d.
Smith, Matt. “Thousands of South Australian Families to Be Targeted by ATO Over Childcare.” The Advertiser, June 29, 2019. https://www.adelaidenow.com.au/news/south-australia/thousands-of-south-australian-families-to-be-targeted-by-ato-over-childcare/news-story/68d0e7cde4c3d61d7bff96fe2dfff404.
Walsh, Toby, et al. Closer to the Machine: Technical, Social, and Legal Aspects of AI. Melbourne: Office of the Victorian Information Commissioner, 2019. https://ovic.vic.gov.au/closer-to-the-machine-ai-publication.
Wickens, Christopher D, Benjamin A Clegg, Alex Z Vieane and Angelia L Sebok. “Complacency and Automation Bias in the Use of Imperfect Automation.” Human Factors 57, no 5 (2015): 728–739. https://doi.org/10.1177/0018720815581940.
Witenberg, Rivka T. “A Refugee, Like Me: Why the Golden Rule Matters in an Era of Mass Migration.” The Conversation, November 26, 2015. https://theconversation.com/a-refugee-like-me-why-the-golden-rule-matters-in-an-era-of-mass-migration-50957.
Wroe, David. “Top Official’s ‘Golden Rule’: In Border Protection, Computer Won’t Ever Say No.” Sydney Morning Herald, July 15, 2018. https://www.smh.com.au/politics/federal/top-official-s-golden-rule-in-border-protection-computer-won-t-ever-say-no-20180712-p4zr3i.html.
Zalnieriute, Monika, Lyria Bennett Moses and George Williams. “The Rule of Law and Automation of Government Decision-Making.” The Modern Law Review 83, no 3 (2019): 425–452. https://doi.org/10.1111/1468-2230.12412.
Zalnieriute, Monika, Lisa Burton and Janina Boughey. “From Rule of Law to Statute Drafting: Legal Issues for Algorithms in Government Decision-Making.” SSRN Electronic Journal, January 2019. https://doi.org/10.2139/ssrn.3380072.
Zerilli, John, Alistair Knott, James Maclaurin and Colin Gavaghan. “Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?” Philosophy & Technology 32, no 4 (2019): 661–683. https://doi.org/10.1007/s13347-018-0330-6.
Appendix: Acts authorising the use of computer-automated decision-making (most recent first)

| Act title | Section | Amending Act (No.) | Amending Act title | Level of discussion |
|---|---|---|---|---|
| | 4A | ad No 28, 2017 | | 3 |
| | 3A | ad No 28, 2017 | | 3 |
| | 4B | ad No 28, 2017 | | 3 |
| | 101B | ad No 16, 2017 | | 3 |
| | 105 | Original Act | | 1 |
| Export Control (Dairy Produce Tariff Rate Quotas) Order 2016 | 36 | Original Act | | No debate found |
| Export Control (Sheepmeat and Goatmeat Export to the European Union Tariff Rate Quotas) Order 2016 | 25 | Original Act | | No debate found |
| Export Control (Beef Export to the USA Tariff Rate Quota) Order 2016 | 19A | ad F2016L01423 | | No debate found |
| Export Control (High Quality Beef Export to the European Union Tariff Rate Quotas) Order 2016 | 42 | ad F2016L01423 | | No debate found |
| | 23B-4 | ad No 19, 2016 | | 2 |
| Export Control (Japan-Australia Economic Partnership Agreement Tariff Rate Quotas) Order 2016 | 19 | Original Act | | No debate found |
| | 23A(2)(h) | ad No 167, 2015 | | 2 |
| | 280(6)–(7) | Original Act | | 1 |
| | 126H | ad No 41, 2015 | | 2 |
| | 102 | Original Act | | 1 |
| | 124 | Original Act | | 1 |
| | 4A | ad No 98, 2012 | | 2 |
| | 12A | ad No 98, 2012 | | 2 |
| | 13A | Original Act | | 1 |
| | 66 | Original Act | | 1 |
| | 287 | Original Act | | 1 |
| | 87 | Original Act | | 1 |
| | 305 | Original Act | | 1 |
| | 242 | Original Act | | 1 |
| | 48 | Original Act | Superannuation (Government Co-contribution for Low Income Earners) Bill 2003 | 1 |
| | 495A | ad No 58, 2001 | Migration Legislation Amendment (Electronic Transactions and Methods of Notification) Bill 2001 | 2 |
| | 48 | Original Act | Australian Citizenship Bill 2007, brought over from the previous Australian Citizenship Act, which was amended by the Migration Legislation Amendment (Electronic Transactions and Methods of Notification) Bill 2001 | 2 |
| | 233 | rs No 38, 2001 | | No debate found |
| | 6A | 6A – ad No 47, 2001; 83 and 103 – Original Act | s 6A – Family and Community Services and Veterans' Affairs Legislation Amendment (Debt Recovery) Bill 2001 | 1 |
| | 223A | No 77, 2018 | | 2 |
| | 222A | No 77, 2018 | Intellectual Property Laws Amendment (Productivity Commission Response Part 1 and Other Measures) Bill 2019 | 2 |
| | 135A | No 77, 2018 | Intellectual Property Laws Amendment (Productivity Commission Response Part 1 and Other Measures) Bill 2020 | 2 |
| | 87 | No 132, 2011 | | 1 |
| | 76B | No 77, 2018 | Intellectual Property Laws Amendment (Productivity Commission Response Part 1 and Other Measures) Bill 2020 | 2 |
| | 7C | No 76, 2009 | Therapeutic Goods Amendment (2009 Measures No. 1) Bill 2009 | 1 |
*Former student at Queensland University of Technology.
**Associate Professor, Queensland University of Technology.
[1] Fourcade, “Learning Like a State.” Fourcade and Gordon argue that state bureaucracies classify, identify and measure data to “make the world legible, prepare it for intervention, and sustain the functions of government.”
[2] There are around 188 federal government departments and agencies – see https://www.directory.gov.au/departments-and-agencies. Even the data that government makes public – a small subset of its total data – is itself a very large body of seemingly unconnected datasets collected by government agencies over time: see Commonwealth Government, “About.”
[3] COAG, Intergovernmental Agreement on Identity Matching Services. Under the IGA dated 5 October 2017 the states agreed to give to the federal government drivers’ licence photos and other “identity information” they held. The IGA and the states would share in the surveillance capabilities that the federal government was planning to implement, including face verification services, face identification services and a facial recognition analysis utility service.
[4] Galloway, “Big Data.”
[5] Kitchin, The Data Revolution, 68. Big Data describes approaches, techniques and methods of processing high volumes of data with velocity and variety.
[6] Social Security (Administration) Act 1999 (Cth) s 6A.
[7] Migration Act 1958 (Cth) s 495A.
[8] Customs Act 1901 (Cth) s 126H.
[9] A New Tax System (Family Assistance) (Administration) Act 1999 (Cth) s 223; Smith, “Thousands of South Australian Families to be Targeted.”
[10] Guihot, Artificial Intelligence, 138.
[11] The authors were aided by an LLM in editing the introduction. The authors are grateful to Professor Kieran Tranter for his feedback, particularly in relation to the neoliberal nature of Australian government.
[12] See the list in the Schedule to this article.
[13] See for example: Social Security (Administration) Act 1999 (Cth) ss 6A, 103; Veterans’ Entitlements Act 1986 (Cth) s 4B.
[14] Patents Act 1990 (Cth) s 223A; Patents Act 1990 (Cth) s 222A.
[15] Australian Citizenship Act 2007 (Cth) s 48.
[16] Migration Act 1958 (Cth) s 495A.
[17] National Health Act 1953 (Cth) s 101B.
[18] Customs Act 1901 (Cth) s 126H.
[19] See the discussion in Item 26 of Schedule 1 of the Explanatory Memorandum for the Family and Community Services and Veterans' Affairs Legislation Amendment (Debt Recovery) Bill 2001, p 16 – ‘Where a person is receiving a social security payment and has an outstanding debt owed to the Commonwealth, a computer program will automatically make deductions from the person’s payments. These deductions will be automatically applied to reduce the amount of the outstanding debt. As this procedure is automated, that action will always be a cost-effective means for the Commonwealth to recover debts. The same approach will be applied in relation to family assistance payment debts’.
[20] Bahner, “Misuse of Automated Decision Aids”; Zalnieriute, “The Rule of Law”; Coglianese, “Regulating by Robot”; Rieke, Public Scrutiny of Automated Decisions; AI Now Institute, Litigating Algorithms; Perry, “iDecide”; Australian Human Rights Commission, Using Artificial Intelligence; Zerilli, “Transparency in Algorithmic and Human Decision-Making”; Bayamlioglu, “Contesting Automated Decisions”; Cobbe, “Administrative Law and the Machines of Government.”
[21] Pasquale, The Black Box Society.
[22] Barocas, “Big Data’s Disparate Impact.”
[23] Commonwealth Ombudsman, Automated Decision-Making Better Practice Guide.
[24] Australian Government, Automated Assistance in Administrative Decision Making.
[25] Administrative Review Council, Automated Assistance in Administrative Decision Making: Report to the Attorney-General (No 46, Administrative Review Council, November 2004) (ARC Report).
[26] See Kerr, “Foreword” v, ix for a discussion of this point.
[27] ARC Report, p 9 (emphasis added).
[28] Guihot, Artificial Intelligence, 61–67.
[29] Walsh, Closer to the Machine.
[30] Australian Human Rights Commission, Using Artificial Intelligence.
[31] Office of the Australian Information Commissioner, Guide to Data Analytics.
[32] Guihot, Artificial Intelligence, 69–77.
[33] Administrative Review Council, Automated Assistance: Issues Paper.
[34] Zalnieriute, From Rule of Law to Statute Drafting, 14–15; Danaher, “The Threat of Algocracy.”
[35] Pasquale, The Black Box Society.
[36] Ibid.
[37] Coglianese, “Regulating by Robot,” 1167.
[38] Reisman, Algorithmic Impact Assessments, 3.
[39] Australian Government, Automated Assistance in Administrative Decision Making, 44.
[40] Australian Government, Automated Assistance in Administrative Decision Making, 44.
[41] See Administrative Decisions (Judicial Review) Act 1977 (Cth) s 13; Administrative Appeals Tribunal Act 1975 (Cth) s 28.
[42] Freedom of Information Act 1982 (Cth).
[43] Danaher, “The Threat of Algocracy,” 247.
[44] Danaher, “The Threat of Algocracy,” 248.
[45] Citron, “Technological Due Process,” 1272.
[46] Church of Scientology v Woodward [1982] HCA 78; (1982) 154 CLR 25 at 70 (Brennan J).
[47] Administrative Decisions (Judicial Review) Act 1977 (Cth) s 5(1)(d).
[48] Administrative Decisions (Judicial Review) Act 1977 (Cth) s 5(1)(e). The ADJR Act does not define a “decision.” However, a decision to which the Act applies must “be administrative in character,” and, among other things, must “be made, proposed to be made, or required to be made under an enactment”: ADJR Act, s 3(1).
[49] Perry, “Administrative Law: iDecide,” 31. Principle 5 of the 2004 ARC Report recommended that, ‘The use of an expert system to make a decision – as opposed to helping a decision maker make a decision – should be legislatively sanctioned to ensure that it is compatible with the legal principles of authorised decision making’: Administrative Review Council, Automated Assistance in Administrative Decision Making: Issues Paper, viii.
[50] Commonwealth Ombudsman, Automated Decision-Making Better Practice Guide.
[51] Perry, “iDecide,” 31.
[52] Bateman, “Algorithmic Decision-Making,” 529.
[53] Administrative Review Council, Automated Assistance in Administrative Decision Making: Report, 24.
[54] Kioa v West [1985] HCA 81; (1985) 159 CLR 550 at 585.
[55] Kioa v West [1985] HCA 81; (1985) 159 CLR 550 at 585.
[56] Australian Broadcasting Tribunal v Bond (1990) 170 CLR 321 at 359.
[57] Ebner v Official Trustee in Bankruptcy [2000] HCA 63; (2000) 205 CLR 337 at 657.
[58] Zalnieriute, “The Rule of Law,” 14–15.
[59] Crawford, “Artificial Intelligence’s White Guy Problem.”
[60] Lehr, “Playing with the Data.”
[61] Crawford, “Artificial Intelligence’s White Guy Problem,” 89.
[62] Angwin, “Machine Bias.” A counter-argument was raised about the findings of the ProPublica report, but ProPublica has published a retort that stood by its claims.
[63] Lehr, “Playing with the Data,” 669–700.
[64] Ensign, “Runaway Feedback Loops in Predictive Policing.”
[65] Parasuraman, “Complacency and Bias in Human Use of Automation,” 391.
[66] Discussed more fully in Part 5 below.
[67] Calo, The Automated Administrative State.
[68] Zalnieriute, From Rule of Law to Statute Drafting.
[69] Carney, “The New Digital Future for Welfare.”
[70] Galloway, “Big Data,” 93.
[71] Glenn, Centrelink’s Automated Debt Raising and Recovery System, 5.
[72] Glenn, Centrelink’s Automated Debt Raising and Recovery System, 5.
[73] Carney, “The New Digital Future for Welfare,” 3.
[74] Carney, “The New Digital Future for Welfare,” 3.
[75] Carney, “The New Digital Future for Welfare,” 3.
[76] Karp, “Ombudsman Failed to Check Legality of Robo-Debt.”
[77] Doran, ‘Federal Government Ends Robodebt Class Action.”
[78] Commonwealth Government, Royal Commission into the Robodebt Scheme.
[79] Commonwealth Government, Royal Commission into the Robodebt Scheme.
[80] Acts Interpretation Act 1901 (Cth), s 15AA.
[81] Acts Interpretation Act 1901 (Cth), s 15AB.
[82] The Bill again goes through three readings in the Senate with similar scope for detailed debate. When the Bill has passed the Senate, the Senate returns it to the House, either with or without amendments. Parliament of Australia, “Infosheet 7.”
[83] Hansard was reviewed for 29 of the 35 instruments identified. The remaining six were export control orders.
[84] Australian Government, Federal Register of Legislation, “Advanced Search.”
[85] Australian Government, Bills and Legislation. We reviewed Hansard for 29 of the 35 instruments identified. The remaining six were export control orders and the A New Tax System (Family Assistance) (Administration) Act 1999 (Cth) for which no Hansard or explanatory memorandum could be found regarding the relevant section.
[86] Family and Community Services and Veterans’ Affairs Legislation Amendment (Debt Recovery) Bill 2001.
[87] Senator Chris Evans, Second Reading Speech for the Family and Community Services and Veterans' Affairs Legislation Amendment (Debt Recovery) Bill 2001, 8 November 2000, 19372.
[88] Explanatory Memorandum to the Family and Community Services and Veterans’ Affairs Legislation Amendment (Debt Recovery) Bill 2001, 22.
[89] Explanatory Memorandum to the Superannuation (Government Co-contribution for Low Income Earners) Bill 2003, 32, item 1.114.
[90] Explanatory Memorandum Paid Parental Leave Bill 2010, 126.
[91] Explanatory Memorandum Paid Parental Leave Bill 2010, 126.
[92] Commonwealth Ombudsman, Automated Decision-Making Better Practice Guide.
[93] Explanatory Memorandum Business Names Registration Bill 2011, 46.
[94] Australian Government, Social Security and Other Legislation Amendment (2012 Budget and Other Measures) Bill 2012, https://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=r4830.
[95] Social Security and Other Legislation Amendment (2012 Budget and Other Measures) Bill 2012, 1.
[96] Social Security and Other Legislation Amendment (2012 Budget and Other Measures) Bill 2012, Schedule 1.
[97] Social Security and Other Legislation Amendment (2012 Budget and Other Measures) Bill 2012, Schedule 2.
[98] Social Security and Other Legislation Amendment (2012 Budget and Other Measures) Bill 2012, Schedule 3.
[99] Social Security and Other Legislation Amendment (2012 Budget and Other Measures) Bill 2012, Schedule 4.
[100] Social Security and Other Legislation Amendment (2012 Budget and Other Measures) Bill 2012, Schedule 5.
[101] Social Security and Other Legislation Amendment (2012 Budget and Other Measures) Bill 2012, Schedule 6.
[102] Social Security and Other Legislation Amendment (2012 Budget and Other Measures) Bill 2012, Schedule 7.
[103] Explanatory Memorandum to the Social Security and Other Legislation Amendment (2012 Budget and Other Measures) Bill 2012.
[104] Senator Fifield, Second Reading Speech for the Social Security and Other Legislation Amendment (2012 Budget and Other Measures) Bill 2012, 26 June 2012, 4545.
[105] Second Reading Speech for the Social Security and Other Legislation Amendment (2012 Budget and Other Measures) Bill, Tuesday, 26 June 2012, 4545 and 4550.
[106] See Commonwealth Government, Royal Commission into the Robodebt Scheme.
[107] See National Health Act 1953 (Cth) s 101B, inserted by the National Health Amendment (Pharmaceutical Benefits) Act 2016; Explanatory Memorandum, National Health Amendment (Pharmaceutical Benefits) Bill 2016 (Cth) 1, 4; Commonwealth, Parliamentary Debates, House of Representatives, 20 March 2017, 2392–2393 (Steve Georganas); Commonwealth, Parliamentary Debates, Senate, 27 March 2017, 2177 (Helen Polley).
[108] See Military Rehabilitation and Compensation Act 2004 (Cth) s 4A; Safety, Rehabilitation and Compensation (Defence-related Claims) Act 1988 (Cth) s 3A; Veterans’ Entitlements Act 1986 (Cth) s 4B, inserted by the Veterans’ Affairs Legislation Amendment (Digital Readiness and Other Measures) Act 2017 (Cth); Explanatory Memorandum, Veterans’ Affairs Legislation Amendment (Digital Readiness and Other Measures) Bill 2016 (Cth) 9; Commonwealth, Parliamentary Debates, House of Representatives, 2 March 2017, 2128–2129 (Amanda Rishworth), 2138–2139 (Ross Hart), 2152 (Dan Tehan); Commonwealth, Parliamentary Debates, Senate, 20 March 2017, 1339 (Scott Ludlam), 1347 (Skye Kakoschke-Moore), 1351–1352 (David Fawcett), 1356–1357 (Jacqui Lambie); Commonwealth, Parliamentary Debates, Senate, 27 March 2017, 2320 (James McGrath).
[109] Commonwealth, Parliamentary Debates, House of Representatives, 2 March 2017, 2128 (Amanda Rishworth).
[110] Commonwealth, Parliamentary Debates, House of Representatives, 2 March 2017, 2138 (Ross Hart).
[111] (1990) 170 CLR 321.
[112] Supplementary explanatory memorandum to the Veterans' Affairs Legislation Amendment (Digital Readiness and Other Measures) Bill 2017.
[113] Commonwealth, Parliamentary Debates, Senate, 21 March 2017, 1694 (Senator Canavan, Minister for Resources and Northern Australia).
[114] Commonwealth, Parliamentary Debates, House of Representatives, 24 November 2016, 4311 (Ms Ley, Minister for Sport and Minister for Health and Aged Care).
[115] Hildebrandt, “The Magic of Data Driven Regulation.”
[116] See Parliamentary Education Office, “Senate Estimates”, https://www.peo.gov.au/learning/fact-sheets/senate-estimates.html.
[117] Wickens, “Complacency and Automation Bias in the Use of Imperfect Automation.”
[118] Parasuraman, “Complacency and Bias in Human Use of Automation,” 391.
[119] Mosier, “Human Decision Makers and Automated Decision Aids,” 205.
[120] Sarter, “Supporting Decision Making and Action Selection.”
[121] Goddard, “Automation Bias.”
[122] Bahner, “Misuse of Automated Decision Aids.”
[123] Cummings, “Automation Bias in Intelligent Time Critical Decision Support Systems.”
[124] Parasuraman, “Complacency and Bias in Human Use of Automation,” 394.
[125] Pinder, “Computer Says No.”
[126] Parasuraman, “Complacency and Bias in Human Use of Automation,” citing Cummings, “Automation Bias in Intelligent Time Critical Decision Support Systems.”
[127] Ng, “Deliberation and Automation.”
[128] Re Swinburne v ASIC [2014] AATA 602.
[129] Re Swinburne v ASIC [2014] AATA 602, [81].
[130] Re Swinburne v ASIC [2014] AATA 602, [81]–[84].
[131] B & L Whittaker Pty Ltd and ASIC and Anor [2014] AATA 302; (2014) 106 IPR 361.
[132] B & L Whittaker Pty Ltd and ASIC and Anor [2014] AATA 302; (2014) 106 IPR 361 at [1].
[133] B & L Whittaker Pty Ltd and ASIC and Anor [2014] AATA 302; (2014) 106 IPR 361 at [1].
[134] B & L Whittaker Pty Ltd and ASIC and Anor [2014] AATA 302; (2014) 106 IPR 361 at [16].
[135] Kerr, “Foreword,” viii.
[138] [2021] AATA 1703 (11 June 2021).
[139] Pintarich v Deputy Commissioner of Taxation [2018] FCAFC 79 (Pintarich). On 17 October 2018, the High Court refused an application for special leave to appeal the decision. See Pintarich v Deputy Commissioner of Taxation [2018] HCASL 322.
[140] Pintarich, [143].
[141] Pintarich, [47].
[142] Pinder and Lloyd, “Computer Says No,” 71.
[143] AI Now Institute, “AI Now Report 2018,” 22.
[144] Except for possible ex post administrative law repercussions.
[145] Department of Home Affairs, “Directory.”
[146] Wroe, “Top Official’s ‘Golden Rule’.”
[147] Wroe, “Top Official’s ‘Golden Rule’.”
[148] The irony of calling this policy the golden rule is that, in immigration, the ‘golden rule’ is that you do unto others as you would have them do unto you – see Witenberg, “A Refugee, Like Me.”
[149] Wroe, “Top Official’s ‘Golden Rule’.”
[150] Department of Immigration and Border Protection, Technology Strategy 2020.
[151] Commonwealth Ombudsman, Automated Decision-Making Better Practice Guide, 9.
[152] Bateman, “Algorithmic Decision-Making and Legality,” 529.
[153] Commonwealth, Parliamentary Debates, House of Representatives, 2 March 2017, 2139 (Ross Hart); Commonwealth, Parliamentary Debates, Senate, 20 March 2017, 1347 (Skye Kakoschke-Moore), 1356–57 (Jacqui Lambie).
[154] Commonwealth Government, Royal Commission into the Robodebt Scheme, iv.
[155] Commonwealth Government, Royal Commission into the Robodebt Scheme, 512.
[156] Commonwealth Government, Royal Commission into the Robodebt Scheme, iv.
[157] Australian Government, Automated Assistance in Administrative Decision Making, 48.
[158] See, for example, the investigation into Centrelink’s Robodebt: Glenn, Centrelink’s Automated Debt Raising and Recovery System.
[159] House of Commons Science and Technology Committee, Algorithms in Decision-Making, 3.
[160] Australian Government, Automated Assistance in Administrative Decision Making, 6.
[161] Sloane, “The Algorithmic Auditing Trap.”
[162] Miller, “The Application of Administrative Law Principles,” 31.
[163] Pintarich v Deputy Commissioner of Taxation [2018] FCAFC 79.
[164] Australian Government, Automated Assistance in Administrative Decision Making, 6.
[165] Miller, “The Application of Administrative Law Principles,” 31.