
University of New South Wales Law Journal

Faculty of Law, UNSW


Ng, Yee-Fui; O'Sullivan, Maria; Paterson, Moira; Witzleb, Normann --- "Revitalising Public Law in a Technological Era: Rights, Transparency and Administrative Justice" [2020] UNSWLawJl 37; (2020) 43(3) UNSW Law Journal 1041


REVITALISING PUBLIC LAW IN A TECHNOLOGICAL ERA: RIGHTS, TRANSPARENCY AND ADMINISTRATIVE JUSTICE

YEE-FUI NG,[*] MARIA O’SULLIVAN,[**] MOIRA PATERSON[***] AND NORMANN WITZLEB[****]

This article examines how public law should be revitalised in light of the increasing use of technology in government decision-making. As the recent controversy concerning the implementation of an automated debt recovery system by the Department of Social Services illustrates, the automation of government decision-making engages fundamental legal principles such as transparency, procedural fairness and reviewability. The use of technology in administrative decision-making in Australia therefore raises a number of critical, and interlocking, questions: Is Australian public law fit for purpose to protect individual rights in automated governmental decision-making? If not, what reforms are necessary and how should they be instituted? This article will consider these issues in relation to three specific areas of public law: privacy law, freedom of information, and judicial review. In doing so, it sets out concrete recommendations for the revitalisation of Australian public law so that it may become more value-compliant and consistent with emerging international best practice standards.

I INTRODUCTION

The trend towards the increased automation and use of artificial intelligence (‘AI’) in government decision-making poses complex challenges to existing substantive and procedural rights. As the recent controversy concerning the implementation of an automated debt recovery system by the Department of Social Services (‘Robodebt’) illustrates, the automation of government decision-making engages fundamental legal principles such as transparency, procedural fairness, reviewability and administrative justice.[1]

Public law is central to the protection of individual rights against the state in Australia. In particular, administrative law mechanisms can be used to provide a means for individuals to challenge unlawful government decisions and for courts to limit the arbitrary exercise of power. Privacy laws limit the use of personal data by government agencies, and freedom of information (‘FOI’) laws provide citizens with access to government information. However, Australian public law is unusual in that it must do much of the heavy lifting in rights protection while being isolated from the human rights discourse that has become ‘a global metanarrative in the evaluation of governmental action’.[2] In contrast, North American and European jurisdictions that have had to grapple with similar challenges have been able to do so within explicit human rights-based constitutional frameworks, resulting in administrative law regimes that are more adaptable to new challenges.

The increasing use of technology in administrative decision-making in Australia therefore raises a number of critical, and interlocking, questions: Is Australian public law fit for purpose to protect individual rights in automated governmental decision-making? If not, what reforms are necessary and how should they be instituted? Is there a way in which systemic deficiencies can be addressed and group rights protected, in addition to individual rights? Given that technological change is a global phenomenon but administrative law frameworks differ, how much assistance can be derived from developments in other jurisdictions that share similar legal values?

This article will consider these questions by focusing on three specific examples where automation and public law interact:

(i) the regulation of data input and processing;

(ii) the transparency of the systems of automation (focusing on reasons for decisions and access to information); and

(iii) the way in which affected persons can seek a remedy pursuant to judicial review.

This framework allows us to examine the interaction between these interlinked components of automation. Because the automation of government decisions in areas such as tax, social security and veterans’ entitlements is system-wide, deficiencies in the design, implementation or operation of automated systems have the potential to violate the rights of a large number of individuals. Therefore, public law should ideally adopt an approach that is also capable of addressing systemic issues, in addition to protecting the rights of particular individuals to access information or obtain redress.

In discussing these issues, we acknowledge the extensive literature that already exists on technology and law, such as reports and initiatives setting out the ethical standards which should apply to AI[3] or the regulatory framework for the private sector.[4] However, our focus is squarely on the public law domain – its values, laws and institutions. We have also chosen the three key legal issues above because they arise at various points in the continuum of automation: data input → information and explanation → remedy. The chosen issues demonstrate that the different stages of technology use must be viewed and analysed as a whole in order to fully understand the gaps in current legal and institutional frameworks. Thus, for instance, we note that the efficacy of any review mechanism will often depend on the laws relating to the collection and use of data in the first instance and the transparency of the particular automated system.

Interconnected Stages of Automation’s Interaction with Public Law
→ data collection (privacy and data sharing)
→ transparency and explanation (FOI and reasons)
→ review and remedy for harm (judicial review)

In addition, we do not seek to provide a comprehensive discussion of deficiencies in the mechanisms that we discuss. Instead, we focus on some emerging issues arising from automation that require attention. We further recognise that automation also touches on other aspects of public law that require further consideration. For instance, in relation to accountability and review, we acknowledge the important roles played by integrity and complaint-handling institutions such as the Ombudsman, and the review function performed by tribunals such as the Administrative Appeals Tribunal (‘AAT’). However, as a means of illustrating the role played by public law at key points in the interaction between a citizen and the digital environment, we consider judicial review the most useful lens through which to examine the legal issues which arise when individuals seek to challenge automated government decisions.

Based on this analysis of our three key areas, this article will make proposals to revitalise public law using values and principles both from the ‘bottom up’ and ‘top down’. In the ‘bottom up’ analysis, we revisit the basics of public law to examine what values should inform the development of automation. We believe that going back to the underpinning values is a useful way to assess the future reforms which need to be made to public law to ensure that it meets the challenges of automation. As UK public law commentator Paul Daly has noted, ‘values provide the motor for administrative law’.[5] We will also draw insights from the literature relating to the foundations of our current administrative law regime in the 1970s, which identified the principles and values underpinning a package of new Commonwealth laws described as the ‘new administrative law’.[6]

In its ‘top down’ analysis, the article will identify how international developments may inform the further development of our public law system. Given the global nature of technological developments, we believe there is substantial benefit in studying the solutions developed or discussed in other jurisdictions. While Australian public law does not need to mimic other legal systems, critical engagement with overseas developments is likely to assist in crafting solutions for Australia that are benchmarked against international best practice. By engaging in the dual analysis from the ‘bottom up’ and the ‘top down’, the article contributes to the developing literature on automation and administrative law in Australia[7] and overseas[8] and illustrates how domestic values and international approaches can and should shape a ‘new’ technological public law in Australia.

The analysis is structured as follows: first, the article will set out in Part II the values which we consider important in revitalising public law from the ‘bottom up’, by reference to the foundational principles, doctrines and values underpinning Australian public law. Part III then discusses the ways in which automation has affected citizens along the continuum of technological involvement: from the initial point of data collection and processing (privacy and data sharing), to the availability of information about the decision-making process (reasons for decisions and FOI) and, finally, to the point where a person seeks access to review and a remedy (judicial review). Part IV will then consider how Australian public law may be informed from the ‘top down’, by reference to international principles and global frameworks. Here, we use a case study from the Netherlands on its automated surveillance system for detecting welfare fraud.[9] The Dutch court struck down the automated system as incompatible with human rights, in contrast to the current federal litigation on digital welfare checks in Australia, which cannot rely on Commonwealth human rights legislation. Part V will then conclude the analysis by recommending ways in which Australian public law should be revitalised to become more value-compliant and consistent with global best practice.

II ANALYTICAL FRAMEWORK – THE ‘BOTTOM UP’ VALUES OF AUSTRALIAN PUBLIC LAW

It is important to establish an analytical framework for assessing how public law can be strengthened in an era of rapid advances in digital technologies. In this section we discuss the broad set of principles that the judiciary and commentators have recognised as public law ‘values’, before addressing, later in the article, the fundamental institutional frameworks established under Australian administrative law in the 1970s and 1980s.

A Public Law Values

Former High Court Chief Justice French, writing extra-judicially, has listed the following values of public law:

• Lawfulness;

• Rationality;

• Fairness;

• Process – accessibility, equitable cost, timeliness, intelligible explanation of decisions;

• Accountability, transparency, consistency, participation.[10]

In particular, the Hon Robert French has emphasised the importance of rationality and transparency where administrative decisions are made by artificial intelligence.[11]

Referring to Justice French’s writing,[12] the 2004 Administrative Review Council Report on automated decision-making identified lawfulness, fairness, rationality, openness or transparency, and efficiency as ‘crucial elements of the administrative law system’.[13] Since then, a number of judicial officers and academic commentators have further elaborated on the ‘values’ of public law.[14]

An alternative formulation of the values of Australian public law, suggested by Federal Court Chief Justice Allsop, refers to

reasonable certainty, so power can be understood, known and exercised, and branches of government take responsibility for its exercise, in a workably efficient and fair way. Secondly: honesty and fidelity to the Constitution, and to the freedoms and free society that it assumes, reflecting the constant of a principle of legality. Thirdly: a rejection of unfairness, unreasonableness and arbitrariness. Fourthly: equality. Fifthly: humanity, and the dignity and autonomy of the individual, as the recognition of, and respect for, the reciprocal human context of the exercise of power and the necessary humanity of the process ...[15]

Significantly, Chief Justice Allsop has underlined that these values should not be seen as a list of separate conceptions but, in fact, as interrelated.[16]

These public law values have also been referred to by courts and commentators under the umbrella concept of ‘good administration’. For instance, Finn J observed:

In the law, securing good administration can properly be said to be an organising idea for a group of principles which, in exacting procedural fairness, are designed to maintain public confidence in the integrity of administrative government ... In Commonwealth administration ... The reforms of the last decade and more ... have seen an accentuated emphasis on service delivery, performance and results.[17]

Another helpful concept to assess automation is ‘administrative justice’. This concept is difficult to define,[18] but there appears to be a level of consensus amongst scholars that it draws from several widely accepted principles such as accountability, transparency, consistency, rationality, impartiality, participation, procedural fairness and reasonable access to judicial and non-judicial grievance mechanisms.[19] Administrative justice is, in turn, a fundamental aspect of the rule of law.[20] Within the context of automated decision-making, there are certain aspects of the rule of law which are of particular significance:

• The need for laws to be open, accessible and clear;[21]

• An independent judiciary, with power to review the actions of government to ensure that it conforms to the rule of law;[22]

• Adequate protection of fundamental human rights;[23] and

• Compliance by the state with its obligations under international law (as well as national law).[24]

The latter two points about international law and human rights will be discussed further in Part IV of this article. It is not the purpose of this article to analyse the human rights implications of automation because this issue has been addressed at length in existing literature.[25] However, we note that the absence of a federal human rights charter in Australia severely constrains the ability of persons affected by automated decisions to seek review of those decisions on human rights grounds. Australia is also not a party to a regional human rights mechanism such as that which operates in Europe via the European Convention on Human Rights (‘ECHR’).[26] As a result, Australian litigants must rely on domestic administrative law mechanisms. We will further explore this difference in Part IV, which compares the Robodebt litigation and recent Dutch litigation on welfare automation as a case study. In that Part, we also address whether international legal principles are currently reflected in Australian public law in an appropriate or adequate way.

In addition to these broad public law values, we note that specific principles were enunciated by the landmark Kerr Committee and Bland Reports in the 1970s.[27] The 1971 Kerr Committee Report is of interest as it emphasised efficiency in addition to justice, noting the need to ‘ensure the establishment and encouragement of modern administrative institutions able to reconcile the requirements of efficiency of administration and justice to the citizen’.[28] For our purposes, the comments by Justice Brennan in 1977 neatly illustrate the enduring objectives which guided these reforms:

the traditional reticence of the administrative decision-maker is replaced by his written expression of reasons; access to the Court is simplified and facilitated. The citizen is thus enabled to challenge, and to challenge effectively, administrative action which affects his interests.[29]

These core principles of providing reasons and giving affected persons access to a mechanism for challenging governmental decisions speak to the current issues of transparency, lawfulness and review of automated decisions which are discussed in this article.

III AUTOMATION AND EXISTING GAPS IN AUSTRALIAN PUBLIC LAW

A Background

As early as 2003, the Administrative Review Council listed a number of major federal government agencies, including Comcare, the Department of Defence, the Department of Veterans’ Affairs, and the Australian Taxation Office, that made use of automated systems in governmental decision-making.[30] Since then, there have been major advances in technologies, including Big Data Analytics (‘BDA’), AI and machine learning, which provide new opportunities for government authorities to develop and employ automated decision-making tools. The Australian public sector is now using technology-assisted decision-making in a wide range of contexts, with Centrelink’s automated debt raising and recovery system and the Australian Border Force’s SmartGate identity checking at Australian airports being some of the best-known examples.

The use of automation in administrative processes can improve efficiency, certainty, predictability and consistency.[31] Automated systems have the capacity to ‘process large amounts of data more quickly, more reliably and less expensively than their human counterparts’ and have a useful role to play ‘when high frequency decisions need to be made by government’.[32] New technologies are also being assessed for their potential to improve access to justice and provide support for marginalised groups.[33]

However, the use of emerging technologies is rarely unproblematic. The nature and extent of problems vary according to the specific technology used, the role which it plays in the decision-making process and the types of decisions to which it is applied. Our focus is on the use of ‘algorithms’,[34] including AI-based algorithms, to automate either the whole or part of administrative decision-making processes. AI refers to ‘[t]he ability for a computer to do something that requires intelligence, such as learning or problem solving’, while machine learning is a subset of AI that refers to ‘the ability for a computer to perform tasks without being given explicit instructions how, instead “learning” how to perform those tasks by finding patterns and making inferences’.[35] Automated systems based on machine learning differ from rules-based systems that apply rigid criteria to factual scenarios.[36]
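
To make this distinction concrete, the following sketch (a simplified illustration in Python; the eligibility criteria and data are invented for this example, not drawn from any actual government system) contrasts a rules-based system, whose criteria are explicit and can be restated as reasons, with a machine learning model, whose ‘criteria’ are weights inferred from historical outcomes:

```python
# Illustrative sketch only; criteria and data are invented.

# A rules-based system applies fixed, inspectable criteria:
def rules_based_eligible(age: int, fortnightly_income: float) -> bool:
    # Each criterion is explicit, so it can be quoted back as a reason.
    return age >= 22 and fortnightly_income < 1000.0

# A machine learning system instead infers its decision boundary from
# patterns in past outcomes, so there is no single rule to point to:
from sklearn.linear_model import LogisticRegression

past_cases = [[25, 300.0], [40, 1500.0], [30, 900.0], [55, 2000.0]]
past_outcomes = [1, 0, 1, 0]  # 1 = eligible, 0 = not eligible

model = LogisticRegression().fit(past_cases, past_outcomes)
prediction = model.predict([[28, 800.0]])  # learned, not legislated, criteria
```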

These technologies may be used to automate decisions either in part or in whole. The extent to which this raises potential problems depends on the extent, if any, to which the decision-making requires the exercise of discretion, and on the seriousness of its consequences for the individuals affected.

The use of AI to automate decision-making raises a whole range of rule of law issues, in particular in relation to procedural fairness,[37] the transparency of decision-making,[38] the protection of personal privacy[39] and the right to equality.[40] Some automated systems have proved controversial because of problems in their design and implementation.[41] Because such controversies were highly publicised or affected vulnerable populations, they have the potential to undermine public trust and the acceptance of new initiatives aimed at improving the efficiency of administrative services.[42]

Commonwealth departments have accordingly tended to adopt a cautious approach. Departments that use technology to make high volume decisions, such as the Department of Home Affairs and Department of Veterans’ Affairs, have reported that they use the ‘golden rule’ in automating decision-making.[43] Under this rule, decisions that have a beneficial outcome for citizens may be automated, while negative decisions are subject to human intervention.[44] Therefore the principle underpinning the use of automation in this context is that it will be used only as a ‘triage’ tool to make the granting of positive decisions to applicants more efficient.
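
A minimal sketch of how such a triage rule might be expressed (hypothetical Python; the routing logic reflects the ‘golden rule’ as reported, not any department’s actual implementation):

```python
# Hypothetical sketch of the reported 'golden rule': only outcomes that are
# beneficial to the applicant are finalised automatically; everything else
# is referred to a human officer.
from dataclasses import dataclass

@dataclass
class DraftOutcome:
    applicant_id: str
    outcome: str        # eg 'grant visa', 'refuse visa'
    beneficial: bool    # does the outcome favour the applicant?

def triage(draft: DraftOutcome) -> str:
    if draft.beneficial:
        return "finalise automatically"
    return "refer to human decision-maker"

print(triage(DraftOutcome("A-1", "grant visa", beneficial=True)))
print(triage(DraftOutcome("A-2", "refuse visa", beneficial=False)))
```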

It should be noted, however, that certain decisions are more amenable to the golden rule than others, particularly where there is a clear beneficial outcome to the applicant, eg, a decision to grant a visa. In other cases, such as social security and tax, where the amount of a benefit or debt is at issue, the golden rule is more difficult to apply.

In addition, as technology advances inexorably, there will be an increasing temptation to further automate complex decisions to enhance the efficiency of government decision-making. The Commonwealth Government has adopted a digital transformation strategy that aims to use automated systems, where possible, ‘to eliminate manual processing and case management, reducing the need for bespoke systems’.[45] The NSW Government has said it will start to ‘[t]est AI/cognitive/machine learning for service improvement’ and aims to achieve ‘[f]ull automation where appropriate’.[46] It is therefore imperative to scrutinise more closely the interaction between the government’s use of new technologies and administrative law frameworks, such as judicial review, rights to reasons for decisions, FOI, and public sector privacy laws.

B Data Processing and Sharing – Public Sector Privacy Protections

Government decision-making in individual cases will almost inevitably involve the collection, use or storage of personal information. Australian public sector agencies processing personal information are generally subject to various forms of information privacy laws. The relevant laws differ from jurisdiction to jurisdiction and include the Privacy Act 1988 (Cth) (‘Privacy Act’), the Privacy and Personal Information Protection Act 1998 (NSW) and the Privacy and Data Protection Act 2014 (Vic).[47] Despite significant variation in detail, these laws generally have in common that they impose requirements to comply with stated Information Privacy Principles, unless an exemption or exception applies. These Principles are informed by internationally recognised best practice guidelines on data handling. Under these Principles, agencies have an obligation to inform individuals about their privacy policies, and they are constrained in the means and purposes for which data is collected, in how it is handled, in the circumstances in which it may be disclosed, in how it is to be stored, and so forth. Australian law currently does not contain any specific requirements regarding automated decision-making, which means that the laws generally applicable to government handling of personal information also apply to handling in the form, and for the purposes of, automated decision-making.

A further issue to be considered is that automated decision-making may often involve data sharing between agencies. This was also the case in the Robodebt scenario, where the Department of Social Services used annual income data obtained from the Australian Taxation Office to calculate average fortnightly incomes.[48] Following the Productivity Commission’s Inquiry into data availability and use,[49] the Federal Government is reforming its data governance framework to better realise the economic and social benefits of increased data use, while maintaining public trust and confidence. If and when enacted, the planned federal Data Availability and Transparency Act will sit alongside existing data sharing and release legislation in New South Wales, South Australia and Victoria, with other Australian jurisdictions also currently in the process of establishing new frameworks for enabling responsible information sharing between public sector agencies.[50] Although the new federal legislation is still under consideration, the Government has committed that sharing under the legislation will not be allowed for compliance, national security or law enforcement purposes.[51] This exclusion is intended to address concerns over the privacy effects of data sharing for investigations, monitoring and taking action targeted at individuals. While ‘service delivery’ will remain a permitted purpose for data sharing, this category is directed more at generalised improvements to government services that have low privacy risks than at assurance and compliance in an individual case.
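
A simplified, hypothetical calculation illustrates why income averaging of this kind proved contentious: spreading annual income evenly across fortnights can impute income to periods in which none was earned. The figures below are invented, and the sketch is a model of the kind of averaging attributed to the system, not the actual Centrelink algorithm:

```python
# Hypothetical figures; a simplified model of income averaging of the kind
# attributed to the Robodebt system, not the actual Centrelink algorithm.
FORTNIGHTS_PER_YEAR = 26

# A recipient earns $26,000, all of it in the first 13 fortnights, and
# correctly reports $0 income while receiving benefits in the remainder.
actual_fortnightly_income = [2000.0] * 13 + [0.0] * 13

annual_income = sum(actual_fortnightly_income)  # 26000.0, as reported to the ATO
averaged = annual_income / FORTNIGHTS_PER_YEAR  # 1000.0 imputed to every fortnight

# Averaging attributes $1,000 of income to fortnights in which nothing was
# earned, so an automated comparison against the recipient's (accurate)
# fortnightly reports can generate an apparent, but non-existent, 'debt'.
print(averaged)  # 1000.0
```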

C Transparency: Reasons for Decisions and Access to Information

Transparency, or openness, is recognised as one of the core administrative values identified above. Both openness and ‘explainability’ also have a pivotal role to play in ensuring adherence to other key values including lawfulness, fairness and rationality.

It is broadly recognised that the fact, extent and operation of automation in decision-making should be transparent. As the UK House of Lords noted in its landmark report on AI: ‘Each individual should ... have access to the rationality behind a decision being made. The process needs to be transparent and easily understood by society’.[52]

However, a major challenge associated with big data, and algorithmic and automated decision-making, is their opacity. Algorithmic decision-making can be opaque in two ways. The first is its invisibility; people often do not realise that they are interacting with the technology, and generally know little about the programs that are used to make decisions about them. The second is the complexity of its functioning. This leads to what is commonly known as the ‘black box’ problem, whereby ‘it is possible to observe incoming data (input) and outgoing data (output) in algorithmic systems, but their internal operations are not very well understood’.[53] As highlighted by Oswald, incorporating an algorithm into decision-making ‘may come with the risk of creating “substantial” or “genuine” doubt as to why decisions were made and what conclusions were reached’.[54]

There are two main mechanisms that promote transparency in respect of administrative decision-making in Australia: legislative requirements to provide written reasons for decisions and the Freedom of Information Act 1982 (Cth) (‘FOI Act’). We discuss these in turn.

1 Right to Reasons for Decisions

In the absence of a common law right to obtain reasons, there are two key laws that impose obligations to provide reasons for decisions, on request by an applicant. Both the Administrative Decisions (Judicial Review) Act 1977 (Cth) (‘ADJR Act’) and the Administrative Appeals Tribunal Act 1975 (Cth) (‘AAT Act’) require decision-makers on request to provide ‘a statement in writing setting out the findings on material questions of fact, referring to the evidence or other material on which those findings were based and giving the reasons for the decision’.[55]

These requirements are explained in a set of guidelines published by the Administrative Review Council.[56] As summarised by Groves, a statement of reasons must ‘do more than simply list evidence and state the decision reached’.[57] It must also provide an explanation of ‘the logic or “intellectual process” by which evidence was used to reach the decision’.[58] Moreover, as stated in Campbelltown City Council v Vegan, ‘where more than one conclusion is open, it will be necessary ... to give some explanation of [the] preference for one conclusion over another’.[59]

2 Freedom of Information

The FOI Act provides a right of access to documents in the possession of public sector bodies[60] subject to various exceptions and exemptions. It also requires those bodies proactively to publish specified information,[61] including their ‘operational material’[62] (the material that assists them to perform or exercise their functions or powers in making decisions or recommendations that affect members of the public).[63]

These requirements provide a potential avenue for obtaining crucial information about the software used to automate decisions, the circumstances in which it was created or purchased, and, where relevant, the materials that were used to train it and any tests run to gauge its accuracy.

However, the FOI Act does not contain any specific requirements concerning the creation or retention of data. There is no general statutory requirement to create documents, and whether or not they are retained is determined by the requirements in the Archives Act 1983 (Cth).

3 Implications for Automated Decision-Making

(a) Reasons

The statutory requirements to provide reasons for decisions have been drafted and interpreted to date on the assumption that a decision is made entirely by an individual. There is some lack of clarity concerning the extent to which the ADJR Act applies to automated decisions (as discussed in Part III(D) below) and also, to the extent that it does apply, what specifically it requires in relation to such decisions. This is particularly the case where the automation involves machine learning, given the difficulties in explaining the reasoning process involved.[64] In Re Schouten and Secretary, Department of Education, Employment and Workplace Relations (‘Re Schouten’),[65] the departmental representative at the tribunal hearing was unable to explain how the applicant’s rate of youth allowance was determined because replicating the algorithm used to calculate the ‘reduction of actual means’ test was no longer possible given the complexity of the database and its programming.[66] While it was ultimately established that the rate was calculated correctly, AAT Senior Member Britton highlighted the need for greater transparency where a decision is automated because a ‘citizen will not understand and therefore be unable to challenge a decision about which they feel aggrieved unless provided with a plain English explanation of the basis for the decision’.[67]

While there is continuing debate as to what is required to make AI-based decision-making meaningfully transparent,[68] the evolving research into explainable AI[69] may provide useful guidance concerning best practice in ensuring that such decision-making lends itself to the provision of reasons (or at least some meaningful equivalent).

(b) Freedom of Information

The FOI Act is likewise imperfectly drafted to respond to the modern context of AI-driven decision-making. A key underlying deficiency is that it applies to documents rather than information, unlike, for example, the Freedom of Information Act 2000 (UK). The requirement to specify the documents to be accessed creates difficulty in the case of complex decision-making processes that are not well understood by the applicant. Furthermore, these documents can be accessed only to the extent that they continue to exist, which may be problematic, for example, in the case of training materials, in the absence of specific legal requirements to retain them.

A further difficulty is that much of the documentation that sheds light on the algorithms that underlie AI-based decision-making is likely to qualify for exemption under section 47(1) of the FOI Act. This applies where a document would disclose (a) ‘trade secrets’[70] or (b) ‘any other information having a commercial value that would be, or could reasonably be expected to be, destroyed or diminished if the information were disclosed’.[71] This exemption can operate to protect the commercial interests of agencies as well as third parties.

The expression ‘trade secrets’ in section 47(1)(a) has been expansively defined, and the alternative test in section 47(1)(b) is broad-ranging, as the expression ‘diminished’ is unqualified by any test of seriousness and there is no requirement to consider the public interest in disclosure. Relevantly, in Re Cordover and Australian Electoral Commission,[72] the Tribunal considered the application of this test in relation to the ‘source code’ for vote counting software which had been developed by the Australian Electoral Commission (‘AEC’) at substantial cost and was licensed out by it. The Tribunal concluded that the source code constituted a trade secret; this was based on evidence that the AEC had taken precautions to limit its dissemination,[73] and that it had commercial value and was used in trade.[74]

It should be noted that article 15 of the General Data Protection Regulation (‘GDPR’) of the European Union (‘EU’) provides data subjects with a right to request access to information about ‘the existence of automated decision-making’ and also to ‘meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject’.[75] This is arguably a narrower right than the Australian statutory rights to reasons, but it operates in addition to any general obligation under Member State law to provide reasons for an administrative decision and may be more practicable as a means of shedding light on AI-based decision-making.

We now turn to examine the way in which affected persons can seek review for an automated government decision under administrative law mechanisms, by focusing on judicial review.

D Review and Remedy for Automated Decision-Making: Judicial Review

There are two main avenues of federal judicial review in Australia. The first is the ADJR Act, which contains a simplified statutory procedure for review by the Federal Court that applies in relation to decisions of an administrative character made under Commonwealth enactments. The second is the constitutional review mechanism under section 75(v) of the Commonwealth Constitution, which confers original jurisdiction on the High Court of Australia ‘[i]n all matters ... in which a writ of Mandamus or prohibition or an injunction is sought against an officer of the Commonwealth’.[76] Each will be examined in turn.

1 ADJR Act

In order to challenge a government decision under the ADJR Act, an applicant must establish three elements to enliven the jurisdiction of the relevant court: that there is a ‘decision’, ‘of an administrative character’, ‘made under an enactment’.[77] One of the contested issues with automation is whether automated decisions are ‘decisions’ for this purpose.

The High Court has held that a ‘decision’ under the ADJR Act is ‘final or operative and determinative’ of an issue of fact falling for consideration.[78] On the other hand, ‘stepping stone’ determinations that are not authorised by statute are not reviewable ‘decisions’ under the Act, although errors of law made in these intermediate determinations can inhere in and be challenged as part of the ultimate determination.[79]

The applicability of this position to automated decisions arose in the 2018 case of Pintarich v Deputy Commissioner of Taxation (‘Pintarich’).[80] Here the question was whether the Deputy Commissioner had made a ‘decision’ on a taxpayer’s request for remission of his general interest charge liabilities. The taxpayer had received a letter issued by the Australian Taxation Office (‘ATO’) bearing the signature block of the Deputy Commissioner, headed ‘Payment arrangement for your Income Tax Account debt’, which read:

Thank you for your recent promise to pay your outstanding account. We agree to accept a lump sum payment of $839,115.43 on or by 30 January 2015.

This payout figure is inclusive of an estimated general interest charge (GIC) amount calculated to 30 January 2015. Amounts of GIC are tax deductible in the year in which they are incurred.[81]

This letter, sent on 8 December 2014, wrongly stated that the tax payable included the amount of the general interest charge (‘GIC’), which was approximately $335,000. The decision-maker at the ATO, Mr Celantano, stated that he had ‘keyed in’ certain information into a computer-based ‘template bulk issue letter’ and that it was this process that generated the letter. Mr Celantano did not check the letter before sending it. Correspondence in 2014 between the taxpayer and the ATO indicated that the ATO regarded the GIC charges as still under determination and clarified that the 2014 letter referred only to the primary debt. On 15 May 2015, another Deputy Commissioner of Taxation wrote to the taxpayer and advised that the request for full remission of the GIC was denied. Following correspondence from the taxpayer, on 13 May 2016 the Deputy Commissioner of Taxation sent a letter granting partial remission of the GIC. The taxpayer challenged the 2016 decision of the Deputy Commissioner, claiming that the 2014 letter was a ‘decision’ to grant his application to remit all the GIC incurred by him up to the time of the decision.

The Full Federal Court held, by majority, that the letter did not amount to a valid ‘decision’. The majority (Moshinsky and Derrington JJ) found that a valid decision had two elements:

1. a mental element: there must be a ‘mental process’ of reaching the decision, that is, a ‘process of deliberation, assessment and/or analysis’ on the part of the decision-maker;[82] and

2. an objective manifestation: there must be an objective manifestation of that decision.[83]

Therefore, in its judgment in Pintarich, the Full Federal Court has cast doubt on whether automated decisions are reviewable under the ADJR Act. This is because Moshinsky and Derrington JJ held that a ‘decision’ made under the ADJR Act has to involve a mental process of deliberation.[84] It is notable that Kerr J provided a significant dissent that recognised the difficulties in imposing a requirement that human mental processes need to be engaged for an act to be a ‘decision’ under the ADJR Act, particularly in the context of automated decision-making systems:

The hitherto expectation that a ‘decision’ will usually involve human mental processes of reaching a conclusion prior to an outcome being expressed by an overt act is being challenged by automated ‘intelligent’ decision-making systems that rely on algorithms to process applications and make decisions.

What was once inconceivable, that a complex decision might be made without any requirement of human mental processes is, for better or worse, rapidly becoming unexceptional. Automated systems are already routinely relied upon by a number of Australian government departments for bulk decision-making. Only on administrative (internal or external) and judicial review are humans involved.[85]

Kerr J objected that the legal conception of what constitutes a decision ‘cannot be static; it must comprehend that technology has altered how decisions are in fact made and that aspects of, or the entirety of, decision-making, can occur independently of human mental input’.[86] While this view is preferable, the High Court has refused special leave to appeal this case,[87] meaning that the majority’s decision remains the final word on the issue.

Importantly, if a human makes a decision guided or assisted by automated systems, this would still be a decision under the ADJR Act on the majority’s interpretation, as it would still involve a mental process of deliberation and cogitation by a human decision-maker.

Where the decision is actually made by an automated machine without any human involvement, there is unlikely to be a decision under the ADJR Act, as the majority’s test presumes that a human brain is involved in a mental process. This may lead to a perverse incentive for departments and agencies to automate so as to avoid judicial review.

2 Section 75(v) of the Constitution

Another avenue of challenge is via section 75(v) of the Constitution, which gives the High Court original jurisdiction in all matters where constitutional writs are sought against an officer of the Commonwealth. There is no issue if a computer merely assists a human and the human, who is a Commonwealth officer such as a public servant, makes the actual decision.

However, it is more difficult to argue that a fully automated decision falls within the scope of section 75(v), as courts have read in a requirement of a formal appointment of a natural person, and a prohibition against artificial persons.[88]

On the other hand, it may be argued that section 75(v) review would still be available, although possibly not for the decision itself. The focus of section 75(v) is on the decision-maker, rather than the method of decision-making (via a decision). It may thus be argued that review exists for any actions the Commonwealth officer may take in reliance on that decision (eg, deducting payments from a pension following a computer ‘decision’ that a debt was owed).[89] This suggests that as long as a human remains in the decision-making loop, challenges to automated decisions via section 75(v) may remain a viable option.[90]

In addition, there is a growing number of deeming provisions across a range of statutes, including social security, migration and business registration legislation.[91] These deeming provisions may enable review under section 75(v) by providing that a decision effected by the use of AI is taken to be a decision by an individual such as the Secretary. The clauses also typically require an individual to have control of that system.[92] It may be argued that these deeming provisions mean that Parliament intended to preserve review rights and to enable enforcement action for such automated decisions.

Further, given the importance the High Court has placed on the jurisdiction under section 75(v),[93] it may be that the High Court would not read section 75(v) in a way that would allow review to be avoided where a ‘decision’ is made by AI rather than a human.

In short, it is uncertain whether judicial review under section 75(v) of the Constitution will be available for automated decisions made by computers, and this issue remains to be clarified by case law.

3 Implications

Therefore, it can be seen that the automation of decisions is likely to preclude judicial review under the ADJR Act, and may possibly exclude judicial review under section 75(v) of the Constitution, leaving individuals unable to challenge automated decisions by the government. This is because Australian administrative law has historically focused upon human decision-makers, as reflected in the framework for judicial review. In addition, Australian courts have adopted a formalist approach of interpretation rather than a purposive analysis in interpreting the ADJR Act and section 75(v) of the Constitution. The court’s requirement of human deliberation for a ‘decision’ to be made under the ADJR Act has left a vacuum where decisions are automated, while the narrow reading of section 75(v) of the Constitution, to only include natural persons, has excluded the ability of individuals to challenge the decisions of corporate and potentially technological entities.[94] By contrast, the UK courts’ focus on public function, rather than legal form and the requirements of a decision, is more amenable to reviewability within a changing technological environment.[95]

To compound this, the more recent deeming provisions in Australia may enable broader use of AI, even if in practice it is not yet used for discretionary decisions. For example, the Parliamentary Joint Committee on Human Rights expressed concern about the Migration Legislation Amendment (Electronic Transactions and Methods of Notification) Act 2001 (Cth) inserting section 495A into the Migration Act 1958 (Cth), which allows decisions by computer programs that may involve complex or discretionary considerations. In particular, section 72(2)(e) of the Migration Act 1958 (Cth) allowed a ‘public interest’ test to be automated in relation to the grant of bridging visas, which requires the use of discretion. Although the Minister clarified that his personal decision-making powers are not exercised through departmental computer programs[96] and the Department of Home Affairs has recently abandoned its proposal to automate most of its visa processes,[97] the broad legislative framework permits such automation and does not preclude a future practice of automating discretionary decisions.

Assuming there is jurisdiction for the court to adjudicate on an automated decision, the next question is which grounds of review might be utilised to challenge automated decisions. There are several possible grounds relating to the design of an automated decision-making process. For example, where interim steps in a decision-making process are automated, there may be a potential for a decision to be affected by jurisdictional error where there is an error in the automation process.[98] Indeed, as we will see in the discussion of the Robodebt consent orders in Part IV, an automated decision made on the basis of flawed decision-making methodology may be challenged as being irrational, which is a jurisdictional error, thus invalidating the decision.[99] In addition, the automated system would need to be designed carefully so that the discretion of the decision-maker (if any) remains unfettered in exercising their power under the relevant legislation, policy or procedure.[100] Where the automated system was designed in a manner that included irrelevant considerations, those could be a ground for challenge as well.[101] Thus, where the court’s jurisdiction is enlivened for judicial review, there is a range of grounds that can be utilised to challenge the decision successfully. However, the prospects for passing the jurisdictional hurdle are not promising under the ADJR Act and are yet to be ascertained for section 75(v).

IV ‘TOP DOWN’ CONSIDERATIONS – INTERNATIONAL PRINCIPLES AND CASE STUDIES

Technology is a global phenomenon. As such, we argue that consideration of international principles, as well as case law and practices in other jurisdictions, should inform the development of the Australian public law system so that it keeps pace with developing technologies. In other words, given that the influence and use of technology is not territorially bound, the legal response should likewise not be insular. This will require a paradigm shift for Australian public law which, whilst being influenced by UK law and that of other Commonwealth jurisdictions, has primarily developed internally. We also note that, while there may be some differences in the content of public law and institutional frameworks across jurisdictions, responses to the rise of technology are now often developed across multiple jurisdictions, and indeed transnationally and internationally. As such, shared concepts and responses that have developed internationally can and should be considered in Australia, although they may, of course, need to be adapted before adoption.

Drawing on international developments, this Part therefore first sets out some key principles arising from human rights considerations that may guide our response to the emergence of AI. Second, this Part discusses an important recent overseas case to demonstrate how Australian public law may more appropriately respond to our new digital decision-making environment.

The rapid development of technology for automation and AI has spawned an array of significant international work by human rights experts on appropriate regulatory approaches. At the United Nations (‘UN’) level, this includes a report on the implications of AI technologies for human rights in the information environment.[102] Similarly, at the regional European level, the Council of Europe convened a Committee of Experts on Human Rights Dimensions of Automated Data Processing and Different Forms of Artificial Intelligence, which published a study on the implications of the use of AI for human rights.[103] These reports emphasise that the tremendous potential of automated decision-making and AI needs to be appropriately balanced against their potential effects on human rights. They call on governments and the private sector to ensure that this occurs in compliance with human rights and fundamental freedoms. While the recommendations tend to be at a relatively high level, they appropriately emphasise that the human rights implications need to be considered throughout all phases of the design, development and ongoing deployment of algorithmic systems and by all actors involved.[104] The obligations of states include the creation of proper regulatory frameworks, awareness raising, research and evaluation.

The European Commission’s ‘White Paper on Artificial Intelligence’, released in February 2020, is of interest as it is extremely enthusiastic about the use of automation and AI in the public sector, stating that ‘[i]t is essential that public administrations, hospitals, utility and transport services, financial supervisors, and other areas of public interest rapidly begin to deploy products and services that rely on AI in their activities’.[105] It also, importantly, underlines the need to place human rights at the centre of international cooperation on AI:

The Commission is convinced that international cooperation on AI matters must be based on an approach that promotes the respect of fundamental rights, including human dignity, pluralism, inclusion, non-discrimination and protection of privacy and personal data ...[106]

A Case Study: Data, Transparency and Redress – Dutch Litigation 2020

A recent case from the Netherlands on the automated detection of welfare fraud – Nederlands Juristen Comité voor de Mensenrechten tegen Staat der Nederlanden (‘NJCM v Netherlands’)[107] – also known as the ‘SyRI case’ – provides a useful case study to compare to Australia’s Centrelink Robodebt scenario. The ‘SyRI’ case – named after the Dutch government’s automated system for detecting welfare fraud, Systeem Risico Indicatie (‘SyRI’, Risk Indication System) – also serves to illustrate much broader points about how persons affected by such systems obtain access to information and then seek redress for any harm arising from automation. The litigation, filed by a coalition of civil society groups and activists,[108] argued that the system violated data protection laws and human rights standards, in particular, article 8 of the ECHR, which guarantees the right to respect for private and family life.

1 Background

SyRI is a data analysis and risk calculation system developed by the Dutch Ministry of Social Affairs and Employment to predict an individual’s likelihood of engaging in benefits and tax fraud, and violations of labour laws. The system was authorised by Parliament as part of a package of welfare reforms enacted in 2014.[109] The legislation allowed the system to compile 17 categories of government data, including tax records, land registry files, and vehicle registrations. This was a targeted program, as indicated by the fact that it was used only in specific neighbourhoods of four cities with high numbers of low-income residents.[110]

The calculations made by the automated system used vast sources of data collected by various government agencies, including employment records, benefits information, personal debt reports, education, and housing history.[111] So, for instance, tax data was compared with information on who received state aid and support. Based on certain risk indicators, the software could then flag an increased risk of fraud.[112] If a risk report was generated, it had the effect that a person was deemed ‘investigative’ – that is, worthy of investigation – in connection with possible fraud, unlawful use and non-compliance with legislation.[113] The system was established to flag an individual as a fraud risk and then notify the relevant government agency, which had up to two years to open an investigation. Of particular concern was that the police were authorised under the scheme to receive risk reports at their request in the performance of their legal duties.[114] The system therefore had serious implications for affected persons.

2 The Status of SyRI and Legal Implications

Before discussing the findings on the right to privacy, it is interesting to note two matters which were in dispute between the parties: (i) the status of SyRI as an automated system and whether it used ‘deep learning’ and/or ‘big data’; and (ii) whether a risk report had a legal consequence for individuals. The latter issue is important for the application of article 22 of the GDPR, which establishes a right not to be subject to a decision based solely on automated processing, including profiling, unless an exception applies.

As to the first point, the plaintiffs argued that the deployment of SyRI constituted a large-scale, unstructured and unfocused automated linking of files relating to large groups of citizens, the secret processing of personal data, and the use of ‘deep learning’ and ‘big data’.[115] The State, in response, submitted that SyRI was not a deep learning application and was ‘not a tool to predict whether or not an individual could commit an offence’.[116] Problematically, the Court found that it could not test the accuracy of the position of the State because it had not made the risk model and the indicators that made up the risk model open to the public. Neither had it provided ‘objectively verifiable information’ to the Court in order to enable it to test the views of the State as to the status of the SyRI as a system.[117] Despite this information deficit, the Court was able to find that, contrary to the plaintiffs’ submissions, the system used structured (rather than unstructured) data collection,[118] but agreed with the plaintiffs that the system allowed for predictive analysis, deep learning and data mining.[119] It found it unnecessary to make a finding on whether the system constituted a form of ‘big data’. This is of interest to the Australian context and to the themes addressed in this article as it demonstrates the effect that opacity and secrecy can have on the ability of courts exercising review to make clear findings as to the status and operation of automated systems.

On the second point, the plaintiffs argued that the risk report could be regarded as an automated individual decision with ‘legal effect’ (or a decision which has a significant effect to those involved) as provided under article 22 of the GDPR.[120] This argument is very relevant to the Australian context in light of Federal Court authority (discussed in Part III(D) above) that an automated decision is not a ‘decision’ for the purpose of the ADJR Act.[121] It should be noted that the Australian Human Rights Commission, in its Discussion Paper on Human Rights and Technology, has defined ‘decision’ in the context of AI-informed decision-making to be ‘any decision that has a legal effect, or similar significant effect, for an individual’, mirroring the wording of article 22 of the GDPR.[122] Interestingly, the District Court of The Hague disagreed with the plaintiffs’ submissions on this point and held that the use of SyRI ‘[was] not aimed at having legal effect’.[123] However, it held that a risk report generated by the system ‘does have a similarly significant effect on the private life of the person to whom the risk report pertains’,[124] thereby potentially engaging article 22 of the GDPR. While the Court left open the question of whether the definition of ‘automated individual decision-making’ under the GDPR and any relevant exception to the prohibition of profiling were met in this case,[125] it noted that the effects of the report on the individuals concerned were a ‘significant factor’ in its assessment of whether the SyRI legislation complies with article 8 of the ECHR.[126] This illustrates that, under European law, there are multiple layers of regulation against which automated decision-making can be measured, which include the data protection provisions in the GDPR as well as international, European and domestic human rights frameworks.[127]

3 Arguments in Relation to the Right to Privacy

The substantive arguments as to the right to privacy were complex and linked to the above discussion as to the status and effect of the SyRI system. For the purposes of this article, the arguments will be summarised briefly here so as to enable us to focus on the ultimate court decision. In essence, the plaintiffs submitted that the SyRI system represented a serious interference in the private life of citizens and that the State of the Netherlands had not demonstrated that it was necessary to use such a far-reaching instrument as SyRI for maintaining the social security system.[128] They also argued that the SyRI legislation did not strike the ‘fair balance’ required to justify the interference with article 8 of the ECHR.[129]

In response to these arguments, the Netherlands State submitted that the SyRI legislation served a legitimate purpose, was based on objective criteria and contained adequate procedural and material safeguards.[130]

4 The Court Decision

The District Court of The Hague found that the use of the SyRI risk calculation system was unlawful as it violated the right to privacy under article 8 of the ECHR. In its judgment, the Court recognised the benefits that technology can provide to public administration:

New technologies – including digital options to link files and analyse data with the help of algorithms – offer (more) possibilities for the government to exchange data among its authorities in the context of their statutory duty to prevent and combat fraud. The court shares the position of the State that those new technological possibilities to prevent and combat fraud should be used. The court is of the opinion that the SyRI legislation is in the interest of economic wellbeing and thereby serves a legitimate purpose as adequate verification as regards the accuracy and completeness of data based on which citizens are awarded entitlements is vitally important.[131]

However, the Court noted that the development of new technologies also made the right to the protection of personal data and privacy increasingly important.[132]

In a human rights analysis of these conflicting concerns, the Court found that the SyRI legislation was disproportionate to the aim it sought to achieve.[133] It held that the SyRI legislation did not comply with the ‘fair balance’ that must exist under article 8(2) of the ECHR between the public interest in detecting welfare fraud and the violation of the private life that the legislation produces.[134] In doing so, the Court took into account the fundamental principles of data protection under European Union law,[135] in particular the principles of transparency,[136] the purpose limitation principle (that data collection and processing must be directly linked to specific purposes)[137] and the principle of data minimisation (that processing of personal data is limited to what is necessary for the relevant purposes).[138] It particularly highlighted the principle of transparency as the ‘guiding principle of data protection’ that ‘underlies and is enshrined in’ the EU Charter on Fundamental Freedoms and the GDPR. The Court held that

in view of Article 8 paragraph 2 of the ECHR this principle is insufficiently observed in the SyRI legislation. The court finds that the SyRI legislation in no way provides information on the factual data that can demonstrate the presence of a certain circumstance, in other words which objective factual data can justifiably lead to the conclusion that there is an increased risk.[139]

The Court also found that the legislation governing the use of SyRI was insufficiently clear and verifiable, and declared the relevant legislative provisions to have no binding effect as being contrary to article 8(2) of the ECHR.[140]

The Government of the Netherlands decided not to appeal this decision. It maintained that the use of new technological tools, such as data analysis and algorithms, is legitimate, and announced that the Ministry of Social Affairs and Employment is investigating how these new technologies can be used to combat fraud effectively and efficiently while ensuring sufficient privacy protection.[141]

Before we compare this case example to the Australian litigation on Robodebt, we note that NJCM v Netherlands illustrates the three themes discussed in this article (data collection – transparency – redress). The data protection issues arose because the purposes for collecting the data were defined intentionally broadly[142] and individual citizens were not informed if the software classified them as a ‘high-risk citizen’.[143] In relation to freedom of information, civil society organisations sought information about the criteria the software used to assess whether there was an increased risk of welfare abuse. According to Algorithm Watch, an organisation called Bij Voorbaat Verdacht (‘Suspected from the Outset’) made a freedom of information request. The Ministry’s answer was as follows:

The risk model is a collection of one or more sets of related risk indicators that may be combined to assess the risk that certain natural or legal persons are not acting in accordance with applicable law. If one were to disclose what data and connections the Inspectie SZW is looking for, (potential) lawbreakers would know exactly on which stored data they would have to concentrate.[144]

Denial of access to information was also raised as a concern by the UN Special Rapporteur on Extreme Poverty and Human Rights, Professor Philip Alston, who appeared as amicus curiae in NJCM v Netherlands.[145] In his amicus brief, the UN Special Rapporteur noted that the Dutch Government’s position runs counter to important principles such as the rule of law and underlined that laws are made public

in order for citizens to know what is expected of them, in order for laws to be subject to public scrutiny and in order to ascertain, including via the judicial process, whether laws are properly applied and enforced and in line with higher principles, including international human rights law.[146]

These themes resonate in the Australian public law context, where individuals and journalists have had great difficulty obtaining information as to the operation of the Robodebt program, particularly through freedom of information requests.[147]

B Comparison to Robodebt

The Australian Government’s Online Compliance Intervention (‘OCI’) program, commonly referred to as ‘Robodebt’, involves an automated online method for raising and recovering social security overpayment debts. It takes the ATO’s data-matching information about the total amount of employment income earned, and the period over which it was earned, and applies the resulting average to every separate fortnightly rate calculation period for working age payments.[148] Under the former process, Centrelink officers used risk profiling to select about 7% of income discrepancies for manual review. Controversially, from July 2016 the online compliance scheme automatically issued letters to targeted welfare recipients asserting that they owed a debt in every case where the recipient could not disprove the possible overpayment, effectively shifting the onus of proof from the department to the individual.[149]
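
To make the contested arithmetic concrete, the following sketch illustrates how fortnightly income averaging can generate an apparent overpayment for a recipient who earned all of their annual income in a short period of work and legitimately received payments while unemployed. The payment rate, income-free area and taper used here are invented for illustration; this is not the actual social security rate calculation, nor the department’s code.

```python
# Hypothetical illustration of the income-averaging flaw underlying Robodebt.
# All figures and the entitlement rule are assumptions, not the statutory test.

FORTNIGHTS_PER_YEAR = 26
FULL_RATE = 550.0          # assumed full fortnightly payment rate
INCOME_FREE_AREA = 300.0   # assumed income permitted per fortnight
TAPER = 0.5                # assumed reduction per dollar above the free area

def entitlement(fortnightly_income: float) -> float:
    """Payment for one fortnight under a simple illustrative income test."""
    excess = max(0.0, fortnightly_income - INCOME_FREE_AREA)
    return max(0.0, FULL_RATE - TAPER * excess)

# Actual earnings: $26,000 earned across 6 fortnights of full-time work,
# with no income in the remaining 20 fortnights on income support.
actual = [26000 / 6] * 6 + [0.0] * 20

# Averaged earnings, as the OCI method assumed: the annual ATO total
# spread evenly across all 26 fortnights.
averaged = [26000 / FORTNIGHTS_PER_YEAR] * 26

paid = sum(entitlement(f) for f in actual)        # correct entitlement
assessed = sum(entitlement(f) for f in averaged)  # averaging-based figure

print(f"Entitlement on actual fortnightly income:   ${paid:,.2f}")
print(f"Entitlement on averaged fortnightly income: ${assessed:,.2f}")
print(f"Apparent (spurious) 'overpayment':          ${paid - assessed:,.2f}")
```

On these assumed figures, the recipient’s correct entitlement across the year was $11,000, yet averaging the same annual income over all 26 fortnights produces an assessed entitlement of only $5,200, manufacturing a spurious ‘debt’ of $5,800 even though every payment was properly made.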

The central legal problem is that Robodebt subjected people to an automated debt-raising and collection system that utilised algorithms with high error rates.[150] The algorithm resulted in numerous miscalculated, and in some instances completely false,[151] debt claims against welfare recipients. Flaws in the design of the system meant that overpayments were in many cases wrongly identified; the use of technology therefore led to systematic errors in calculation, amplified by the scale of implementation to hundreds of thousands of alleged debtors.

Importantly, the scheme had a disproportionate impact on vulnerable groups, such as Indigenous persons, aged persons and those with a disability, who are generally more dependent on welfare support.[152] A Senate Committee inquiry and an Ombudsman investigation found that these large-scale incorrect calculations had grave repercussions for vulnerable low-socioeconomic groups, including individuals experiencing severe mental health issues, with reports of suicide in the affected population.[153] The Australian Robodebt example illustrates the issues of data collection and transparency: debtors were unable to access information about the methodology by which their debt was calculated (ie, by the inaccurate method of fortnightly income averaging), and there was a lack of transparency about the error rates of that method. Alleged debtors, who disproportionately belonged to already disadvantaged groups, also found it difficult to challenge the decisions because they lacked an understanding of the way the automated system operated.[154]

Following the critical parliamentary committee and Ombudsman reports, a debtor, Deanna Amato, supported by Victoria Legal Aid, ran a test case in the Federal Court to challenge the validity of her debt decision. In the course of the proceedings, the Commonwealth conceded that the debt was unlawful, but the Court did not have the opportunity to make a comprehensive ruling because the matter was settled prior to the hearing. Through consent orders, the Court declared in Amato v Commonwealth[155] that the automated Robodebt decisions utilising income averaging alone were irrational[156] and thus unlawful. The consent order on the decisions’ unlawfulness proceeded on the narrow basis that the methodology of the decision-making was irrational, which means that future automated decisions made on the basis of different, more reliable data points may be held to be valid. The litigation did not raise the bar with regard to procedural safeguards, such as a requirement to provide more transparency or explanation of the decision-making, which limits the opportunities of future debtors to seek redress.

In addition to this case, a class action was lodged in the Federal Court in November 2019 by Gordon Legal on behalf of persons affected by Robodebt: Prygodicz v Commonwealth.[157] The applicants argue, amongst other things, that the Robodebt process was not authorised by legislation,[158] that the Commonwealth has been unjustly enriched by the overpayment debts,[159] and that the Commonwealth breached its duty of care to the applicants and group members by using the calculations and outputs of the Robodebt system as the basis for its debt-raising and recovery activities.[160] On that basis, the applicants claim declarations that the debts were raised without power, restitution of the amounts by which the Commonwealth has been unjustly enriched, and damages for negligence.[161]

C Analysis

The Dutch NJCM v Netherlands case relating to the SyRI system highlights the advantage of using a human rights lens to seek review of automated systems, because such review examines the systemic issues arising from the operation of the risk assessment tool. The incompatibility with the right to privacy led to the implementing legislation itself being declared invalid. In contrast, the tendency in Australian public law litigation is to examine whether a particular decision affecting a particular individual has been made unlawfully. The Dutch assessment framework, which encompasses the human rights under the ECHR as well as the European Union law protection offered by the EU Charter on Fundamental Freedoms and the GDPR, in particular the data protection principles of transparency, purpose limitation and data minimisation, also provides greater recourse to individuals to challenge automated decisions on broader rights-protective grounds.

Of particular note here are some of the benefits provided by key interpretations of fundamental rights under the ECHR. First, the European Court of Human Rights has recognised that the right to respect for private life in article 8 can give rise to a positive obligation on states:

Although the object of Art 8 is essentially that of protecting the individual against arbitrary interference by the public authorities, it does not merely compel the State to abstain from such interference: in addition to this primarily negative undertaking, there may be positive obligations inherent in an effective respect for private life.[162]

Given the pervasiveness of AI systems, this may be important in the context of automation: state authorities may need to take positive action to protect the privacy of individuals, including in relations between private actors, rather than merely abstaining from interference.

We also underline the link between data collection and its use, and the legal or other similarly significant effects on the individual. As the Dutch Court made clear, the effect of profiling on the right to privacy, and the lack of observance of fundamental data protection principles, were significant factors in the assessment of whether the scheme was a necessary and proportionate interference with the privacy of the welfare recipients under scrutiny. As van der Sloot notes, the way in which article 8 of the ECHR has been interpreted by the Strasbourg Court provides a particularly suitable vehicle for protecting fundamental rights in the context of data processing:

Article 8 ECHR has been transformed from a classic privacy right to a personality right, providing protection to the personal development of individuals. Apart from its theoretical significance, this shift might prove indispensable in the age of Big Data, as personality rights protect a different type of interest, which is far more easy to substantiate in the new technological paradigm than those associated with the right to privacy.[163]

While not directly addressed in the Dutch case, the existence of a specific right to the protection of personal data under article 8 of the EU Charter of Fundamental Rights, in addition to the right to respect for private life under article 7, further reinforces the protection of individual data rights in the AI and Big Data era. With the exception of the general data privacy principles available under the Privacy Act, Australia lacks a similarly developed legal framework.

We also note that there are pitfalls in using negligence to challenge governmental decisions, as is the approach in the current class action in Prygodicz v Commonwealth. On the difference between negligence and human rights, Lord Bingham has noted that the Human Rights Act 1998 (UK) ‘is not a tort statute’, and its objects are ‘different and broader’.[164] Donal Nolan has also written compellingly on the difference between negligence and human rights in the context of the ECHR. He notes that the approach to causation is more relaxed under the Convention than under domestic negligence law[165] and that the difference between the two regimes is marked in relation to damages:

The disparities between the Convention legal order and the domestic law of negligence are even clearer when it comes to the question of damage. While damage recognised as actionable is a prerequisite of a negligence action, no such requirement exists in the case of an alleged violation of a Convention right. Furthermore, where the victim of a human rights violation seeks compensatory damages, recovery will be permitted for forms of harm which are not in themselves actionable in negligence, such as distress, anxiety, inconvenience and feelings of injustice, helplessness or humiliation.[166]

Although negligence claims relating to government decisions have been successful in securing redress in other contexts, such as refugee policy,[167] there are significant obstacles to establishing a duty of care and the other required elements of a negligence suit against a governmental authority.[168] Apart from the difficulty of demonstrating the requisite damage, other obstacles pertain to establishing that a relevant duty of care existed, which under the approach adopted by the High Court requires consideration of all ‘salient features’[169] of the case. The salient features relevantly include the existence of conflicting duties on the defendant arising from other principles of law or statute, in particular those arising from the statute which governs the public body’s responsibilities and exercise of its powers; and the coherence of a duty in negligence with other legal principles in common law and statute, including public law mechanisms for the review of decisions through tribunal and appeals processes.[170] There may also be problems with proving fault where a government entity exercised reasonable care in the design, commissioning or implementation of an automated system that then unexpectedly displayed shortcomings.

V RECOMMENDATIONS, REFORM AND CONCLUSIONS

In light of the above, and the nature of technology and automation in the government sphere, we submit there is a pressing need to revitalise public law to establish a more integrated, coherent system of principles and accountability from data design to the review stage. In setting out these recommendations, we emphasise that the significant knowledge, power and resource imbalance between state authorities and persons affected by automated government decision-making must be borne in mind in adopting reforms in this area.

A General Reforms

We recommend the establishment of a whole-of-government guidance framework on the design, implementation and auditing of automated decision-making in government.[171] More specifically, the discussion of international principles in Part IV above demonstrates the advantages of strengthening individual and group rights protections in relation to automated decisions. Therefore, we believe consideration should be given to introducing a complaint handling mechanism for automated decision-making which is similar in nature to the complaint handling mechanism provided by the Australian Human Rights Commission under federal discrimination legislation.

We also recommend that Australia’s human rights obligations be considered as part of the administrative process, including automated processes. We note in this context that the 2009 National Human Rights Consultation Report (‘Brennan Report’) recommended that the ADJR Act be amended in such a way as to make Australia’s international human rights obligations, or a consolidated list of those obligations, a relevant consideration in government decision-making.[172] This recommendation should be considered once again.

B Privacy Reforms

The Federal Government is currently considering reforms to Australia’s privacy laws. These are primarily intended to respond to recommendations arising from the Australian Competition and Consumer Commission’s (‘ACCC’) Digital Platforms Inquiry.[173] However, there is also a need to include more specific protections in relation to automated decision-making, such as those contained in newer international data protection frameworks, including the GDPR[174] and Council of Europe (‘CoE’) Convention 108+.[175]

As discussed above, under article 22 of the GDPR, a data subject has the right not to be subject to solely automated decision-making, including profiling, which produces legal or similarly significant effects for the data subject. Exceptions to this right apply, for example, where the automated decision-making is authorised by EU or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or where it is based on the data subject’s explicit consent. Similarly, CoE Convention 108+ recognises in its new article 9(1)(a) an individual right not to be subject to a purely automated decision significantly affecting the individual without having his or her views taken into consideration. These provisions apply to automated decision-making by public bodies as well as private entities. Rather than erecting absolute barriers to automated decision-making, they make it permissible where sufficient safeguards are in place to protect the fundamental rights of persons affected by it. Given the absence of a comparable human rights framework in Australia, the challenge will be to determine how protections with broadly similar effect can be enacted in this jurisdiction.

The data practices used to inform AI-based systems also raise other concerns, including for the definition of ‘personal information’ in the Privacy Act. In Privacy Commissioner v Telstra Corporation Limited,[176] the Full Court of the Federal Court adopted a narrow approach to the question of when information is about an individual, which has the potential to exclude certain metadata that, while generated by or in relation to an individual, is of a largely technical nature.[177] This interpretation appears to underestimate the extent to which such information can nonetheless reveal personal characteristics or attributes, especially when combined with other information to create a profile, and the significant effects that can follow from decisions made on the basis of such a profile. It is welcome that, in its response to recommendations arising from the ACCC Digital Platforms Inquiry,[178] the Federal Government has committed to reviewing the definition of ‘personal information’ in the Privacy Act with a view to capturing technical data and other online identifiers.

C Transparency Reforms

Should the Government decide to amend the Privacy Act to incorporate more protections in respect of automated decision-making as outlined above, the inclusion of a right equivalent to that in article 15(1)(h) of the GDPR (discussed above in Part III(C)(3)) would substantially improve the transparency of AI-based government decision-making. Otherwise, it is recommended that the issues of ‘explainability’ and transparency be dealt with as outlined below.

It is recommended that the provisions in the ADJR Act and AAT Act that provide rights to request reasons for decisions be amended so that decision-making processes are, as far as possible, designed to be capable of explanation in a statement of reasons.

The ADJR Act could be amended to expand section 13 to explicitly require automated systems to be designed in a manner that captures the reasons for their decisions. In addition, where machine learning is utilised to automate decisions, the ADJR Act could require accurate documentation of the decision logic, including the principles behind the machine learning model and its training and testing processes, and require that a statement of reasons be logged for all predictions or decisions at the point in time that they are made. An equivalent provision could be included in section 25 of the AAT Act.
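
A minimal sketch of what such a requirement might look like in practice is set out below. The eligibility rule, record fields and log format are assumptions for illustration only; the point is that the statement of reasons is generated and persisted at the moment the decision is made, rather than reconstructed afterwards.

```python
# Hypothetical sketch of capturing a statement of reasons at decision time,
# as the proposed section 13 amendment would require. The rule, fields and
# log format are illustrative assumptions, not an actual agency system.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision_id: str
    made_at: str
    inputs: dict          # the material facts relied upon
    outcome: str
    reasons: list         # findings linking facts, rules and outcome
    model_version: str    # identifies the decision logic or trained model

def decide_eligibility(decision_id: str, inputs: dict) -> DecisionRecord:
    """Apply a simple illustrative rule, recording reasons as it runs."""
    reasons = []
    income = inputs["fortnightly_income"]
    threshold = 1000.0  # assumed income limit for illustration
    if income <= threshold:
        outcome = "eligible"
        reasons.append(f"Declared fortnightly income ${income:.2f} does not "
                       f"exceed the ${threshold:.2f} limit.")
    else:
        outcome = "ineligible"
        reasons.append(f"Declared fortnightly income ${income:.2f} exceeds "
                       f"the ${threshold:.2f} limit.")
    record = DecisionRecord(
        decision_id=decision_id,
        made_at=datetime.now(timezone.utc).isoformat(),
        inputs=inputs,
        outcome=outcome,
        reasons=reasons,
        model_version="rules-v1.0",
    )
    # Persist the statement of reasons at the point the decision is made.
    with open("decision_log.jsonl", "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")
    return record

record = decide_eligibility("D-0001", {"fortnightly_income": 1250.0})
print(record.outcome, record.reasons)
```

Logging each record as it is produced means that a later request for a statement of reasons under section 13 can be answered from the contemporaneous record, rather than from an after-the-fact rationalisation of what the system probably did.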

These proposals are consistent with the 2004 Administrative Review Council’s recommendations that, in the interests of fairness, efficiency and transparency, ‘[e]xpert systems should comply with administrative law disclosure requirements – in particular, requirements associated with ... statements of reasons’.[179] We support the Council’s recommendation that a clear explanation for the reasons of a decision should be provided when people are notified of the decision – regardless of whether a person has formally asked for a statement of reasons.[180]

As a further refinement of the statement of reasons for AI-based decisions, we support the recommendation of the Australian Human Rights Commission that two forms of reasons be provided:

• a non-technical explanation of the AI-informed decision, which would be comprehensible to a lay person; and

• when necessary or upon request, a technical explanation of the AI-informed decision that can be assessed and validated by a person with relevant technical expertise.[181]

The requirement for the Australian Government to provide an explanation of AI-informed decision-making that is comprehensible to a lay person goes towards the fundamental aim of ensuring transparency in government decision-making. In many cases, this will be all that a person affected by a decision will seek. However, where requested or otherwise necessary, a technical explanation of the system that would allow a technical expert to verify and audit the complex coding and algorithms should also be made available, to ensure that an administrative decision, and the basis on which it has been made, can be properly scrutinised.[182] Overall, this demand for transparency is likely to create an obligation on the government to use software that furnishes relevant evidence to support evaluation and auditing, thereby allowing for technical accountability.[183]
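
The following sketch illustrates how a system might emit both tiers of explanation from a single decision. The simple linear scoring ‘model’, its feature names and its weights are hypothetical; real AI-informed systems would require more sophisticated explanation techniques, but the two-tier structure would be the same.

```python
# Hypothetical sketch of the two-tier explanation recommended by the
# Commission: a plain-language summary for the affected person plus a
# technical record for expert audit. The linear "model" is assumed.

def explain_decision(features: dict, weights: dict, bias: float, cutoff: float):
    contributions = {k: features[k] * weights[k] for k in features}
    score = bias + sum(contributions.values())
    outcome = "flagged for review" if score >= cutoff else "not flagged"

    # Lay explanation: the single most influential factor, in plain words.
    top_factor = max(contributions, key=lambda k: abs(contributions[k]))
    lay = (f"Your case was {outcome}. The factor that most affected this "
           f"result was '{top_factor.replace('_', ' ')}'.")

    # Technical explanation: everything an expert needs to validate the result.
    technical = {
        "model": "linear-v0.1 (assumed)",
        "features": features,
        "weights": weights,
        "bias": bias,
        "contributions": contributions,
        "score": score,
        "cutoff": cutoff,
        "outcome": outcome,
    }
    return lay, technical

lay, technical = explain_decision(
    features={"income_discrepancy": 1.0, "payment_duration_years": 3.0},
    weights={"income_discrepancy": 0.8, "payment_duration_years": 0.1},
    bias=-0.5, cutoff=0.5,
)
print(lay)
print(technical)
```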

We also support the thrust of the proposal made by the Australian Human Rights Commission that, ‘[w]here an AI-informed decision-making system does not produce reasonable explanations for its decisions, that system should not be deployed in any context where decisions could infringe the human rights of individuals’.[184] However, we note that the question of whether a decision has the potential to infringe on human rights can be difficult to answer in the absence of a domestically enshrined and interpreted human rights catalogue.

In the case of the FOI Act, there are three key amendments which would enhance its ability to provide transparency in relation to AI-based decision-making:

1. Extending its scope so it applies to ‘information’ rather than ‘documents’ as does the Freedom of Information Act 2000 (UK).

2. Providing an exception to the trade secrets/commercial information exemption in section 43 for information that is necessary to shed light on algorithms used to make decisions that affect individuals.

3. Including in its proactive disclosure requirements a general description of automated decision-making technology that an agency uses to make decisions about persons.

There would also be value in amending the Archives Act 1983 (Cth) so that it deals specifically with information generated in the context of AI-based decision-making, including aspects such as the retention of training data for decision-making algorithms that utilise machine learning.

D Judicial Review Reforms

In relation to the gaps in judicial review, there are several options:

• The AAT Act and ADJR Act could be amended to make it clear that automated decisions fall within the scope of those Acts; or

• The enabling legislation could make it clear that recourse to the courts is available.

The first option of amending the AAT Act and ADJR Act is preferable, as it is a one-step solution and does not require each piece of enabling legislation to be amended. This reform should amend the definition of a ‘decision’ in the ADJR Act to clarify that it includes a decision wholly or partly made by an automated system.

E Conclusions

In conclusion, technological developments present an abundance of opportunities to the government to streamline and enhance the consistency and efficiency of service delivery and decision-making. Yet given the government’s significant coercive and information-gathering powers, there is a need to ensure that new technologies align with the values underpinning public law.

We now return to the question posed in the Introduction: Is Australian public law ‘fit for purpose’ in a technological era? Our general conclusion is that despite the absence of explicit human rights protections in Australian domestic law, by and large, Australians can rely on existing institutional structures to challenge many government decisions. This is because Australia has a large array of effective administrative law institutions (such as comprehensive state, territory, and federal merits review bodies, ombudsman offices and state anti-corruption agencies) that form a strong counterweight to executive power. These oversight bodies are complemented by parliamentary committees, which are effective mechanisms for the scrutiny of government action.

Nevertheless, as technology and governmental practice have outpaced the law, this article has identified a range of legislative and operational gaps in the public law frameworks governing privacy, freedom of information, and judicial review. It has recommended ways in which Australian public law should be revitalised and enhanced to become more value-compliant and consistent with emerging international best practice standards. These reforms will ensure that the development and use of new technologies in government abide by the rule of law and are consistent with the fundamental public law principles of lawfulness, fairness, rationality, and transparency. Ensuring observance of these established principles will not be an undue obstacle to greater efficiency in administrative decision-making but will, to the contrary, be critical to engendering the public confidence and trust necessary for the successful adoption and acceptance of new technologies. A consistent, considered approach to AI that is compliant with a reinvigorated public law framework will enable the Australian Government, and Australians, to reap the benefits of new technologies while minimising their attendant risks and protecting the individual rights and freedoms that are fundamental to our democracy.


[*] Senior Lecturer, Faculty of Law and Associate, Castan Centre for Human Rights Law, Monash University.

[**] Senior Lecturer, Faculty of Law and Deputy Director, Castan Centre for Human Rights Law, Monash University.

[*** ] Professor, Faculty of Law and Associate, Castan Centre for Human Rights Law, Monash University.

[****] Associate Professor, Faculty of Law and Associate, Castan Centre for Human Rights Law, Monash University.

[1] See Terry Carney, ‘Vulnerability: False Hope for Vulnerable Social Security Clients?’ [2018] UNSWLawJl 27; (2018) 41(3) University of New South Wales Law Journal 783; Terry Carney, ‘Robo-Debt Illegality: The Seven Veils of Failed Guarantees of the Rule of Law?’ (2019) 44(1) Alternative Law Journal 4.

[2] Ben Saul, ‘Australian Administrative Law: The Human Rights Dimension’ in Matthew Groves and HP Lee (eds), Australian Administrative Law: Fundamentals, Principles and Doctrines (Cambridge University Press, 2007) 50, 51.

[3] See, eg, Institute of Electrical and Electronics Engineers, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (Report, version 2) 20–32; Antoinette Price, ‘First International Standards Committee for Entire AI Ecosystem’ [2018] (3) IEC e-Tech 33; Commission Nationale Informatique & Libertés, How Can Humans Keep the Upper Hand? The Ethical Matters Raised by Algorithms and Artificial Intelligence (Report, December 2017); Association for Computing Machinery US Public Policy Council, ‘Statement on Algorithmic Transparency and Accountability’ (Statement, 12 January 2017); State-of-the-Art Report: Algorithmic Decision-Making (Report, December 2018).

[4] See, eg, Corinne Cath, ‘Governing Artificial Intelligence: Ethical, Legal and Technical Opportunities and Challenges’ (2018) 376(2133) Philosophical Transactions of the Royal Society A 20180080: 1–8; Michael Guihot, Anne F Matthew and Nicolas P Suzor, ‘Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence’ (2017) 20(2) Vanderbilt Journal of Entertainment and Technology Law 385; John Frank Weaver, ‘Regulation of Artificial Intelligence in the United States’ in Woodrow Barfield and Ugo Pagallo (eds), Research Handbook on the Law of Artificial Intelligence (Edward Elgar Publishing, 2018) 155.

[5] Paul Daly, ‘Administrative Law: A Values-Based Approach’ in John Bell et al (eds), Public Law Adjudication in Common Law Systems: Process and Substance (Hart Publishing, 2016) 23, 26.

[6] The ‘new administrative law’ package included the Administrative Decisions (Judicial Review) Act 1977 (Cth) (‘ADJR Act’), which currently provides the main mechanism for access to judicial review, and also the Freedom of Information Act 1982 (Cth) (‘FOI Act’) and the Privacy Act 1988 (Cth): see Australian Law Reform Commission, ‘Review of Secrecy Laws’ (Issues Paper No 34, December 2008) 19 [1.19]. The other key elements were the Administrative Appeals Tribunal Act 1975 (Cth) (‘AAT Act’) and the Ombudsman Act 1976 (Cth).

[7] See, eg, Yee-Fui Ng and Maria O’Sullivan, ‘Deliberation and Automation: When Is a Decision a “Decision”?’ (2019) 26(1) Australian Journal of Administrative Law 21; Justice Melissa Perry and Alexander Smith, ‘iDecide: The Legal Implications of Automated Decision-Making’ [2014] Federal Judicial Scholarship 17.

[8] See, eg, Monika Zalnieriute, Lyria Bennett Moses and George Williams, ‘The Rule of Law and Automation of Government Decision-Making’ (2019) 82(3) Modern Law Review 425; Cary Coglianese and David Lehr, ‘Regulating by Robot: Administrative Decision Making in the Machine-Learning Era’ (2017) 105(5) Georgetown Law Journal 1147, 1157; Danielle Keats Citron, ‘Technological Due Process’ (2008) 85(6) Washington University Law Review 1249.

[9] Nederlands Juristen Comité voor de Mensenrechten tegen Staat der Nederlanden [Netherlands Jurists Committee of Human Rights v State of the Netherlands], Rechtbank Den Haag [Hague District Court], C/09/550982/HA ZA 18-388 (5 February 2020) (‘NJCM v Netherlands’) <https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:1878> (in English).

[10] Robert French, ‘Rationality and Reason in Administrative Law: Would a Roll of the Dice be Just as Good?’ (Annual Lecture, Australian Academy of Law, 29 November 2017) (‘Rationality and Reason in Administrative Law’); Chief Justice Robert French, ‘Public Law: An Australian Perspective’ (Speech, Scottish Public Law Group, 6 July 2012) 16.

[11] French, ‘Rationality and Reason in Administrative Law’ (n 10) 3.

[12] Justice RS French, ‘Judicial Review Rights’ [2001] AIAdminLawF 4; [2001] (28) Australian Institute of Administrative Law Forum 33.

[13] Administrative Review Council, Automated Assistance in Administrative Decision Making: Report to the Attorney-General (Report No 46, November 2004) 3.

[14] See, eg, academic commentary such as that of Peter Cane, ‘Theory and Values in Public Law’ in Paul Craig and Richard Rawlings (eds), Law and Administration in Europe: Essays in Honour of Carol Harlow (Oxford University Press, 2003) 3; Martin Loughlin, ‘Theory and Values in Public Law: An Interpretation’ [2005] (Spring) Public Law 48. According to UK commentator, Paul Daly, ‘administrative law in this sense is best understood by reference to several core values: the rule of law, good administration, democracy and separation of powers’: Daly (n 5) 23.

[15] Chief Justice James Allsop, ‘Values in Public Law’ (Speech, James Spigelman Oration, 27 October 2015) [20] <www.fedcourt.gov.au/digital-law-library/judges-speeches/chief-justice-allsop/allsop-cj-20151027> (emphasis in original).

[16] So, for instance, Chief Justice Allsop notes, at ibid, that

[u]ncertainty of rule or outcome and inequality in inconsistencies of the exercise of power are aspects of unfairness or arbitrariness. The necessary humanity required in the exercise of power reflects a rejection of unfairness, and a need to have a perspective in examining the exercise of power of that of the subject, and not just from that of the wielder, of the power.

[17] Kelson v Forward [1995] FCA 1584; (1995) 60 FCR 39, 66.

[18] See Matthew Groves, ‘Administrative Justice in Australian Administrative Law’ [2011] AIAdminLawF 8; [2011] (66) Australian Institute of Administrative Law Forum 18, 18: ‘The precise meaning or content of administrative justice are arguably not yet settled’. See also Robin Creyke and John McMillan, ‘Administrative Justice: The Concept Emerges’ in Robin Creyke and John McMillan (eds), Administrative Justice: The Core and the Fringe (Australian Institute of Administrative Law, 2000) 1, 3: ‘Those seeking a definition of “administrative justice” will ... need to recognise that the essence of the concept is tempered by conflicting (and legitimate) interests’.

[19] See Groves, ‘Administrative Justice in Australian Administrative Law’ (n 18) 20–1. See also French, ‘Judicial Review Rights’ (n 12).

[20] French, ‘Judicial Review Rights’ (n 12) 33.

[21] See, eg, Joseph Raz, who explains that

‘the rule of law’ ... has two aspects: (1) that people should be ruled by the law and obey it, and (2) that the law should be such that people will be able to be guided by it. ... [I]f the law is to be obeyed it must be capable of guiding the behaviour of its subjects. It must be such that they can find out what it is and act on it.

Joseph Raz, The Authority of Law: Essays on Law and Morality (Oxford University Press, 2nd ed, 2009) 213–14 (emphasis omitted) (‘The Authority of Law’), originally published as Joseph Raz, ‘The Rule of Law and Its Virtue’ (1977) 93(2) Law Quarterly Review 195, 198 (emphasis omitted). See also Tom Bingham, The Rule of Law (Penguin Books, 2011) 37; Zalnieriute, Bennett Moses and Williams (n 8).

[22] Raz, The Authority of Law (n 21) 216–17.

[23] Bingham (n 21) 66.

[24] Ibid 110.

[25] See, eg, Mark Latonero, Governing Artificial Intelligence: Upholding Human Rights & Dignity (Report, 10 October 2018); Privacy International and Article 19, Privacy and Freedom of Expression in the Age of Artificial Intelligence (Report, April 2018); Committee of Experts on Internet Intermediaries (MSI-NET), ‘Algorithms and Human Rights: Study on the Human Rights Dimensions of Automated Data Processing Techniques and Possible Regulatory Implications’ (Study No DGI(2017)12, Council of Europe, March 2018) (‘Algorithms and Human Rights’).

[26] Convention for the Protection of Human Rights and Fundamental Freedoms, opened for signature 4 November 1950, 213 UNTS 221 (entered into force 3 September 1953).

[27] The main report was the Commonwealth Administrative Review Committee, Parliament of Australia, Commonwealth Administrative Review Committee: Report (Parliamentary Paper No 144, August 1971) (‘Kerr Committee Report’). This was supplemented by the Bland Committee which produced: Committee on Administrative Discretions, Parliament of Australia, Committee on Administrative Discretions: Interim Report (Parliamentary Paper No 53, January 1973) and Committee on Administrative Discretions, Parliament of Australia, Committee on Administrative Discretions: Final Report (Parliamentary Paper No 316, October 1973); Ellicott Committee: Committee of Review, Parliament of Australia, Prerogative Writ Procedures: Report of Committee of Review (Parliamentary Paper No 56, May 1973).

[28] Kerr Committee Report (n 27) 112 [389].

[29] Commonwealth of Australia, Administrative Review Council Annual Report 1976–77 (Report, 1977) Foreword, cited in Justice Duncan Kerr, ‘Reviewing the Reviewer: The Administrative Appeals Tribunal, Administrative Review Council and the Road Ahead’ [2015] Federal Judicial Scholarship 16 <http://www5.austlii.edu.au/au/journals/FedJSchol/2015/16.html> .

[30] Administrative Review Council, ‘Automated Assistance in Administrative Decision Making’ (Issues Paper, 2003) 11–15.

[31] Zalnieriute, Bennett Moses and Williams (n 8).

[32] Perry and Smith (n 7).

[33] Paul Gowder, ‘Transformative Legal Technology and the Rule of Law’ (2018) 68 (Supplement 1) University of Toronto Law Journal 82.

[34] An algorithm may be defined as ‘an unambiguous procedure to solve a problem or a class of problems. It is typically composed of a set of instructions or rules that take some input data and return outputs’: Claude Castelluccia and Daniel Le Métayer, ‘Understanding Algorithmic Decision-Making: Opportunities and Challenges’ (Study No PE 624.261, European Parliament, Panel for the Future of Science and Technology, March 2019) 3 [2.1].

[35] Cliff Bertram, Asher Gibson and Adriana Nugent (eds), Closer to the Machine: Technical, Social, and Legal Aspects of AI (Office of the Victorian Information Commissioner, August 2019) 3.

[36] See Makoto Hong Cheng and Hui Choon Kuen, ‘Towards a Digital Government: Reflections on Automated Decision-Making and the Principles of Administrative Justice’ (2019) 31(2) Singapore Academy of Law Journal 875.

[37] See Carol Harlow and Richard Rawlings, ‘Proceduralism and Automation: Challenges to the Values of Administrative Law’ in Elizabeth Fisher, Jeff King and Alison L Young (eds), The Foundations and Future of Public Law (Oxford University Press, 2020) 275.

[38] Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3(1) Big Data and Society: 1–12.

[39] Robert van den Hoven van Genderen, ‘Privacy and Data Protection in the Age of Pervasive Technologies in AI and Robotics’ (2017) 3(3) European Data Protection Law Review 338; ‘Algorithms and Human Rights’ (n 25) 12–16.

[40] Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown Publishing Group, 2016).

[41] Richard Glenn, Commonwealth Ombudsman, Centrelink’s Automated Debt Raising and Recovery System: A Report About the Department of Human Services’ Online Compliance Intervention System for Debt Raising and Recovery (Investigation Report No 2/2017, April 2017).

[42] Amy Remeikis, ‘New Rules for Job Seekers Prompt Warning about Another “Robodebt Debacle”’, The Guardian (online, 20 March 2019) <https://www.theguardian.com/australia-news/2019/mar/20/new-rules-for-job-seekers-prompt-warning-about-another-robo-debt-debacle>.

[43] The ‘golden rule’ is discussed in Jake Goldenfein, ‘Algorithmic Transparency and Decision-Making Accountability: Thoughts for Buying Machine Learning Algorithms’ in Cliff Bertram, Asher Gibson and Adriana Nugent (eds), Closer to the Machine: Technical, Social, and Legal Aspects of AI (Office of the Victorian Information Commissioner, August 2019) 41, 48.

[44] Australian Government, Department of Home Affairs, ‘The Administration of the Immigration and Citizenship Program’ (Background Paper, 4th ed, February 2020) 28 [173] states: ‘Importantly, no adverse visa decision is ever made by a machine. ... The officer might be prompted and assisted by the latest technology and automated analytical tools, but it is a person who will be the decision-maker’.

[45] Anna Huggins, ‘Automated Processes and Administrative Law: The Case of Pintarich’, AUSPUBLAW (Blog Post, 14 November 2018) <https://auspublaw.org/2018/11/the-case-of-pintarich/>. See also Digital Transformation Agency, Australian Government, ‘Vision 2025: We Will Deliver World-Leading Digital Services for the Benefit of All Australians’ (Strategy Paper, 2018).

[46] New South Wales Government, ‘Digital NSW: Designing Our Digital Future’ (Strategy Paper, 2019) 3.

[47] Information privacy laws in other states and territories include the Information Privacy Act 2014 (ACT); Information Act 2002 (NT); Information Privacy Act 2009 (Qld); Personal Information Protection Act 2004 (Tas).

[48] Discussed below at Part IV.

[49] Productivity Commission, Australian Government, Data Availability and Use (Inquiry Report No 82, 31 March 2017).

[50] Data Sharing (Government Sector) Act 2015 (NSW); Public Sector (Data Sharing) Act 2016 (SA); Victorian Data Sharing Act 2017 (Vic); Government of Western Australia, ‘Privacy and Responsible Information Sharing for the Western Australian Public Sector’ (Discussion Paper, 2 August 2019).

[51] ‘New Legislation’, Australian Government, Office of the National Data Commissioner (Web Page, 2019) <www.datacommissioner.gov.au/data-sharing/legislation>.

[52] Big Innovation Centre, Ethics and Legal in AI: Decision Making and Moral Issues (Theme Report, 27 March 2017) 6.

[53] ‘The “Black Box” Problem of AI’, Data Driven Investor (Web Page, 9 May 2018) <https://medium.com/datadriveninvestor/the-black-box-problem-of-ai-33d261805435>.

[54] Marion Oswald, ‘Algorithm-Assisted Decision-Making in the Public Sector: Framing the Issues Using Administrative Law Rules Governing Discretionary Power’ (2018) 376(2128) Philosophical Transactions of the Royal Society A 20170359: 1–20, 5.

[55] ADJR Act s 13(1); AAT Act s 28(1).

[56] Administrative Review Council, ‘Practical Guidelines for Preparing Statements of Reasons’ (Guidelines, November 2002).

[57] Matthew Groves, ‘Reviewing Reasons for Administrative Decisions: Wingfoot Australia Partners Pty Ltd v Kocak[2013] SydLawRw 25; (2013) 35(3) Sydney Law Review 627, 630 (‘Reviewing Reasons for Administrative Decisions’), citing Hill v Repatriation Commission [2004] FCA 832; (2004) 207 ALR 470; Preston v Secretary, Department of Family and Community Services [2004] FCA 300; (2004) 39 AAR 177; Civil Aviation Safety Authority v Central Aviation Pty Ltd [2009] FCA 49; (2009) 253 ALR 263.

[58] Groves, ‘Reviewing Reasons for Administrative Decisions’ (n 57) 630, citing Garrett v Nicholson [1999] WASCA 32; (1999) 21 WAR 226, 248 [73] (Owen J).

[59] [2006] NSWCA 284; (2006) 67 NSWLR 372, 397 [121] (Basten JA).

[60] FOI Act s 11(1) provides an enforceable right of access under the Act to a ‘document’ of an agency unless the document is exempt.

[61] The publication requirements are set out in ibid pt II.

[62] Ibid s 8(2)(j).

[63] Australian Government, Office of the Australian Information Commissioner, ‘FOI Guidelines: Guidelines Issued by the Australian Information Commissioner under s 93A of the Freedom of Information Act 1982’ (Guidelines, June 2020) 16 [13.87].

[64] See discussion in Wojciech Samek, Thomas Wiegand and Klaus-Robert Müller, ‘Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models’ [2017] arXiv 1708.08296: 1–8.

[65] [2011] AATA 365 (‘Re Schouten’).

[66] Law Council of Australia, Submission to Australian Human Rights Commission, Human Rights and Technology (21 April 2020) 16 [55].

[67] Re Schouten [2011] AATA 365, [39].

[68] Heike Felzmann et al, ‘Transparency You Can Trust: Transparency Requirements for Artificial Intelligence between Legal Norms and Contextual Concerns’ (2019) 6(1) Big Data and Society: 1–14; Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16(1) Duke Law and Technology Review 18, 38–43.

[69] See, eg, Edwards and Veale (n 68); Ashley Deeks, ‘The Judicial Demand for Explainable Artificial Intelligence’ (2019) 119(7) Columbia Law Review 1829; and the literature referred to in Royal Society, ‘Explainable AI: The Basics’ (Policy Briefing, November 2019).

[70] FOI Act s 47(1)(a).

[71] Ibid s 47(1)(b).

[72] [2015] AATA 956.

[73] Ibid [31] (Deputy President Melick and Member Taglieri).

[74] Ibid [32]–[33] (Deputy President Melick and Member Taglieri).

[75] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1, art 15(1)(h) (‘GDPR’).

[76] The Federal Court is able to exercise this same jurisdiction under the Judiciary Act 1903 (Cth) s 39B.

[77] ADJR Act ss 3, 5.

[78] Griffith University v Tang [2005] HCA 7; (2005) 221 CLR 99, 122 [61] (Gummow, Callinan and Heydon JJ) (‘Tang’); Australian Broadcasting Tribunal v Bond [1990] HCA 33; (1990) 170 CLR 321, 337 (Mason CJ, Brennan J agreeing at 365, Deane J agreeing at 369) (‘Bond’).

[79] Tang [2005] HCA 7; (2005) 221 CLR 99, 122 [61] (Gummow, Callinan and Heydon JJ); Bond [1990] HCA 33; (1990) 170 CLR 321, 337 (Mason CJ, Brennan J agreeing at 365, Deane J agreeing at 369).

[80] Pintarich v Deputy Commissioner of Taxation [2018] FCAFC 79; (2018) 262 FCR 41 (‘Pintarich’).

[81] Ibid 43 [3] (Kerr J).

[82] Ibid 64 [129], quoting Pintarich v Deputy Commissioner of Taxation [2017] FCA 944, [56] (Tracey J).

[83] Pintarich [2018] FCAFC 79; (2018) 262 FCR 41, 67 [140].

[84] For further discussion, see Ng and O’Sullivan (n 7).

[85] Pintarich [2018] FCAFC 79; (2018) 262 FCR 41, 48–9 [46]–[47].

[86] Ibid 49 [49].

[87] Pintarich v Deputy Commissioner of Taxation [2018] HCASL 322.

[88] According to R v Murray [1916] HCA 58; (1916) 22 CLR 437, 452 (Isaacs J), an ‘officer of the Commonwealth’ has to have an office of some conceivable tenure, be directly appointed by the Commonwealth, accept office and salary from the Commonwealth, and be removable by the Commonwealth. See also Broken Hill Pty Co Ltd v National Companies & Securities Commission (1986) 61 ALJR 124; Businessworld Computers Pty Ltd v Australian Telecommunications Commission [1988] FCA 206; (1988) 82 ALR 499; Post Office Agents Association Ltd v Australian Postal Commission (1988) 84 ALR 563; McGowan v Migration Agents Registration Authority [2003] FCA 482; (2003) 129 FCR 118, 126 [26] (Branson J); Australasian College of Cosmetic Surgery Ltd v Australian Medical Council Ltd [2015] FCA 468; (2015) 232 FCR 225, 234 [43] (Katzmann J); Mark Aronson, Matthew Groves and Greg Weeks, Judicial Review of Administrative Action and Government Liability (Thomson Reuters, 6th ed, 2017) 49–51.

[89] Ng and O’Sullivan (n 7).

[90] As Justice Perry and Smith have pointed out: ‘[i]f automated systems were used in cases of this kind [requiring discretion or evaluative judgment], not only would there be a constructive failure to exercise the discretion; they apply predetermined outcomes which may be characterised as pre-judgment or bias’: Perry and Smith (n 7).

[91] Migration Act 1958 (Cth) s 495A; Veterans’ Entitlements Act 1986 (Cth) s 4B; Aged Care Act 1997 (Cth) s 23B-4; Social Security (Administration) Act 1999 (Cth) s 6A; Business Names Registration Act 2011 (Cth) s 66; Road Vehicle Standards Act 2018 (Cth) s 62.

[92] We note that there are exceptions to this. For instance, s 66(2) of the Business Names Registration Act 2011 (Cth) provides: ‘A decision made by the operation of a computer program under an arrangement made under subsection (1) is taken to be a decision made by ASIC’. There is no reference to an individual, or to the individual having control.

[93] See, eg, Graham v Minister for Immigration and Border Protection [2017] HCA 33; (2017) 263 CLR 1.

[94] See Yee-Fui Ng, ‘In the Moonlight? The Control and Accountability of Government Corporations in Australia’ [2019] MelbULawRw 16; (2019) 43(1) Melbourne University Law Review 303, 327–30.

[95] See Terence Daintith and Yee-Fui Ng, ‘Legal Form and Function in the Public Sector: The Government-Owned Company in the United Kingdom and Australia’ (2020) 136 (April) Law Quarterly Review 292.

[96] Parliamentary Joint Committee on Human Rights, Parliament of Australia, Human Rights Scrutiny Report (Report No 11 of 2018, 16 October 2018) 80–1.

[97] The Department of Home Affairs has terminated the Request for Tender process for its proposed Global Digital Platform: see Alan Tudge, ‘New Approach to Technology Capability Acquisition and Delivery’ (Media Release, Department of Home Affairs, 20 March 2020). For more information on the proposed global digital platform, see Department of Home Affairs, ‘Immigration Reform’, Immigration and Citizenship (Web Page, 23 March 2020) <https://immi.homeaffairs.gov.au/what-we-do/immigration-reform/about-the-reform>.

[98] Dominique Hogan-Doran, ‘Computer Says “No”: Automation, Algorithms and Artificial Intelligence in Government Decision-Making’ (2017) 13(3) Judicial Review 345, 355.

[99] Order of Davies J in Amato v Commonwealth (Federal Court of Australia, VID611/2019, 27 November 2019).

[100] Australian Government Information Management Office, Department of Finance and Administration (Cth), Automated Assistance in Administrative Decision-Making: Better Practice Guide (Report, February 2007) 14 (‘Better Practice Guide’).

[101] Craig v South Australia [1995] HCA 58; (1995) 184 CLR 163, 179–80 (the Court); Minister for Immigration and Multicultural Affairs v Yusuf [2001] HCA 30; (2001) 206 CLR 323, 351 [82] (McHugh, Gummow, and Hayne JJ). See also Hogan-Doran (n 98).

[102] David Kaye, Special Rapporteur, Promotion and Protection of the Right to Freedom of Opinion and Expression, UN Doc A/73/348 (29 August 2018).

[103] See, eg, Karen Yeung, Expert Committee on Human Rights Dimensions of Automated Data Processing and Different Forms of Artificial Intelligence (MSI-AUT), ‘Responsibility and AI: A Study of the Implications of Advanced Digital Technologies (Including AI Systems) for the Concept of Responsibility within a Human Rights Framework’ (Study No DGI(2019)05, Council of Europe, September 2019).

[104] Expert Committee on Human Rights Dimensions of Automated Data Processing and Different Forms of Artificial Intelligence (MSI-AUT), ‘Addressing the Impacts of Algorithms on Human Rights: Draft Recommendation of the Committee of Ministers to Member States on the Human Rights Impacts of Algorithmic Systems’ (Recommendation, 12 November 2018).

[105] European Commission, ‘White Paper on Artificial Intelligence: A European Approach to Excellence and Trust’ (White Paper, 19 February 2020) 8.

[106] Ibid 9.

[107] Rechtbank Den Haag [Hague District Court], C/09/550982/HA ZA 18-388 (5 February 2020).

[108] The lead plaintiff was the Nederlands Juristen Comité voor de Mensenrechten (the Netherlands Committee of Jurists for Human Rights – the Dutch Section of the International Commission of Jurists (ICJ)). The other plaintiffs comprised: Platform Bescherming Burgerrechten (the Dutch Platform for the Protection of Civil Rights); Privacy First (Amsterdam); Koepel van DBC-Vrije Praktijken (Umbrella Organisation of DBC-Free Practices in Amsterdam) – an organisation representing the interests of patients of mental health professionals; Landelijke Cliëntenraad (the National Client Participation Council) and two author-activists (in their personal capacity). The District Court of The Hague also allowed two interveners to make amicus curiae submissions: the Netherlands Federation of Trade Unions (‘FNV’) and the UN Special Rapporteur on Extreme Poverty and Human Rights (Professor Philip Alston). Note that the Court found that the claims of three of the plaintiffs were inadmissible: see ibid [6.14]–[6.15].

[109] Ministerie van Sociale Zaken en Werkgelegenheid, Staatsblad van het Koninkrijk der Nederlanden [Official Gazette of the Kingdom of the Netherlands], No 320, 11 September 2014 <https://zoek.officielebekendmakingen.nl/stb-2014-320.html> (in Dutch only). For further background information, see Valery Gantchev, ‘Data Protection in the Age of Welfare Conditionality: Respect for Basic Rights or a Race to the Bottom?’ (2019) 21(1) European Journal of Social Security 3, 16–19.

[110] See Philip Alston, ‘Brief by the United Nations Special Rapporteur on Extreme Poverty and Human Rights as Amicus Curiae in the case of NJCM cs/ De Staat der Nederlanden (SyRI) before the District Court of The Hague (Case Number: C/09/550982/ HA ZA 18/388)’, Submission in NJCM v Netherlands, C/09/550982/HA ZA 18-388, 26 September 2019, 2–3 [8] <www.ohchr.org/Documents/Issues/Poverty/Amicusfinalversionsigned.pdf>.

[111] A full list of the data collected is set out in the judgment: see NJCM v Netherlands, Rechtbank Den Haag [Hague District Court], C/09/550982/HA ZA 18–388 (5 February 2020) [4.17].

[112] See NJCM v Netherlands, Rechtbank Den Haag [Hague District Court], C/09/550982/HA ZA 18–388 (5 February 2020) [4.29]. For instance, SyRI would detect a discrepancy if a person received a housing allowance but was not registered at that particular address: see discussion in judgment at [6.88]. See also Ilja Braun, ‘High-Risk Citizens’, Algorithm Watch (Web Page, 4 July 2018) <https://algorithmwatch.org/en/story/high-risk-citizens/>.

[113] NJCM v Netherlands, Rechtbank Den Haag [Hague District Court], C/09/550982/HA ZA 18–388 (5 February 2020) [3.2].

[114] Ibid [4.10].

[115] Ibid [6.45].

[116] Ibid [6.48].

[117] Ibid [6.49].

[118] Ibid [6.50].

[119] Ibid [6.51].

[120] Article 22(1) of the GDPR provides that ‘[t]he data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’ (emphasis added). See discussion of the ‘legal effect’ issue in ibid [6.57].

[121] Pintarich [2018] FCAFC 79; (2018) 262 FCR 41. See Ng and O’Sullivan (n 7).

[122] Australian Human Rights Commission, ‘Human Rights and Technology’ (Discussion Paper, December 2019) 62.

[123] NJCM v Netherlands, Rechtbank Den Haag [Hague District Court], C/09/550982/HA ZA 18–388 (5 February 2020) [6.59].

[124] The Court said, at ibid, that it derived its conclusion from its reading of article 22 of the GDPR but also

from the guidelines of the Article 29 Data Protection Working Party’ and from the fact that a ‘risk report can be stored for two years and can be used by the participants in the SyRI project in question for a maximum of 20 months. In addition, the Public Prosecution Service and the police may be notified of the risk report upon request.

[125] Ibid [6.60].

[126] Ibid.

[127] See ibid [6.2].

[128] Ibid [6.75].

[129] Ibid [6.83].

[130] Ibid [6.1].

[131] Ibid [6.4].

[132] Ibid [6.5].

[133] Ibid [6.7].

[134] Ibid.

[135] See discussion of the Charter of Fundamental Rights of the European Union (‘EU Charter on Fundamental Freedoms’) and GDPR in ibid [6.27]–[6.41].

[136] The Court explained this, at ibid [6.31], as follows:

The principle of transparency requires easily accessible and easy to understand information, communication and clear and plain language, and the provision of information to the data subject about the identity of the controller and the purposes of the data processing. Aside from this, under this principle, further information must actively be provided to ensure a sound and transparent data processing, and natural persons must be made aware of the risks, rules, safeguards and rights in relation to the processing of personal data and also of how they may exercise their rights with respect to the processing.

[137] The Court explained this as follows: ‘The principle of purpose limitation means that personal data must be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes’: ibid [6.32].

[138] The Court explained this, at ibid [6.33], as follows:

The principle of data minimisation requires personal data to be adequate, relevant and limited to what is necessary in relation to the purposes for which it is processed. As also follows from the principle of storage limitation laid down in the GDPR, not more personal data may be kept for longer than is necessary for the purpose for which the personal data are processed.

[139] Ibid [6.87].

[140] Ibid [6.7].

[141] Rijksoverheid [Government of the Netherlands], ‘Staat Niet In Hoger Beroep Tegen Vonnis Rechter Inzake SyRI’ [‘State Will Not Appeal against the Court Judgment on SyRI’] (News Item, 23 April 2020) <https://www.rijksoverheid.nl/actueel/nieuws/2020/04/23/staat-niet-in-hoger-beroep-tegen-vonnis-rechter-inzake-syri> (in Dutch only).

[142] For a discussion of the context of this choice, see Gantchev (n 109) 17.

[143] Braun (n 112).

[144] Ibid.

[145] Alston (n 110).

[146] Ibid 8–9 [26].

[147] See discussion in Ashlynne McGhee, ‘Centrelink Debt Recovery Program: Department Rejects FOI Requests Relating to Plagued Scheme’, ABC News (online, 10 February 2017) <www.abc.net.au/news/2017-02-10/centrelink-debt-recovery-program-foi-requests-rejected/8258564>.

[148] See Carney, ‘Vulnerability: False Hope for Vulnerable Social Security Clients?’ (n 1) 810.

[149] See discussion in Senate Community Affairs References Committee, Parliament of Australia, Design, Scope, Cost-Benefit Analysis, Contracts Awarded and Implementation Associated with the Better Management of the Social Welfare System Initiative (Report, June 2017) 19 [2.31], 71 [4.1], 79–80 [4.46]–[4.53] (‘Senate Social Welfare System Initiative Report’).

[150] See ibid 1 [1.1]–[1.2], 33–4 [2.85]–[2.93].

[151] Terry Carney, ‘The New Digital Future for Welfare: Debts without Legal Proofs or Moral Authority?’ [2018] (March) University of New South Wales Law Journal Forum 1, 3.

[152] Senate Social Welfare System Initiative Report (n 149) 2; NITV Staff Writer, ‘Concerns as “Vulnerable” Welfare Recipients Targeted by Centrelink Robodebt’, National Indigenous Television (online, 15 August 2018) <www.sbs.com.au/nitv/article/2018/08/15/concerns-vulnerable-welfare-recipients-targeted-centrelink-robodebt>. See also Australian Government, Australian Institute of Health and Welfare, Australia’s Welfare 2017 (Report, 2017) 286.

[153] Senate Social Welfare System Initiative Report (n 149) 94–6, 106; Glenn (n 41) 21–2.

[154] See discussion in Carney, ‘Vulnerability: False Hope for Vulnerable Social Security Clients?’ (n 1); Ng and O’Sullivan (n 7).

[155] Order of Davies J in Amato v Commonwealth (Federal Court of Australia, VID611/2019, 27 November 2019). The Court found, at [1]–[2], amongst other things, that:

The demand for payment of an alleged debt first made by the Respondent to the Applicant on 2 March 2018 (the alleged debt) (emphasis omitted) was not validly made because the information before the decision-maker acting on behalf of the Respondent was not capable of satisfying the decision-maker that: (a) a debt was owed by the Applicant to the Respondent, within the scope of s 1222A(a) and s 1223(1) of the Social Security Act 1991 (Cth) in the amount of the alleged debt; or that (b) any of the necessary preconditions for the addition of a 10% penalty to such a debt, as prescribed by s 1228B(1)(c) of the Social Security Act 1991 (Cth) were present.

In consequence of the declaration in paragraph 1, the notice purportedly issued on 2 March 2018 was not a validly issued notice for the purpose of s 1229 of the Social Security Act 1991 (Cth) because the decision-maker could not have been satisfied that a debt was owed in the amount of the alleged debt.

[156] On irrationality, see Minister for Immigration and Citizenship v SZMDS (2010) 240 CLR 611, 625 [40] (Gummow ACJ and Kiefel J), 647–50 [130]–[132], [135] (Crennan and Bell JJ); Tisdall v Webber [2011] FCAFC 76; (2011) 193 FCR 260, 296 [126] (Buchanan J, Tracey J agreeing at 286 [93]), cited with approval in P v Child Support Registrar [2014] FCAFC 98; (2014) 225 FCR 378, 392–3 [53]–[54] (The Court); Rawson Finances Pty Ltd v Commissioner of Taxation [2013] FCAFC 26; (2013) 296 ALR 307, 335–6 [84]–[85] (Jagot J, Nicholas J agreeing at 351 [142]).

[157] Katherine Prygodicz, ‘Originating Application Starting a Representative Proceeding under Part IVA Federal Court of Australia Act 1976’, Submission in Prygodicz v Commonwealth, VID1252/2019, 20 November 2019 <https://gordonlegal.com.au/media/1135/191119-prygodicz-ors-v-commonwealth-of-australia-originating-application.pdf> (‘Prygodicz Originating Application’); Katherine Prygodicz, ‘Statement of Claim’, Submission in Prygodicz v Commonwealth, VID1252/2019, 20 November 2019 (‘Prygodicz Statement of Claim’) <https://gordonlegal.com.au/media/1136/191119-prygodicz-ors-v-commonwealth-of-australia-statement-of-claim.pdf>.

[158] See ‘Prygodicz Statement of Claim’ (n 157) 10 [46]:

[T]he calculations or other outputs of the Robodebt System did not establish, and were not capable of establishing, for the purposes of section 1223(1) of the SSA, that a person who obtained the benefit of an amount paid by way of Social Security Payment was not entitled to obtain that benefit such that the amount of the Social Security Payment is a debt due to the Commonwealth.

[159] Ibid 12–13 [50]–[62].

[160] Ibid 14–18 [66]–[79].

[161] ‘Prygodicz Originating Application’ (n 157).

[162] Evans v United Kingdom [2007] I Eur Court HR 353, 381 [75].

[163] Bart van der Sloot, ‘Privacy as Personality Right: Why the ECtHR’s Focus on Ulterior Interests Might Prove Indispensable in the Age of “Big Data”’ (2015) 31(80) Utrecht Journal of International and European Law 25, 25. In referring to a ‘personality right’, van der Sloot is referring to the type of interests protected by art 2, para 1 of the German Constitution, which specifies ‘[e]veryone has the right to the free development of his personality insofar as he does not violate the rights of others or offend against the constitutional order or the moral code’: Grundgesetz für die Bundesrepublik Deutschland [Basic Law for the Federal Republic of Germany].

[164] R (Greenfield) v Secretary of State for the Home Department [2005] UKHL 14; [2005] 1 WLR 673, 684 [19] (Lord Bingham).

[165] Donal Nolan, ‘Negligence and Human Rights Law: The Case for Separate Development’ (2013) 76(2) Modern Law Review 286, 309.

[166] Ibid 308. Nolan gives the following example:

The award of £10,000 by the Court of Appeal in the article 2 case of Van Colle for the fear and distress suffered by the deceased in the period leading up to his death can for example be contrasted with the refusal of the House of Lords in the negligence case of Hicks v Chief Constable of the South Yorkshire Police to compensate the estates of two sisters killed in the Hillsborough disaster for the fear and pain they suffered before they died (citations omitted).

[167] See, eg, Plaintiff S99/2016 v Minister for Immigration and Border Protection [2016] FCA 483; (2016) 243 FCR 17: a case involving a female refugee applicant held on Nauru who wished to be transferred to Australia to terminate her pregnancy. Bromberg J held that the Minister for Immigration owed the applicant a duty to exercise reasonable care in discharging the responsibility he had assumed to procure for her a safe and lawful abortion and, unusually, granted an injunction restraining the Minister from failing to discharge that duty.

[168] The complexities of demonstrating government liability for negligence are discussed in Mark Aronson, ‘Government Liability in Negligence’ [2008] MelbULawRw 2; (2008) 32(1) Melbourne University Law Review 44; Justine Bell-James and Kit Barker, ‘Public Authority Liability for Negligence in the Post-Ipp Era: Sceptical Reflections on the “Policy Defence”’ [2016] MelbULawRw 8; (2016) 40(1) Melbourne University Law Review 1.

[169] Caltex Refineries (Qld) Pty Ltd v Stavar [2009] NSWCA 258; (2009) 75 NSWLR 649.

[170] See Sullivan v Moody (2001) 207 CLR 562; Hunter and New England Local Health District v McKenna [2014] HCA 44; (2014) 253 CLR 270.

[171] We note here that the Australian Human Rights Commission’s Discussion Paper on Human Rights and Technology has proposed a regulatory framework comprising a National Strategy on New and Emerging Technologies, which, amongst other things, promotes effective regulation of technologies (Proposal 1), and a new AI Safety Commissioner (Proposal 19): see ‘Human Rights and Technology’ (n 122) 189–92.

[172] National Human Rights Consultation Committee, National Human Rights Consultation Report (Report, September 2009) 183. This proposal is discussed, and criticised, in Groves, ‘Administrative Justice in Australian Administrative Law’ (n 18) 22–3.

[173] Australian Competition and Consumer Commission, Digital Platforms Inquiry (Final Report, June 2019).

[174] GDPR [2016] OJ L 119/1.

[175] Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data, opened for signature 28 January 1981, ETS No 108 (entered into force 1 October 1985), as amended by Protocol Amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, opened for signature 10 October 2018, CETS No 223 (not yet in force).

[176] Privacy Commissioner v Telstra Corporation Ltd [2017] FCAFC 4; (2017) 249 FCR 24.

[177] See Normann Witzleb and Julian Wagner, ‘When Is Personal Data “About” or “Relating to” an Individual? A Comparison of Australian, Canadian, and EU Data Protection and Privacy Laws’ (2018) 4 Canadian Journal of Comparative and Contemporary Law 293.

[178] Australian Government, Regulating in the Digital Age: Government Response and Implementation Roadmap for the Digital Platforms Inquiry (Report, 2019) 6, 17.

[179] Automated Assistance in Administrative Decision-Making (n 13) 32 [4.5.2]. See also ‘Practical Guidelines for Preparing Statements of Reasons’ (n 56).

[180] Automated Assistance in Administrative Decision-Making (n 13) 32 [4.5.1].

[181] ‘Human Rights and Technology’ (n 122) 85.

[182] Citron (n 8) 1284.

[183] Deven R Desai and Joshua A Kroll, ‘Trust but Verify: A Guide to Algorithms and the Law’ (2017) 31(1) Harvard Journal of Law and Technology 1, 43–4. For the Australian guidelines, see Better Practice Guide (n 100) 45–9.

[184] ‘Human Rights and Technology’ (n 122) 190 Proposal 8.

