
Computers and Law: Journal for the Australian and New Zealand Societies for Computers and the Law



Mendoza, Bernadette; Szollosi, Miklos; Leiman, Tania --- "Automated decision making and Australian discrimination law" [2021] ANZCompuLawJl 4; (2021) 93 Computers & Law 10


Automated decision making and Australian discrimination law

Bernadette Mendoza [1], Miklos Szollosi [2], and Tania Leiman [3]

23 November 2020

Automated decision making, data driven inferencing, and bias

Automated Decision Making (ADM) or Data Driven Inferencing (DDI) may include traditional rule-based systems, algorithms or ‘more specialised systems which use automated tools to predict and deliberate, including through the use of machine learning.’[4] Training data sets guide automated systems as they ‘learn’ how to apply the data they analyse in coming to a decision.[5] But training data ‘can be susceptible to subconscious cultural biases’,[6] especially if developers and designers do not intentionally incorporate diverse perspectives.[7] Large scale Artificial Intelligence (AI) systems are ‘developed almost exclusively in a handful of technology companies and a small set of elite university laboratories, spaces that in the West tend to be disproportionately white, affluent, technically oriented, and male.’[8] This risks perpetuating biases and discriminatory outcomes,[9] raising broader questions about transparency, accountability and systemic disadvantage.

Although Article 22.1 of the EU General Data Protection Regulation (GDPR) gives data subjects ‘the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her,’[10] no similar protection yet exists in Australian anti-discrimination legislation. Existing Commonwealth anti-discrimination laws prohibit discrimination on the basis of age, disability, race and sex in a variety of contexts. Where multiple factors contribute to a discriminatory event, the effects of each separate factor compound to form a more significant discriminatory effect, known as intersectionality.[11] Algorithms using multiple factors thus risk significantly magnifying existing unlawful discrimination.[12] And, as noted in a recent paper by Gerards and Borgesius[13]:

‘While current non-discrimination law offers people some protection, AI decision-making presents the law with several challenges. For instance, AI can generate new categories of people based on seemingly innocuous characteristics, such as web browser preference or apartment number, or more complicated categories combining many data points. Such new types of differentiation could evade non-discrimination law, as browser type and house number are not protected characteristics, but such differentiation could still be unfair, for instance if it reinforces social inequality.’

Accountability, transparency and procedural fairness

If decision-makers are to be held accountable for discriminatory decisions and actions, decision-making must be ‘open to inspection and challenge’,[14] with capacity to identify both the actual decision-maker and the powers or framework within which that decision-maker is empowered to operate. Tools used to inform decision-making and assess impacts on subjects of decisions must be capable of evaluation.[15] The ability to provide reasons for any decision is essential for transparency.[16] Procedural fairness requires sufficient notice before decisions are made, an opportunity to make submissions, and ‘credible, reliable and significant’ information being considered to make the decision.[17] ADM/DDI parameters or formulas, however, may be commercial-in-confidence, known only to software designers, behind proprietary paywalls, or with ‘some inputs commercial secrets’[18] – the ‘black box effect’.[19] Even software designers may not be able to identify the machine learning processes leading to ultimate outcomes, or describe them accurately or in sufficient detail to purchasers or end users.[20]

ADM/DDI thus makes it more difficult to determine who (or what) has made a decision or taken action, the nature of that decision or action, what factors have been taken into account or disregarded and why, and the weight given to those factors. This increases the difficulty of identifying unfair[21] or discriminatory decisions or actions, locating the basis of that discrimination to determine whether it is unlawful, or succeeding in complaints. It also makes it difficult to demonstrate lack of procedural fairness – i.e. whether decisions have been made without proper jurisdiction, have failed to comply with a procedural requirement, or (in the context of administrative decisions by public authorities) were made for an improper purpose (i.e. not in the spirit of the legislation under which power was purported to have been conferred).[22]

Anti-discrimination legislation

Commonwealth legislation prohibits discrimination on the basis of age,[23] disability,[24] race[25] and sex.[26] Discrimination occurs when an individual or group of individuals is treated less favourably than others due to their background or specific traits. Discrimination can be direct or indirect. Direct discrimination relates to ‘differential treatment’.[27] It occurs when one person treats an individual or group of individuals less favourably than another individual or group of individuals on the basis of protected characteristics such as background, race, status, gender, age, disability or sex, or characteristics associated with them.[28] Indirect discrimination occurs when criteria are defined that are not intended to directly affect a particular race, age, sex, etc., but the selection of those criteria still affects people who fall into that category in a disproportionate manner.[29]
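
To make the concept of indirect discrimination concrete in an ADM/DDI context, the short Python sketch below is purely illustrative, with invented applicants and an invented, facially neutral approval criterion (it does not reproduce any system discussed in this article). It shows how a criterion that never mentions a protected attribute can nonetheless approve one group at a markedly lower rate – the disproportionate effect with which indirect discrimination is concerned.

applicants = [
    # (group, weekly_income) -- all values invented for illustration
    ("group_a", 1500), ("group_a", 1300), ("group_a", 1250), ("group_a", 900),
    ("group_b", 1100), ("group_b", 950), ("group_b", 1250), ("group_b", 800),
]

def approve(weekly_income):
    # A facially neutral criterion: it never mentions any protected attribute.
    return weekly_income >= 1200

def approval_rate(group):
    incomes = [income for g, income in applicants if g == group]
    return sum(approve(i) for i in incomes) / len(incomes)

rate_a = approval_rate("group_a")  # 0.75 on this invented data
rate_b = approval_rate("group_b")  # 0.25 on this invented data
print(f"group_a approval rate: {rate_a:.2f}")
print(f"group_b approval rate: {rate_b:.2f}")
print(f"ratio of rates (b/a): {rate_b / rate_a:.2f}")

Whether such a disparity would amount to unlawful indirect discrimination would of course depend on the reasonableness of the criterion in the particular statutory context; the sketch only shows how the disproportionate effect itself can be measured.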

‘Person’

Legislative prohibitions against discrimination presume that any discriminator is a ‘person’. ‘Person’ can include ‘a body politic or corporate as well as an individual’.[30] How this applies where decisions are made or actions carried out by algorithms using machine learning is not clear. Perhaps existing administrative law principles allowing delegation to an authorised agent[31] might be extended to cover delegation to an ADM/DDI system,[32] or alternatively an approach similar to that under consideration for driverless vehicles might be considered – with those seeking to use ADM/DDI systems required to nominate a legal entity responsible in advance.[33]

‘Because of’

Direct discrimination occurs if the less favourable treatment is ‘because of’ age;[34] ‘because of’[35] or ‘on the ground of a disability’;[36] ‘based on race, colour, descent or national or ethnic origin’;[37] or ‘by reason of sex or sexual characteristics’.[38] Factors such as postcode,[39] creditworthiness,[40] arrest records,[41] country of origin,[42] past non-attendance at medical appointments,[43] and car usage[44] may act as proxies for prohibited grounds of discrimination – either intentionally or unintentionally. ADM/DDI makes it more difficult for any aggrieved person to establish the necessary causal link. Even complete transparency may not identify the intermediate steps and processes used in machine learning to lead to a final outcome, or show how the outcome was affected by considerations of age, disability, race or sex.
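
The proxy problem described above can be illustrated with another minimal, hypothetical Python sketch (all data invented; not any real lender or insurer). The protected attribute is withheld from the decision rule entirely, yet a rule framed only in terms of postcode reproduces the same adverse outcomes for one group – which is precisely why the ‘because of’ causal link can be so difficult for an aggrieved person to establish.

records = [
    # (protected_group, postcode) -- invented data in which the two are highly correlated
    ("group_a", "5000"), ("group_a", "5000"), ("group_a", "5001"), ("group_a", "5000"),
    ("group_b", "5010"), ("group_b", "5010"), ("group_b", "5011"), ("group_b", "5010"),
]

def postcode_rule(postcode):
    # The protected attribute is never consulted, yet this rule tracks it closely
    # because postcode and group overlap almost completely in the invented data.
    return "declined" if postcode in {"5010", "5011"} else "approved"

for group in ("group_a", "group_b"):
    postcodes = [postcode for g, postcode in records if g == group]
    declined = sum(postcode_rule(p) == "declined" for p in postcodes)
    print(f"{group}: {declined}/{len(postcodes)} declined by the postcode-only rule")

No explicit reference to the protected attribute ever appears in the rule; the discriminatory effect travels through the correlated, facially innocuous feature.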

Actions required

Where ‘imposing or proposing to impose, a condition, requirement or practice’ is required to enliven any prohibition against discrimination, such as that against indirect age discrimination,[45] the question arises whether ‘imposing’ or ‘proposing’ includes the following or other similar actions:

• including, or not including, particular sets of data in training data sets;

• writing code in particular ways;

• structuring algorithmic decision-making processes in a particular order; and

• weighting various factors as part of other ADM/DDI processes.

‘2 or more reasons’

If:

a) ‘an act is done for 2 or more reasons; and

b) one of the reasons (whether or not it is the dominant or a substantial reason) is:

i) the age of a person; or

ii) a characteristic that appertains generally to persons of the age of a person; or

iii) a characteristic that is generally imputed to persons of the age of a person’

then it is taken to be done because of the age of the person and could potentially be discriminatory.[46] Could this mean that any weighting of age as a factor within ADM/DDI contravenes the Age Discrimination Act?
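
As a hedged illustration of that question (a minimal sketch with invented features and weights, not a description of any actual ADM/DDI product), the following Python fragment shows how age can operate as one of ‘2 or more reasons’ contributing to an automated score even where its weight is small and clearly not the dominant reason:

# Invented weights for a hypothetical scoring model; age is one of several inputs.
WEIGHTS = {"income": 0.6, "tenure_years": 0.3, "age": -0.1}

def score(applicant):
    # Weighted sum of (already scaled) feature values.
    return sum(WEIGHTS[feature] * value for feature, value in applicant.items())

applicant = {"income": 1.2, "tenure_years": 2.0, "age": 6.5}  # invented, scaled values

with_age = score(applicant)
without_age = score({k: v for k, v in applicant.items() if k != "age"})

print(f"score with age weighted in: {with_age:.2f}")
print(f"score with age removed:     {without_age:.2f}")
# The difference between the two scores is attributable solely to the age term --
# arguably a decision done "for 2 or more reasons", one of which is age.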

Algorithmic discrimination: examples

Research has raised concerns that predictive policing tools use ‘police-recorded data sets ... rife with systematic bias.’[47] A defendant who pled guilty to using a vehicle without the owner’s consent challenged the court’s reliance on the COMPAS tool to assess his likelihood of recidivism for bail and sentencing purposes,[48] but was unable to scrutinise how his risk score was produced because Northpointe refused to share its proprietary code.[49]

Visa’s Smarter Stand-in Processing (Smarter STIP), which approves or rejects a transaction on behalf of the issuer when online systems are down, is claimed to be based on ‘unique insights derived from the cardholder’s past purchasing behaviour, rather than solely on static rules applied across an entire card portfolio’, using a model with ‘multiple recurrent neural network layers with millions of parameters.’[50] Cardholders have no information about how the proprietary, commercial-in-confidence algorithm works or how it is trained, although rejection of payments may have serious consequences. A 2019 investigation of Goldman Sachs' credit card practices suggested the algorithm used to assess creditworthiness had discriminated on the basis of gender, if not also race and age.[51] Perceptions of indirect discrimination when a loan application is not approved, combined with a lack of transparency and a lack of understanding by bank employees of how the algorithmic assessment operated, can lead to litigation.[52]

Research has shown biometric recognition systems are inaccurate and may be biased,[53] yet they are widely used by public and private agencies for security, policing,[54] aviation and anti-terrorism activities.[55] Where algorithms are ‘embedded in the camera’s enclosed technical system’,[56] there is no means of assessing whether training data sets are biased.

Algorithms used to assess insurance risk have charged residents of postcodes where most residents belong to minority groups premiums on average 30% higher than those charged to residents of postcodes with greater numbers of Caucasian residents.[57]

Recruitment algorithms commonly use training data drawn from existing employees.[58] While this can ‘help human decision-makers avoid their own prejudices by adding consistency to the hiring process’,[59] it can entrench an existing lack of diversity,[60] or rely on online skills tests that are themselves discriminatory.[61]
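
A minimal, hypothetical sketch of how that entrenchment can occur (invented figures only; no claim is made about any actual recruitment product): where the training data consists of past hires heavily skewed towards one group, even the simplest possible ‘model’ – the base rate of each group among past hires – carries that skew into the scoring of new candidates.

# Invented hiring history, heavily skewed towards one group.
past_hires = ["group_a"] * 18 + ["group_b"] * 2

def learned_prior(group):
    # The simplest possible "model": the base rate of each group among past hires.
    return past_hires.count(group) / len(past_hires)

for group in ("group_a", "group_b"):
    print(f"{group}: prior learned from past hires = {learned_prior(group):.2f}")
# Candidates resembling the under-represented group start from a far lower prior,
# so the skew in the historical data is reproduced in future scoring.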

An algorithm used to refer patients to intervention health programmes in the US identified fewer than half of the Black patients with complex needs whom it should have identified; those patients were in turn less likely than white patients to be referred to programs to improve their health, and instead ended up on long waiting lists.[62]

Also in the US, ADM in electronic health record software draws on patient data including personal information (e.g. age, gender, ethnicity), clinical history and prior attendance rates (to name a few) to determine the probability that patients will not attend appointments.[63] Appointments for these patients are then double-booked to avoid predicted mis-allocation of resources. Data about prior attendance is grouped into sub-categories based on reasons for non-attendance, including disability (e.g. being unable to attend due to conditions such as obesity or decreased mobility) or lower socio-economic status (e.g. being unable to afford transport to the appointment, or having to rely on a short-notice opportunity to take up extra hours at work). When these patients finally can attend, their appointment has been double-booked, so they may not be seen or may not receive an equitable level of care. This again highlights the magnifying effect of intersectionality, with patients already disadvantaged by disability and/or socio-economic status further disadvantaged by having their opportunity to access medical appointments reduced. These patients are additionally likely to be unaware of their disadvantage within the system, exacerbating this inequality and its impact. If two patients attend for the same booking time, it is unclear which patient will be given priority and access to a health professional. If both patients cannot be accommodated, then the patient who is not seen suffers yet further disadvantage, while simultaneously generating data recording that another scheduled appointment was missed. This feedback loop can skew the dataset, entrenching a patient’s profile as a non-attender.
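
That feedback loop can be made concrete with an entirely hypothetical sketch (the scoring rule and double-booking threshold below are invented; the logic of the actual product is not public). Once a patient is double-booked and consequently not seen, the ‘missed’ appointment feeds back into their non-attendance score and makes future double-booking – and further missed appointments – more likely.

def no_show_probability(missed, attended):
    # Hypothetical scoring rule: share of past appointments recorded as missed.
    total = missed + attended
    return missed / total if total else 0.0

missed, attended = 3, 3  # invented starting history
for cycle in range(4):
    p = no_show_probability(missed, attended)
    double_booked = p > 0.4  # hypothetical double-booking threshold
    print(f"cycle {cycle}: p(no-show) = {p:.2f}, double-booked = {double_booked}")
    if double_booked:
        # The patient attends but cannot be seen, so the record shows another
        # missed appointment -- pushing the score higher next time.
        missed += 1
    else:
        attended += 1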

An algorithm used to guide decisions in the US health care system ‘uses health costs as a proxy for health needs’ with the result that ‘[l]ess money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients.’[64] Research published in October 2020[65] into ‘a widely used but controversial formula for estimating kidney function that by design assigns Black people healthier scores’ found that ‘[o]ne third of Black patients, more than 700 people, would have been placed into a more severe category of kidney disease if their kidney function had been estimated using the same formula as for white patients.’[66]
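
A schematic, hedged illustration of the cost-as-proxy problem (invented figures; not the actual algorithm studied by Obermeyer et al): where historical spending on one group is systematically lower at the same level of need, a model trained to predict cost will understate that group’s need.

patients = [
    # (group, true_need_score, historical_annual_cost) -- all figures invented
    ("group_a", 8, 9000), ("group_a", 5, 6000),
    ("group_b", 8, 5500), ("group_b", 5, 3500),  # same need, lower recorded spend
]

def need_estimated_from_cost(cost, dollars_per_need_point=1000):
    # A cost-trained model effectively maps predicted spend back onto "need".
    return cost / dollars_per_need_point

for group, true_need, cost in patients:
    estimate = need_estimated_from_cost(cost)
    print(f"{group}: true need {true_need}, cost-proxy estimate {estimate:.1f}")
# On these invented figures, group_b patients with the same true need are scored
# as needing substantially less care, because less was historically spent on them.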

Conclusion

Lack of transparency in decision making, and lack of information about the use of ADM/DDI, mean those discriminated against may be unaware such discrimination is even taking place. The above examples show the need for urgent action to minimise risks of discrimination in the use of ADM/DDI, and to ensure existing Australian anti-discrimination legislation is fit for purpose.


[1] BBus (HRM), Juris Doctor student, Flinders University.

[2] BBus, Juris Doctor student, Flinders University.

[3] LLB, GDLP, GCE(HE), Prof Cert (Innovation), Associate Professor and Dean of Law, Flinders University.

[4] Commonwealth Ombudsman, the Office of the Australian Information Commissioner and the Attorney-General’s Department, Automated Decision-making – Better Practice Guide (Guide document, 2019).

[5] Gustavo E. A. P. A. Batista et al., ‘A Study of the Behaviour of Several Methods for Balancing Machine Learning Training Data’ (2004) ACM SIGKDD Explorations Newsletter 1.

[6] Ibid, 589.

[7] Ibid.

[8] Human Rights Council, Racial Discrimination and Emerging Digital Technologies: A Human Rights Analysis – Report of the Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance, A/HRC/44/57 (18 June 2020); Australian Human Rights Commission, Human Rights and Technology Discussion Paper (December 2019) [6].

[9] Council of Europe, Discrimination, Artificial Intelligence and Algorithmic Decision-Making (Report, 2018) [11].

[10] James Farrar & Anton Ekker, ‘Uber Drivers Challenge Dismissal by Algorithm,’ Ekker (Web Page, 26 October 2020) https://ekker.legal/2020/10/26/uber-drivers-challenge-dismissal-by-algorithm/.

[11] Alexandra Grant, ‘Intersectional Discrimination in U Visa Certification Denials: An Irremediable Violation of Equal Protection?’ (2013) 3 Columbia Journal of Race and Law 253, (Intersectional Discrimination) 261.

[12] Ibid.

[13] Janneke Gerards and Frederik Zuiderveen Borgesius, ‘Protected Grounds and the System of Non-discrimination Law in the Context of Algorithmic Decision-making and Artificial Intelligence’ (2 November 2020) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3723873.

[14] Judith Bannister, Anna Olijnyk and Stephen McDonald, Government Accountability – Australian Administrative Law (Cambridge, 2nd ed., 2018) [17].

[15] Lyria Bennett Moses & Janet Chan, ‘Using Big Data for Legal and Law Enforcement Decisions: Testing the New Tools’ (2014) 37(2) UNSW Law Journal 655.

[16] Peter Cane and Leighton McDonald, Principles of Administrative Law (Oxford University Press, 2nd ed, 2012).

[17] Judith Bannister, Anna Olijnyk & Stephen McDonald, Government Accountability – Australian Administrative Law (Cambridge, 2nd ed., 2018) [392].

[18] Dominique Hogan-Doran, ‘Computer Says “No”: Automation, Algorithms and Artificial Intelligence in Government Decision-making,’ (2017) 13 The Judicial Review.

[19] Dallas Card, ‘The “Black Box” Metaphor in Machine Learning’, Towards Data Science (Web Page, 6 July 2017) https://towardsdatascience.com/the-black-box-metaphor-in-machine-learning-4e57a3a1d2b0.

[20] Lyria Bennett Moses and Janet Chan, ‘Using Big Data for Legal and Law Enforcement Decisions: Testing the New Tools’ (2014) 37(2) UNSW Law Journal 656.

[21] Junzhe Zhang and Elias Bareinboim, ‘Fairness in Decision Making – The Causal Explanation Formula’ [November 2017] National Science Foundation 4 https://par.nsf.gov/servlets/purl/10060701.

[22] Judith Bannister, Anna Olijnyk & Stephen McDonald, Government Accountability – Australian Administrative Law (Cambridge, 2nd ed., 2018).

[23] Age Discrimination Act 2004 (Cth) (ADA) s 4.

[24] See Disability Discrimination Act 1992 (Cth) (DDA).

[25] Racial Discrimination Act 1975 (Cth) (RDA) s 9(1).

[26] See Sex Discrimination Act 1984 (Cth) (SDA).

[27] Kasper Lippert-Rasmussen, The Routledge Handbook of the Ethics of Discrimination (Taylor and Francis Group, 2018) [21].

[28] Australian Human Rights Commission, ‘Direct Discrimination’ (Web Page) https://humanrights.gov.au/quick-guide/12026#:~:text=Direct%20discrimination%20happens%20when%20a,background%20or%20certain%20personal%20characteristics.

[29] Indrė Žliobaitė, ‘A Survey on Measuring Indirect Discrimination in Machine Learning’ (2015) Aalto University and Helsinki Institute for Information Technology 5 <https://arxiv.org/pdf/1511.00148.pdf>.

[30] Acts Interpretation Act 1901 (Cth) s 2C(1).

[31] Carltona Ltd v Commissioners of Works [1943] 1 All ER 560; Katie Miller et al, ‘Administrative Decision Making – Delegations and Avoiding Bias’ Victorian Government Solicitor’s Office 7, http://www.vgso.vic.gov.au/sites/default/files/9627_3_1.pdf.

[32] Hemmett v Market Direct Group Pty Ltd [2018] WASC 214.

[33] National Transport Commission, ‘Review of Guidelines for Trials of Automated Vehicles in Australia’ (Discussion Paper, May 2020) 13.

[34] ADA, s 14.

[35] DDA, s 5.

[36] DDA, s 6.

[37] RDA s 9(1).

[38] SDA s 5.

[39] Robin Allen and Dee Masters, ‘Artificial Intelligence: The Right to Protection From Discrimination Caused by Algorithms, Machine Learning and Automated Decision-making’ (2019) Academy of European Law 585.

[40] Marlene Satter, ‘Apple Credit Algorithm Accused of Discriminating Against Women, Including Cofounder Wozniak's Wife’, (BenefitsPro, 11 November 2019) https://search.proquest.com/docview/2313429588?rfr_id=info%3Axri%2Fsid%3Aprimo.

[41] Stephanie Wykstra, ‘Philosopher's Corner: What is "Fair"? Algorithms in Criminal Justice’ (2018) 34(3) Issues in Science and Technology 21; Karen Hao, ‘AI is Sending People to Jail – and Getting it Wrong’ MIT Technology Review (21 January 2019) https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/; Will Douglas Heaven, ‘Predictive Policing Algorithms are Racist. They Need to be Dismantled’ MIT Technology Review (17 July 2020) https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/.

[42] Helen Warrell, ‘Home Office's visa Algorithm Creates 'Real Risk' of Discrimination, Say Lawyers’, The Financial Times (10 June 2019) https://go.gale.com/ps/i.do?p=AONE&u=flinders&id=GALE|A588318041&v=2.1&it=r; Helen Warrell, ‘Home Office Ditches 'Racist' Algorithm Used in Visa Rulings’, The Financial Times (5 August 2020) https://search.proquest.com/docview/2440219947?rfr_id=info%3Axri%2Fsid%3Aprimo.

[43] Sarah G Murray et al, ‘Discrimination by Artificial Intelligence In A Commercial Electronic Health Record – A Case Study’ HealthAffairs (Website, 31 January 2020) https://www.healthaffairs.org/do/10.1377/hblog20200128.626576/full/.

[44] Alberto Cevolini & Elena Esposito, ‘From Pool to Profile: Social consequences of Algorithmic Prediction in Insurance’ (2020) 7(2) Big Data & Society [2]; Hogan Lovells Publications, ‘A Look at the Impact and Insurance Regulatory Challenges of InsurTech innovations, AI, Machine Learning, Blockchain, and Smart Contracts’ (Hogan Lovells Web Page, 18 March 2019) https://www.hoganlovells.com/en/publications/the-impact-and-insurance-regulatory-challenges-of-insurtech-innovations-ai-machine-learning-blockchain-and-smart-contracts.

[45] ADA, s 15 (1)(a).

[46] ADA, s 16.

[47] Kristian Lum and William Isaac, ‘To Predict and Serve?’, Significance (7 October 2016) https://doi.org/10.1111/j.1740-9713.2016.00960.x.

[48] Anne Washington, ‘How to Argue with an Algorithm: Lessons from the Compas-Propublica Debate’ (2016) 17 (1) Colorado Technology Law Journal 131.

[49] Stephanie Wykstra, ‘Philosopher's Corner: What is "Fair"? Algorithms in Criminal Justice’ (2018) 34 (3) Issues in Science and Technology 21.

[50] Banking Newslink, ‘Visa Smarter Stand-in Processing (Smarter STIP) Announced’, Farnham (28 August 2020) https://search.proquest.com/docview/2437734781?rfr_id=info%3Axri%2Fsid%3Aprimo.

[51] Marlene Satter, ‘Apple Credit Algorithm Accused of Discriminating Against Women, Including Cofounder Wozniak's Wife’, (BenefitsPro, 11 November 2019) https://search.proquest.com/docview/2313429588?rfr_id=info%3Axri%2Fsid%3Aprimo.

[52] Webb v Commonwealth Bank of Australia (Anti-Discrimination) [2011] VCAT 1592.

[53] See C. Mayer, M. Eggers and B. Radig, ‘Cross-database Evaluation for Facial Expression Recognition’ (2014) 24(1) Pattern Recognition and Image Analysis 124; Jacob Snow, ‘Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots’, American Civil Liberties Union (26 July 2018); Stephen Mayhew, ‘Biometric Border Control Solution by Vision-Box Deployed at Perth Airport in Australia’, Biometric Update (Web Page, 12 February 2019) https://www.biometricupdate.com/201902/biometric-border-control-solution-by-vision-box-deployed-at-perth-airport-in-australia; Joy Buolamwini and Timnit Gebru, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’ (2018) 81 Proceedings of Machine Learning Research 1, Conference on Fairness, Accountability and Transparency; Karen Hao, ‘Making Face Recognition Less Biased Doesn’t Make it Less Scary’ MIT Technology Review (29 January 2019) https://www.technologyreview.com/2019/01/29/137676/making-face-recognition-less-biased-doesnt-make-it-less-scary/; Drew Harwell, ‘Federal Study Confirms Racial Bias of Many Facial-Recognition Systems, Casts Doubt on their Expanding Use’, The Washington Post (20 December 2019) https://www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/.

[54] Jon Schuppe, ‘How Facial Recognition Became a Routine Policing Tool in America’ NBC News (11 May 2019) https://www.nbcnews.com/news/us-news/how-facial-recognition-became-routine-policing-tool-america-n1004251.

[55] Birgit Schippers, ‘Facial Recognition: Ten Reasons You Should Be Worried About the Technology’, The Conversation (Web Page, 21 August 2019) https://theconversation.com/facial-recognition-ten-reasons-you-should-be-worried-about-the-technology-122137.

[56] Nathalie Grandjean, Matthieu Cornelis & Claire Lobet-Maris, ‘Sociological and Ethical Issues in Facial Recognition Systems: Exploring the Possibilities for Improved Critical Assessments of Technologies?’ (2008) Tenth IEEE International Symposium on Multimedia.

[57] Jeff Larson, Julia Angwin, Lauren Kirchner, Surya Mattu, Dina Haner, Michael Saccucci, Keith Newsom-Stewart, Andrew Cohen and Martin Romm, ‘How We Examined Racial Discrimination in Auto Insurance Prices’ (ProPublica Web Page, 5 April 2017) https://www.propublica.org/article/minority-neighborhoods-higher-car-insurance-premiums-methodology.

[58] Kenneth Terrell, ‘Can Artificial Intelligence Outsmart Age Bias?’, AARP (Web Page, 16 January 2019) https://www.aarp.org/work/job-search/info-2019/ai-job-recruiting-age-bias.html.

[59] Miranda Bogen, ‘All the Ways Hiring Algorithms Can Introduce Bias’, Harvard Business Review (6 May 2019) https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias; also see Alcami Interactive, ‘Demo Video’ Alcami Interactive (Website) https://alcamiinteractive.com/.

[60] Nick Kolakowski, ‘How A.I. Could Enable Ageism, Discrimination in Hiring’, Dice (Web Page, 3 October 2019) https://insights.dice.com/2019/10/03/ageism-discrimination-ai-enabled-hiring/; Janneke Gerards and Frederik Zuiderveen Borgesius, ‘Protected Grounds and the System of Non-discrimination Law in the Context of Algorithmic Decision-making and Artificial Intelligence’ (2 November 2020) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3723873, citing Commission for Racial Equality, Medical School Admissions: Report of a Formal Investigation into St. George’s Hospital Medical School (Report, 1988).

[61] Kenneth Terrell, ‘Can Artificial Intelligence Outsmart Age Bias?’, AARP (Web Page, 16 January 2019) https://www.aarp.org/work/job-search/info-2019/ai-job-recruiting-age-bias.html.

[62] Helen Warrell, ‘Home Office Ditches 'Racist' Algorithm Used in Visa Rulings’, The Financial Times (5 August 2020) https://search.proquest.com/docview/2440219947?rfr_id=info%3Axri%2Fsid%3Aprimo.

[63] Sarah G Murray et al, ‘Discrimination by Artificial Intelligence In A Commercial Electronic Health Record – A Case Study’ HealthAffairs (Website, 31 January 2020) https://www.healthaffairs.org/do/10.1377/hblog20200128.626576/full/.

[64] Ziad Obermeyer, Brian Powers, Christine Vogeli and Sendhil Mullainathan, ‘Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations’ (2019) 366(6464) Science 447-453.

[65] Ahmed, S et al, ‘Examining the Potential Impact of Race Multiplier Utilization in Estimated Glomerular Filtration Rate Calculation on African-American Care Outcomes’ (2020) Journal of General Internal Medicine https://link.springer.com/epdf/10.1007/s11606-020-06280-5?sharing_token=8MThtq7CL8ELK5gPD6oQnve4RwlQNchNByi7wbcMAY4sS-Jau6qwCo5Cp22VpPLydZ7zEV2Ghms7-dSEf4KxhNL9QGs0yD4jG-UzXpR87s6vzVasyx_RQ2i5XBqALPpb81GcCAg85-sa5TiKx3B-JdGrGb2wL_b1f4MY0Efuako%3D.

[66] Tom Simonite, ‘How an Algorithm Blocked Kidney Transplants to Black Patients’, WIRED Business (Web Page, 26 October 2020) https://www.wired.com/story/how-algorithm-blocked-kidney-transplants-black-patients/.

