
Sydney Law Review

Faculty of Law, University of Sydney




Keeping the (Good) Faith: Implications of Emerging Technologies for Consumer Insurance Contracts

Zofia Bednarz* and Kayleen Manwaring†

Abstract

Developments in tools powered by Artificial Intelligence (‘AI’) and Big Data — both existing and emerging — are predicted to have a revolutionary effect on the insurance industry in the near future. These technological advances have begun materialising at challenging times for the insurance industry in Australia. Various inquiries, including the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry, have uncovered evidence of insurers’ unethical, and often unlawful, practices, which adversely affect consumers. The focus of this article is harm that may be caused to consumers arising out of use of AI- and Big Data-powered analytics in terms of discrimination, exclusion, and unfair prices. We analyse insurance-specific rules currently in place, including recent reforms. We focus on anti-discrimination laws, insureds’ duty of disclosure, and insurers’ obligations (including the duty of utmost good faith), and consider whether they sufficiently address the potential harms that using decision-making models may cause to consumers.

Please cite this article as:

Zofia Bednarz and Kayleen Manwaring, ‘Keeping the (Good) Faith: Implications of Emerging Technologies for Consumer Insurance Contracts’ [2021] SydLawRw 20; (2021) 43(4) Sydney Law Review 455.

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International Licence (CC BY-ND 4.0).

As an open access journal, unmodified content is free to use with proper attribution.

Please email sydneylawreview@sydney.edu.au for permission and/or queries.

© 2021 Sydney Law Review and authors. ISSN: 1444–9528

I Introduction

The insurance industry in Australia is currently facing important challenges and opportunities. These arise from two different forces driving transformation: first, emerging technologies; and second, various inquiries into the insurance industry, and especially recommendations set out by the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry (‘Royal Commission’). This article considers whether current law, including recent law reform, is adequate to protect consumers in relation to insurance contracts in the face of these industry disruptions.

Developments in tools powered by Artificial Intelligence (‘AI’) and Big Data — both existing and emerging — are predicted to soon have a revolutionary effect on the insurance industry.[1] For a data-driven industry such as insurance, enhanced data analytics promises cost reduction, the creation of new products and the potential to offer more efficient and tailored services to customers. In this article, we focus on insurance contracts with consumers and the influence AI tools have on business-to-consumer relationships.

Studies have demonstrated that restrictive regulation may be hindering the implementation of AI tools by financial services firms.[2] Regulatory approaches should therefore balance two main objectives: promoting technology uptake where it can provide benefits, and addressing risks associated with its use. In this article, we focus on harm potentially caused to consumers owing to the use of AI- and Big Data-powered analytics in terms of discrimination, exclusion and unfair prices. Any legal change should only be brought about as a response to evidence about real risks or threats associated with AI and the proven inadequacy of existing rules to a new sociotechnical reality. This article provides insight into the operation of current rules relating to underwriting of consumer insurance in the context of technological advances.

We critically examine current rules, including recent reforms, to establish whether adequate consumer protection has been, or can be, achieved, or whether other interventions are needed. In this article, we focus on provisions imposing specific obligations on insurers, particularly anti-discrimination laws and provisions of the Insurance Contracts Act 1984 (Cth) (‘ICA’). Our analysis is chiefly concerned with general insurance and life insurance consumer contracts. Privacy and data protection laws are relevant to insurers’ use of emerging technologies.[3] However, the scope of this article does not extend to laws directly applicable to personal data collection, sharing and processing. We assume, for the purposes of this article, that insurers have already obtained consumers’ data. This could happen lawfully, such as when consumers voluntarily agree to share their data with insurers in exchange for benefits such as premium reduction. We examine the extent to which the rules reviewed may constrain insurers in using consumers’ data and inferences made from that data, and how insurers’ access to vast amounts of consumers’ data influences parties’ rights and obligations under insurance contracts.

In Part II we explain both the technologies at issue and the consequent potential for consumer harm. It is rare for sociotechnical change to arise in a regulatory vacuum: just because a new or modified technology emerges, it does not mean its use is ungoverned by existing legal principles.[4] Moves to implement new or amended legal rules ‘should be justified by evidence about real threats or risks created by technological developments and/or the poor fit between existing law and new technological possibilities’.[5] One risk often associated with use of AI decision-making is that of bias or discrimination. In Australia, insurers are mostly free to use an individual consumer’s characteristics for underwriting purposes, subject to significant constraints imposed by anti-discrimination laws.[6] In Part III, therefore, we analyse the application of anti-discrimination provisions when AI tools are used for extracting meaningful features from data and decision-making.

In Part IV, we examine the insured’s duty of disclosure. We argue that the rationale of the insured’s duty of disclosure has been significantly affected by Big Data- and AI analytics.[7] In Part IV(A), we consider the information asymmetry between the parties and recent law reform resulting from Recommendation 4.5 of the Royal Commission’s Final Report.[8] This replaces the consumer’s duty of disclosure with a duty to take reasonable care not to make a misrepresentation to an insurer, a recommendation made to address perceived inadequacies in consumer protection under the disclosure regime. However, Recommendation 4.5 did not consider emerging technologies. Therefore, we ask in Part IV(B) whether the implementation of the Recommendation will sufficiently address the changing information balance between the parties to an insurance contract potentially brought about by Big Data and AI technologies.

Part V examines insurers’ obligations towards the insured and consumers’ right to know what data influenced insurers’ decisions about premiums and cover. We outline specific information duties imposed on insurers, as well as consider other duties. We then turn to the utmost good faith requirement in insurance contracts and consider its potential as a safeguard against using ‘black box’ models for underwriting of contracts. Part VI concludes.

II Emerging Technologies in the Insurance Industry

A Definitions

The use of AI and Big Data tools has led to different ways of doing business, including in the insurance and financial services industries. Deployment of these technologies and systems in insurance is emerging, rather than mature. Sociotechnical change arising out of these technologies — that is, the new things, conduct and relationships[9] enabled by them — has great potential to deliver both benefits and harms for consumers, as well as insurers.

These emerging technologies have also led to a plethora of literature on their sociotechnical attributes and affordances, as well as analysis of related concepts. A full literature review is outside the scope of this article. However, it is essential, when looking at legal problems potentially arising from sociotechnical change, that there is a good understanding of the nature of the technology discussed.[10] We therefore propose relevant definitions based on the approach of Guihot and Bennett Moses, who undertook a significant literature review relating to these technologies, in the context of legal and regulatory regimes.[11]

1 Artificial Intelligence and Machine Learning

The term ‘artificial intelligence’ (AI) is over 60 years old, but still lacks consensus as to definition.[12] This is unsurprising, given the density and contested nature of the concepts involved, such as ‘intelligence’. AI exists as a technological discipline or field of study, but also as a sociotechnical concept, albeit one that generates significant controversy.[13] Both the discipline and concept are ‘constantly evolving’.[14] Therefore, this article can only provide a snapshot at one point in time.

From a technical perspective, AI encompasses a range of tools and techniques, including: Machine Learning (‘ML’); computer vision; natural language processing; speech recognition; robotics; expert systems; and planning and optimisation.[15] An ‘AI system’ incorporates these tools or techniques, on their own or combined, into hardware and software.[16] As a sociotechnical concept, AI has been described in many ways, including as ‘systems that display intelligent behaviour by analysing their environment and taking actions — with some degree of autonomy — to achieve specific goals’ by the European High-Level Expert Group on Artificial Intelligence,[17] and as ‘a collection of interrelated technologies used to solve problems and perform tasks that, when humans do them, requires thinking’ by the Australasian Council of Learned Academies.[18]

ML is one of the more common forms of AI used in data-rich industries such as insurance. Computing devices and software can be programmed to ‘learn’: that is, ‘modify or adapt their actions ... so that these actions get more accurate’[19] over time, as measured against a ‘rational goal’.[20] However, one of the key limitations of this learning is that it does ‘not typically include contemplating the impact of action, reasoning about intervention, or counter-factual reasoning’.[21] ML models also tend to be empirically constructed, so outcomes are based on identification and application of correlations in the data, and causal reasoning is not used.[22]

Many forms of ML use methods directed to detection of patterns in data. These patterns are then used in predicting future data, or in probabilistic decision-making.[23] Neural networks are used in a form of complex ML (or ‘deep learning’). Deep learning is notable for the difficulty or impossibility, even for the original programmer, of working out why a particular decision was made or outcome produced.[24] The success of a particular ML application can be heavily dependent on the amount and quality of the ‘training data’ used to teach the machine, as well as the quality of the methods employed and whether the assumptions underlying the initial model are updated as circumstances change.
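The following minimal sketch (in Python, using entirely synthetic data and the scikit-learn library) illustrates this in simplified form: a model is ‘trained’ on historical examples, extracts a correlation from that data, and applies it to a new case without any causal reasoning. It is an illustration only, not a representation of any insurer’s actual model.

```python
# Minimal sketch of supervised machine learning: the model 'learns' correlations
# from historical ('training') data and applies them to new cases.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: [age, annual_mileage_km] for 1,000 past insureds,
# and whether each made a claim (1) or not (0).
X_train = np.column_stack([
    rng.integers(18, 80, size=1000),          # age
    rng.integers(2_000, 40_000, size=1000),   # annual mileage (km)
])
# Synthetic labels: claim probability rises with mileage (a built-in correlation).
p_claim = 1 / (1 + np.exp(-(X_train[:, 1] - 20_000) / 10_000))
y_train = rng.binomial(1, p_claim)

# 'Learning' = fitting the model to the training data.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The prediction for a new applicant rests purely on correlations found in the
# training data, not on causal reasoning about that individual.
new_applicant = np.array([[35, 30_000]])
print(model.predict_proba(new_applicant)[0, 1])  # estimated claim probability
```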

2 Big Data

In this article, we adopt the following meaning for ‘Big Data’: ‘approaches, techniques and methods that involve processing data with high volume, velocity and/or variety’.[25] The ‘volume’ of Big Data is huge: often reported as amounting to terabytes or more,[26] but also expanding exponentially. ‘Velocity’ refers to data generation that is dynamic, and constantly being created or modified.[27] This data dynamism requires very high processing speeds, so data insights are delivered in time to be useful.[28] ‘Variety’ of data ‘refers to the fact that data will not all lie within a single database architecture’[29] and includes ‘large volumes of structured and unstructured data [held] in different formats from which insights may be drawn’.[30] For example, different forms of data such as images, text, audio files, video files and numbers may all be linked.[31]

The technologies discussed offer tools allowing for data analysis that are unprecedented in terms of their potential for managing large quantities of data and uncovering new correlations and trends difficult or impossible for humans to discover. Current uses of deep learning techniques have brought about a paradigm shift: many models now — as opposed to traditional, statistical ML — work with unstructured data. The models themselves discover patterns in data and choose how to extract meaningful features from it. This new generation of ML models can process high volumes and variety of data to produce a wide range of inferences about individuals’ likely behaviour and appetite for risk-taking.

3 Opacity and Explainability

The ‘opacity’ (or lack of transparency) of many AI and Big Data processes has attracted significant attention. This attention arises particularly in contexts where those processes are used to make (or help to make) decisions resulting in social consequences, such as a decision to grant insurance or allow an insurance claim. A seminal article by Berkeley computer scientist and sociologist Burrell[32] outlines the most important types of opacity seen in algorithms and ML used for this purpose:

1. an opacity (corporate opacity) resulting from deliberate corporate (and state) secrecy, for reasons such as protecting trade secrets, limiting ‘gaming’, and avoiding scrutiny and/or regulation of dubious activities;[33]

2. ‘technical illiteracy’, as most people do not have specialist skills required to read code and understand algorithmic design; and

3. opacity due to complexity arising out of:

(a) multi-component systems, for example, with voluminous code, large engineering teams and/or many interlinkages between modules; and

(b) interplay between large datasets and the way the model processes data (dimensional opacity).[34]

This last category, 3(b), is distinctive to ML and needs further explanation. Some powerful ML models tend to ‘possess a degree of unavoidable complexity’ that may not be able to be resolved to a humanly comprehensible explanation, even by the designers.[35] This lack of interpretability is due to the ‘high dimensionality’[36] of ML, in particular:

• the use of Big Data; that is, ‘billions or trillions of data examples, and thousands or tens of thousands’ of data properties or ‘features’;[37]

• the nature of an ML model’s mechanism in handling large numbers of heterogeneous features;

• the use of mathematical optimisation techniques to deal with resource constraints; and

• the logic of the ML model changing as it ‘learns’.[38]

Three examples of this type of complexity follow.

First, emergent behaviour unable to be predicted by developers may be observed, arising from interplay between the dataset and changing logic and parameters of the model. Second,

[w]ith greater computational resources, and many terabytes of data to mine (now often collected opportunistically from the digital traces of users’ activities), the number of possible features to include ... rapidly grows way beyond what can be easily grasped by a reasoning human.[39]

This exponential increase in features not only makes reasoning about algorithms more difficult, but can also lead to ‘subtl[e] and imperceptibl[e]’ shifts in decisions.[40] Third, ‘weighted’ inputs may not map to real-world features intelligible to humans,[41] or may not be able to be mapped directly to an outcome because mathematical manipulation of model dimensions is needed to manage computing constraints.[42]

This high-dimensionality type of complexity is particularly characteristic of deep learning models, typically (although not always)[43] deep neural networks.[44]

If you had a very small neural network, you might be able to understand it. ... But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.[45]

A practical, somewhat simplified example of a particular ML application can assist in understanding both corporate opacity and dimensional opacity. For example, in 2017 researchers employed by the Sears department store company designed a model allowing someone with access to consumer shopping history (for example, through a customer loyalty scheme) to create abstract numerical profiles (called ‘vector representations’) of customers, in order to provide targeted product recommendations.[46] In this model, each customer’s profile can be described with a sequence of numeric values (such as a 200-position vector of continuous values), but these cannot be individually mapped to real-world characteristics. Vector representations like these can be used as additional input features in decision-making models, for example, for the purpose of insurance underwriting. Data brokers are also likely to create and on-sell these profiles to one or more third parties. To protect their business model, creators will have no incentive to disclose any proprietary information to third parties beyond the minimum needed for integration. So third parties (such as insurers) are likely to be given a ‘black box’ to integrate into their decision-making models. Even where it might be possible to trace the impact of individual dimensions of these vectors on the model’s output (regarding a person’s insurance risk profile), the dimensions still cannot be traced to any real-world properties of that person. However, they may latently encode any properties of that person. Therefore, neither the insurer nor an individual insured can know, or demonstrate, what properties (for example, race, gender, religion) are encoded in an individual’s vector representation.
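A minimal sketch of this idea follows. It is not the proprietary model described above: it simply factorises a synthetic purchase matrix to produce dense customer vectors, which is enough to illustrate why individual dimensions of such ‘vector representations’ cannot be mapped to real-world attributes.

```python
# Highly simplified sketch of the 'vector representation' idea. This is NOT the
# proprietary model referred to in the text: it uses a truncated SVD on a
# synthetic purchase matrix to produce dense customer vectors whose individual
# dimensions do not map to any identifiable real-world attribute.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical loyalty-scheme data: rows = customers, columns = products,
# values = number of purchases.
purchases = rng.poisson(lam=0.3, size=(500, 2000))

# Factorise the matrix; keep only 8 latent dimensions
# (a real system might keep 200, as the text notes).
U, S, Vt = np.linalg.svd(purchases, full_matrices=False)
customer_vectors = U[:, :8] * S[:8]   # one 8-number profile per customer

print(customer_vectors[0])
# None of these numbers corresponds to an identifiable property such as age,
# postcode or health status, yet together they may latently encode such
# properties. A third party (e.g. an insurer) receiving these vectors as input
# features is, in effect, handed a 'black box'.
```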

Understandably, due to these different forms of opacity, there have been substantial calls for ‘explainability’ of AI processes. Explainability ‘refers to any technique that helps the user or developer of ML models understand why models behave the way they do’.[47] There are different types of explainability, including ‘global explainability’, which ‘attempts to understand the high-level concepts and reasoning used by a model’,[48] and ‘local explainability’, which ‘aims to explain the model’s behaviour for a specific input’.[49]

Local explainability is most useful for a consumer seeking to understand a decision.[50] Common examples of local explainability techniques used in ML (illustrated in the sketch following this list) include:[51]

• feature importance scores (for example, 60% of the decision was based on age, which had a positive correlation with likely number of claims);

• counterfactual explanations (for example, had your annual mileage been lower by 10,000km, your premium would have been 50% cheaper);[52] and

• identification of influential training data points (for example, past claimants A, D and E had the most influence on predictions) (‘influential example identification’).
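The following sketch illustrates, in a deliberately simplified way, the first two of these techniques applied to a hypothetical linear pricing rule. Real explainability tools (such as SHAP or LIME) are considerably more sophisticated; the sketch only conveys the idea.

```python
# Illustrative sketch of two local explainability techniques for a single
# applicant, applied to a toy (hypothetical) linear premium model.
import numpy as np

weights = np.array([2.0, 0.03])      # per-year-of-age and per-km weights (made up)
base = 300.0

def premium(x):
    return base + weights @ x

applicant = np.array([35, 20_000])   # age, annual mileage (km)

# 1. Feature contribution scores (a crude local 'feature importance'):
contributions = weights * applicant
for name, c in zip(["age", "annual_mileage"], contributions):
    share = c / contributions.sum()
    print(f"{name}: contributes ${c:.0f} ({share:.0%} of the risk loading)")

# 2. A counterfactual explanation: how would the premium change if the
#    applicant drove 10,000 km less per year?
counterfactual = applicant - np.array([0, 10_000])
print(f"Premium now: ${premium(applicant):.0f}; "
      f"with 10,000 km less: ${premium(counterfactual):.0f}")
```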

However, a 2020 study carried out on ‘explainable AI’ found continuing barriers to use of ML explanations by end users in theory and in practice.[53] General problems include the risk of spurious correlations, lack of causal explanations and the need for experts to interpret explanations.[54] For the local explainability examples set out above, difficulties include:

• feature importance analyses resulting in unexpected explanations not aligned with human intuition;

• unfeasible and suboptimal counterfactuals;

• ‘intractability’ of influential example identification for large datasets (ie no efficient model exists); and

• sensitivity of influential example identification to outliers in the data.[55]

4 Inferences

Inference is the process of using a trained AI model to make a prediction. ML models have been shown to be capable of inferring things such as a person’s sexual orientation from photos of their face,[56] or a person’s suicidal tendencies from their posts on Twitter.[57] However, a question arises as to the accuracy of such predictions. Models operate on correlations between input data and target variables, rather than confirming a causal relationship between them.[58] Consequently, even where certain identified features correlate statistically with a risk outcome in an insurer’s model, this does not mean the predicted risk will be correct for a specific individual.

B Potential for Consumer Harm

It is difficult to assess the full extent to which insurers are currently using the technologies discussed in Part II(A), particularly more complex forms of ML such as deep learning. As with many corporate entities, substantial details of the technologies used by insurers are usually unknown to consumers, due to corporate secrecy practices.[59] However, there are some clear indications that insurers are increasingly using large datasets and AI and other automated tools and techniques to assist them in various business processes. These processes include: determining to whom they will offer insurance; the price and conditions on which insurance is offered;[60] processing claims; fraud detection; client communications; and payments.[61] For example, both the 140-year-old Zurich Insurance[62] and neophyte Lemonade Insurance Company[63] have publicly announced AI use in claims handling. In 2016, some United Kingdom (‘UK’) insurance businesses acknowledged use of ML techniques in assessing and pricing risk for motor and home insurance,[64] and United States (‘US’) businesses have indicated increasing use of and interest in Big Data-powered predictive analytics for property and casualty insurance.[65] Data-rich companies outside the insurance industry (such as Woolworths and Qantas) are now branching out into insurance, offering discounts linked to the provision of more and more granular lifestyle data through rewards card schemes.[66] Potential for the use of these technologies across the industry is significant, with their promise of cost reductions, especially labour costs, efficiency growth and increase in market share.[67]

Data profiling in insurance allows for personalisation of risk, and therefore individualisation of premium and cover. More precise risk assessment creates important benefits for insurers, in addition to possibly lower premiums[68] for many insureds. Personalisation of risk and pricing can incentivise consumers to adopt more prudent behaviour, when risk is under their control. Some insureds, however, will face higher premiums. Others may be considered uninsurable, and unable to change or control their risk profile.[69]

Advanced Big Data analytics, and the data collection required,[70] creates opportunities for insurers to access increasingly large amounts of data on consumers. Exploitation of those opportunities will exacerbate existing information asymmetries and power imbalances between insurers and consumers. Consequent harm will be caused if insurers abuse their increased access to information in order to refuse claims, ultimately depriving consumers of the benefit of their insurance. We discuss this issue in more detail in Part IV(A).

Use of AI tools for commercial decision-making also raises concerns regarding discrimination and bias.[71] Discrimination can be direct, as when a person is treated differently because of their membership in a protected class; or indirect, when a seemingly neutral rule leads to discriminatory outcomes.[72] In the context of algorithmic bias, the concept of ‘fairness’ is often used to refer to the need to prevent or limit indirect discrimination.[73] The risk of both direct and indirect discrimination does not arise solely from the use of ML models. In fact, indirect discrimination in the provision of financial services has been a longstanding problem, with practices such as ‘redlining’ reportedly used by humans (using traditional simple algorithms) in financial services businesses long before ML and Big Data tools emerged.[74] However, in the context of use of new ML models, indirect discrimination is concerning for several reasons.[75] Considering the exponential growth of data held by organisations, technological advancements and promised increases in cost- and time-effectiveness, a growing number of people may soon be affected.[76] While the problem is not new per se, the potential for harm is arguably greater.

ML models, like simpler algorithms, may reproduce human biases and introduce new ones, potentially leading to discriminatory decision-making.[77] AI models will inevitably indirectly discriminate (not necessarily unlawfully)[78] when the protected attribute is a predictive characteristic.[79] For example, if women were more likely to default on a loan and sex was a protected attribute, the model would look for proxies for sex (for example, clothes purchased, Facebook likes, and Netflix selections). Consequently, using ML models for predictive analytics requires businesses to test for bias and discrimination, as it will occur if a protected attribute has predictive value.
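The following sketch, again using synthetic data, illustrates this proxy effect: the protected attribute is withheld from the model, yet a correlated proxy feature allows the disparity to be reproduced, which a simple group-level bias test can reveal.

```python
# Sketch of 'proxy' (indirect) discrimination: the protected attribute is removed
# from the training data, but a correlated proxy lets the model reproduce the
# disparity anyway. All data is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5_000

sex = rng.integers(0, 2, size=n)                 # protected attribute (0/1)
proxy = sex * 0.8 + rng.normal(0, 0.3, size=n)   # e.g. a shopping-derived feature
                                                 # strongly correlated with sex
# Historical outcome in which sex happens to be predictive (default = 1).
default = rng.binomial(1, np.where(sex == 1, 0.30, 0.10))

# Train WITHOUT the protected attribute -- only the proxy is available.
model = LogisticRegression().fit(proxy.reshape(-1, 1), default)
predicted_risk = model.predict_proba(proxy.reshape(-1, 1))[:, 1]

# Bias test: compare average predicted risk across the two groups.
print("mean predicted risk, group 0:", predicted_risk[sex == 0].mean().round(3))
print("mean predicted risk, group 1:", predicted_risk[sex == 1].mean().round(3))
# The gap persists even though 'sex' was never given to the model.
```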

The 2020 Australian Human Rights Commission (‘AHRC’) Technical Paper illustrates five ways in which algorithmic bias arises:

(1) different base rates, when a protected group is predicted to be less profitable, because of historical and current disadvantage;[80]

(2) historical bias, ‘when the data used to train an AI system no longer accurately reflects reality’;[81]

(3) label bias, when human bias in the recording of the target (customer profitability) is reproduced;[82]

(4) contextual features and underrepresentation, when patterns and trends identified in the data across individuals that influence an AI model’s predictions are not transferable across different demographics;[83] and

(5) contextual features and inflexible models, when ‘the data contains insufficient information to capture the differing behaviour of the various demographics’.[84]

Algorithmic bias is often linked to human (conscious and unconscious) biases. This arises when historical data under- or over-representing certain phenomena or groups of people[85] is embedded in datasets used for training and testing of the models. In our example above, it may be that historically women were rarely granted loans, as their work was unpaid and they relied on their husband’s income. The model would therefore learn that loans with low default risk were only granted to men. In simple terms, for the model, discriminating against women would lead to the desired outcome; that is, maximising loans’ safety and profitability for the firm. This example shows how testing for bias is necessary.

The problem also derives from lack of reliable datasets for both training and testing. Reasons for this unreliability range from technological to economic to legal. At the technological end, de-identification of data leads to complications.[86] In general terms, for a dataset to be de-identified, all traces of information potentially revealing a subject’s identity have to be removed or changed.[87] The process usually involves removal of sensitive or protected attributes, such as gender-, health-, or ethnicity-related data.[88] Consequently, however, bias may become unmeasurable. De-identification is a useful tool for privacy compliance, as information that has been de-identified is no longer considered personal information. Also, competitive strategies (and in some cases competition law) prevent businesses from sharing datasets, which means models can only be trained and tested on a firm’s own historical records.

There is anecdotal evidence illustrating the effects of algorithmic bias in the private sector, with examples including technology giants Amazon,[89] Google[90] and Facebook.[91] Also, as noted, discrimination, especially indirect discrimination, may be difficult to observe and distinguish from treatment that is not discriminatory, as seen in the case of the Apple credit card. The card, a joint undertaking of Apple and Goldman Sachs, was launched in the US in August 2019. It was alleged that Apple’s system granted men much higher credit limits than women, or even rejected women’s applications for the card altogether.[92] The issue only came to light when husband-and-wife couples applied for the card and compared outcomes. However, after investigating, the New York State Department of Financial Services (‘NYSDFS’) concluded in March 2021 that there had been no discrimination, and that credit limits varied due to each person’s different credit score, indebtedness, income, credit utilisation, missed payments, and other credit history elements.[93] This case, however, reveals an issue arising out of the use of Big Data analytics: when clients receive personalised financial products, they cannot know immediately what features were taken into account, and subsequent investigation may be required.

Arguably, this has always been the case with any type of decision-making, including when done by humans or simple non-AI algorithms, as indirect discrimination is generally difficult to observe or discover. However, the problem is exacerbated by Big Data and AI technological advancements, tying back to the opacity/explainability problem discussed in Part II(A). It can be illustrated by the amount of data that could, potentially, be considered by insurers. For example, possible indirect discrimination would be even more difficult to discover if a model were basing its underwriting decision on a person’s Instagram account, combined with their grocery shopping history, Google search history, websites they visited and conversations overheard by a voice assistant, such as Alexa or Siri, information collected by smart homes, and telematic data from vehicles.

Use of genetic tests for underwriting of life insurance contracts illustrates how insurers’ access to too much information can be damaging to insureds.

The Financial Services Council has introduced a moratorium on the use of genetic tests in life insurance from July 2019 to June 2024. While the moratorium is in place, industry members cannot use adverse genetic test results for the purposes of underwriting of contracts, subject to certain conditions.[94] A ban on life insurance risk evaluation on the basis of adverse genetic test results is a solution advocated by many experts,[95] and has been implemented successfully in other jurisdictions, such as the UK, Canada and some European jurisdictions.[96] This is because studies have demonstrated serious harms to consumers linked to genetic tests in the life insurance context. Insurer practices contrary to the industry’s own guidelines have been uncovered,[97] such as refusing to take into account treatments undertaken by consumers that lower the risk of the genetic condition occurring.[98] Also, despite an Australian Law Reform Commission recommendation, no standardised quality control (scientific reliability, actuarial relevance and reasonableness) was established for the tests insurers used for underwriting contracts.[99] All these issues meant consumers were reluctant to participate in medical surveys or to undergo testing for a genetic condition, knowing this could preclude access to life insurance.[100] Similarities exist between genetic information and other kinds of information about potential insureds collected through digital means and inferred with the use of AI tools. Such digitally collected and inferred information could raise similar issues to genetic tests in terms of its reliability and its potential adverse effect on consumers.

Although indirect discrimination is not a feature exclusively linked to ML models,[101] there are community expectations as to the fairness of AI-based tools. If such tools are used by businesses for commercial gain,[102] they should not perpetuate bias and lead to (unlawful) discriminatory outcomes. We contend that no form of unlawful discrimination, especially as it disproportionately affects already disadvantaged groups, should be accepted.[103] Therefore, insurers (alongside all businesses dealing with consumers) who use new technologies should undertake specific steps aiming to ensure the fairness of decision-making models used.[104]

At the same time, it must be acknowledged that some form of indirect discrimination will almost always exist in any decision-making procedure,[105] and so it cannot be fully avoided in automated decision-making settings. The question is when it is unreasonable for indirect discrimination to exist, and thus when it should be considered unlawful.[106] With this consideration in mind, we proceed to examine relevant provisions regulating insurers’ obligations and the protection offered to insureds when AI analytics are used.

III Anti-Discrimination Laws and Automated Data Profiling in Insurance Contracts

We begin by analysing anti-discrimination laws and the protection available to consumers when algorithmic decision-making is used in the context of insurance contracts. Discrimination, in relation to the provision of goods and services, is understood as different, less favourable treatment than another person would receive in the same circumstances, on the basis of a certain characteristic of the person.[107] Not every type of different treatment is unlawful discrimination; it will only be considered as such for specific characteristics set out in the legislation. Insurers must comply with both state and federal discrimination legislation, although there are some exemptions for insurance contracts, as discussed below. At the federal level, discrimination based on the protected attributes of age,[108] disability,[109] sex (including, for example, marital status or pregnancy and breastfeeding)[110] and race (including national or ethnic origin)[111] is forbidden. At the state level, protected attributes may also include professional or industrial activity, trade, or occupation.[112]

Personalisation of premiums and cover in insurance contracts results in treating insureds differently based on their characteristics, as insureds face different levels of risk. Insurers are free to choose the factors on which they base the premium price (in most lines of insurance), unless anti-discrimination laws apply. However, even in the case of protected attributes, both state and federal anti-discrimination legislation makes it lawful in some cases for an insurer to refuse insurance, or to discriminate on policy terms. These exceptions allow insurers to discriminate based on age, disability and gender.[113] Normally, unlawful discriminatory conduct can also be permitted under a time-limited exemption.[114] The insurance-specific exceptions are subject to a two-limb test. Discrimination, to be lawful, needs to be based upon actuarial or statistical data on which it is reasonable for the insurer to rely, and needs to be reasonable having regard to the matter of the data and other relevant factors (the ‘data limb’).[115] If (and only if) such actuarial or statistical data is not available and cannot reasonably be obtained, the discrimination may be considered as reasonable having regard to any other relevant factors (the ‘no-data limb’).[116]

How, then, would an insurer’s Big Data analytics be viewed under the two limbs? Two questions should be considered. First, the possible classification of AI models under the data limb. Second, the implications of using AI models for the reasonableness of discrimination.

To answer the first question, we need to consider two distinct uses of AI models. If a model applies known actuarial or statistical data, there is no doubt it will fall under the data limb. For example, AI tools could examine potential insureds’ social media accounts, automatically searching for evidence of them engaging in high- or low-risk activities, according to known statistics. Evidence of smoking or extreme sports, such as paragliding, would indicate a higher risk to health or life, while evidence of behaviours such as regular exercise and healthy eating, would imply a lower risk. In such cases, the insurer would be able to prove they based the underwriting decision on these factors. Following the decision in Ingram v QBE Insurance (Australia) Ltd,[117] the insurer must show they actually knew and applied the relevant empirical evidence at the time of making the decision. It is not sufficient to show evidence confirming the correctness of the decision regarding premium and policy terms, if this evidence was unknown to the insurer when the policy was issued.

In contrast to traditional statistical models, ML models are commonly used in data mining processes to discover meaningful patterns in data, rather than starting from a given hypothesis as to which features are predictive of the target outcome.[118] As discussed in Part II(A), patterns derived by an ML model may be opaque, so that an outcome may not be easily traced to particular features of the data subject. In this scenario, the model used cannot be classified under the data limb as applying ‘known actuarial or statistical data’. The insurer would still be required to demonstrate that relevant statistical or actuarial data cannot be obtained, as application of the partial exemption for insurance requires the data limb and no-data limb to be applied in a strict sequence.[119] The sequential nature of the limbs implies that for the model to be considered under the no-data limb, it must first be established that no relevant data was available at the moment of contract underwriting.[120] If it is available, or can reasonably be obtained, it must not be ignored. This means that even under the no-data limb, the insurer would still need to be able to show what correlation the model uses: for example, a link between a person’s driving style and the risk they present for the purpose of car insurance.[121]

The second question we raise, as to the reasonableness of the discrimination, applies to both the data limb and the no-data limb, which means potential classification of the model under those limbs is less relevant. The discrimination must be reasonable having regard to the matter of the data (the data limb) or other relevant factors (both limbs). In QBE Travel Insurance v Bassanelli, the Federal Court indicated that ‘[a]ny matter which is rationally capable of bearing upon whether the discrimination is reasonable would fall within the umbrella of relevance’.[122] In the Bassanelli case, other insurers’ practices were considered relevant.[123] Guidelines issued by the Australian Human Rights Commission (‘AHRC Guidelines’) provide additional examples of ‘other relevant factors’:

• medical or other professional opinion;[124]

• relevant information about circumstances of the particular individual seeking insurance;[125]

• actuarial advice;[126] or

• insurer’s commercial judgement.[127]

The AHRC Guidelines indicate discrimination cannot be reasonable if based on untested assumptions. Case law states the test of reasonableness is

an objective one, which requires the court to weigh the nature and extent of the discriminatory effect, on the one hand, against reasons advanced in favour of the requirement or condition on the other. All circumstances of the case must be taken into account.[128]

It follows that potentially discriminatory decisions may require careful consideration and must take into account:

• ‘practical and business implications’;

• ‘whether less discriminatory options were available’;

• ‘the individual’s particular circumstances’;

• legislative objects, especially the object of eliminating discrimination as far as possible; and

• ‘all other relevant factors of the particular case’.[129]

Furthermore, for disability discrimination, an insurer who imputes a disability merely from a medical consultation is not acting reasonably under the AHRC Guidelines.[130]

In the case of indirect discrimination, the question of reasonableness is particularly complex. As discussed above, if a protected attribute is a predictive variable, even if it is not included in the data, the model will approximate it from other available data.[131] Lack of information on the protected attribute in the dataset will only make it more complicated to de-bias the model. Due to the opacity and complexity of AI models, it may be particularly challenging to establish whether a model discriminates at all, and if it does, whether the discrimination is unreasonable and therefore unlawful.

Let us consider a hypothetical example. Suppose that, statistically, an insurer’s model proposes higher car insurance prices for people with mental health issues, such as depression. Is this unreasonable discrimination? People with depression may be higher risk, but the feature the model is considering when setting the price is accident history, and not a person’s mental health. Therefore, it may be shown that insureds with depression who have better accident histories receive the same pricing outcomes as insureds without mental health issues, and that healthy insureds with bad accident histories pay higher prices, as do insureds with depression with similar accident histories. It may statistically be the case that people with depression more often have worse accident histories. However, accident history is a much better predictor of risk than mental health, so the model’s decision is not based on mental health. Perhaps, if accident history were unavailable, the model would take into account mental health as a good predictor of accident history, but the availability of more granular data prevents it. However, now imagine that depression is truly predictive of a higher risk, irrespective of a person’s accident history. In such a case, an ML model would actually be discriminating against people with a disability. An insurer would then need to show this discriminatory outcome is reasonable; that is, that an insured’s mental health is not only a good predictor of car insurance claims, but is a truly predictive variable in itself.[132]

One problem with indirect discrimination is the difficulty of observation. Anti-discrimination laws require the consumer actively to request an explanation as to why they are denied insurance, or why the policy (premium or cover) is on less advantageous terms. The right is useless unless the consumer is aware of the discrimination. However, indirect discrimination may be very difficult to observe (as discussed in the context of the Apple example above), a problem that is exacerbated in the case of opaque ML models. This might mean that using such models could contravene anti-discrimination laws, as it would make it considerably more difficult for consumers to question the reasonableness of discrimination.

Insurers’ historical lack of compliance makes the issue even more problematic. Cases of insurance discrimination against people with mental health conditions illustrate this well. The recent inquiry into discrimination in the travel insurance industry by the Victorian Equal Opportunity and Human Rights Commission has shown that such discrimination is a systemic issue.[133] The Commission investigated three major insurers, making up almost 40% of Australia’s travel insurance industry, over eight months.[134] It found over 365,000 policies were issued unlawfully discriminating against people with mental health conditions.[135] Despite Ingram,[136] which held blanket mental health insurance exclusions constituted unlawful discrimination, such practices are still widespread. Industry attitudes towards mental health disorders seem to be changing, as indicated for instance by adoption of the new General Insurance Code of Practice.[137] Nonetheless, cases of deliberate insurance discrimination may still be prevalent and potentially exacerbated by algorithmic decision-making.

Algorithmic bias, in combination with opacity, increases the risk of unlawful discrimination in an insurance context. This is especially concerning since various inquiries, including the Royal Commission, have demonstrated many insurers do not satisfactorily comply with existing rules.[138] The issue needs to be carefully considered by regulators and lawmakers, in particular regarding the possible introduction of clear rules requiring explainability of a model’s decisions, and of restrictions on the use of unexplainable models. A useful example of how such rules could operate comes from overseas. The NYSDFS issued a Circular Letter requiring life insurers using external sources of data to explain the rationale of their underwriting decisions.[139] They cannot ‘rely on the proprietary nature of a third-party vendor’s algorithmic processes to justify the lack of specificity related to an adverse underwriting action’.[140] Such an approach should limit insurers’ use of data to only those features with a causal influence on risk, and would thus require provision of an explanation if discriminatory treatment is suspected.

IV Insured’s Duty of Disclosure

A Information Asymmetry

Before the ICA, general insurance contracts in Australia followed common law principles focusing on the insured’s pre-contractual duty of disclosure.[141] The insured’s duty of disclosure, or at least the duty not to make misrepresentations to the insurer, is characteristic of insurance contracts in most jurisdictions.[142] This is because, as Lord Mansfield put it in the landmark case of Carter v Boehm:

Insurance is a contract upon speculation. The special facts, upon which the contingent chance is to be computed, lie most commonly in the knowledge of the insured only: the under-writer trusts to his representation, and proceeds upon confidence that he does not keep back any circumstance in his knowledge, to mislead the under-writer into a belief that the circumstance does not exist, and to induce him to estimate the risque, as if it did not exist.[143]

Information asymmetry between an insured and an insurer has always favoured the insured, who has, at least theoretically, all information needed to calculate risk.

It has therefore been more efficient to require the insured to disclose to the insurer all facts relevant to their risk, rather than expect the insurer to investigate each potential insured. The rationale for the insured’s duty of disclosure goes beyond allowing insurers to price risk correctly,[144] extending to preventing fraud and exploitation of the information imbalance by insureds.[145]

The paradigm, however, may be changing due to an ever-increasing creation and availability of digitalised data about consumers. Privacy protection aside, insurers are now able to collect consumers’ data from external, or ‘non-traditional’ sources; that is, sources different from the proposal forms that consumers typically complete for the purposes of underwriting contracts. These may include all sorts of smartphone applications, consumers’ social media presence, website cookies, smart homes,[146] healthcare and fitness devices,[147] cars,[148] and public surveillance devices with facial recognition capabilities, to name just a few.

AI tools make it possible and commercially feasible to analyse large amounts of personal data in order to extract meaningful features and ultimately evaluate insureds’ risk, although we cannot confirm this is already occurring in practice at a large scale. However, there is evidence that US insurers are already accessing potential insureds’ social media accounts for the purpose of underwriting of life insurance contracts.[149] The traditionally understood information asymmetry between the parties to an insurance contract is affected, as an insurer may obtain relevant information about the insured without asking them to provide it. Therefore, some argue the insured’s duty of disclosure should be significantly restricted or even reversed.[150]

Interestingly, a similar concern regarding information asymmetry and power imbalances between parties in consumer insurance contracts was shared by the Royal Commission. This concern resulted in Recommendation 4.5, ‘Duty to take reasonable care not to make a misrepresentation to an insurer’:

Part IV of the Insurance Contracts Act should be amended, for consumer insurance contracts, to replace the duty of disclosure with a duty to take reasonable care not to make a misrepresentation to an insurer (and to make any necessary consequential amendments to the remedial provisions contained in Division 3).[151]

The Royal Commission’s reasons for recommending an overhaul of the insured’s duty of disclosure did not consider ML models. Other concerning practices of insurance businesses motivated the Royal Commission’s proposed reform. Commissioner Hayne considered that the insured’s duty of disclosure in consumer insurance contracts (set out in s 21(1) of the ICA) placed a disproportionate burden on the insured. Section 21(1) provides general guidance as to what the insured must disclose: every matter relevant to the insurer’s decision whether to accept risk, which the insured knows is relevant, or a reasonable person in the circumstances could be expected to know to be relevant. The insured’s knowledge of relevance or materiality of the matters has been extensively discussed by scholars and judges.[152] It is predominantly an objective test, focusing on what a reasonable insured would understand as relevant. It means an insured, even if asked specific questions by an insurer, must also volunteer other information they know to be material to the risk.

However, the Royal Commission inquiry indicated such an approach might be outdated for consumer insurance. The TAL Life Limited (‘TAL’) case study cited in the Final Report of the Royal Commission[153] demonstrated, in Commissioner Hayne’s words, ‘the breadth and depth of the gap between what a consumer knows and what an insurer knows’.[154] TAL’s handling of claims made by three insureds under income protection policies was examined by the Royal Commission. The second insured’s case provides an interesting illustration of how information collection by insurers may constitute an attempt to refuse cover, rather than a correct evaluation of risk. The insured was diagnosed with cancer shortly after taking out a TAL income protection policy. The insurer, trying to find a reason for contract avoidance, sought a retrospective underwriting opinion in relation to some symptoms, which may have been indicative of the cancer and were experienced by the insured prior to entering the policy. The potential argument in the insurer’s favour was that had those symptoms been disclosed by the insured, the policy would have been refused.[155] It was admitted that TAL would review insureds’ claims and pre-contractual information provided in the form of a ‘fishing expedition’, collecting all information available, including irrelevant material.[156] Clearly, the use of sophisticated technology and the collection of consumers’ data from various sources would make such conduct much easier for insurers. Availability of consumers’ data, for example their daily shopping (as collected through retailers’ loyalty programs) or their Internet browser searches and other data collected through cookies,[157] could provide information to insurers potentially allowing them to avoid claims in similar circumstances.

This potential use of technology was not discussed by the Royal Commission,[158] as the investigated practices were historical and use of the technology in question did not yet appear to be widespread. Nevertheless, the accessibility of consumers’ personal data paired with algorithmic decision-making could exacerbate issues identified by the Royal Commission. Therefore, the Royal Commission’s Recommendation 4.5, and law reform proposals put forward in response and finally adopted (discussed in Part IV(B)) need to be evaluated from the point of view of AI and Big Data use by insurers.

The Royal Commission concluded that the duty to take reasonable care not to make a misrepresentation to an insurer is more appropriate, and less complex, than the current duty applicable to consumer contracts. Such a duty for the consumer places the onus on the insurer to ask the right questions to obtain relevant information as to risk.[159] The Australian Government agreed with the Royal Commission, proposing to amend the duty of disclosure for consumers in order to protect them from claims refusal due to inadvertent omissions or insurers’ failure to ask appropriate questions.[160] This new approach has been implemented in legislation, and the insured’s duty of disclosure in consumer insurance contracts has been replaced by the insured’s duty to take reasonable care not to make a misrepresentation to the insurer.[161] The changes apply to all consumer insurance contracts entered into, renewed or varied on or after 5 October 2021.[162]

B Insured’s Misrepresentation to the Insurer

What does the overhaul of the insured’s duty of disclosure mean? In consumer insurance contracts (when the insurance is obtained for personal, domestic or household purposes),[163] an insured will now have a duty to take reasonable care not to make a misrepresentation to the insurer before the contract is entered into.[164] This also includes life insurance contracts, as well as contracts formerly belonging to the ‘eligible contracts’ category.[165]

Insurers are protected against insureds’ misrepresentation through the remedies of contract avoidance or reduction of liability to an amount that places the insurer in the position they would have been in had the misrepresentation not been made.[166] Section 28(1) of the ICA provides that remedies are not available to the insurer if, even though a misrepresentation occurred, it would have entered the contract on the same terms. The onus of proof is on the insurer, who must demonstrate they would not have entered the contract had they known what the insured misrepresented. Case law shows that clear guidelines used by insurers as to whether they would accept various risks can be helpful in proving what they would or would not have done.[167] Use of (explainable) statistical models (ML or otherwise) could also be of assistance regarding such proof, as it would be possible to provide the new, previously undisclosed information item to the model to check how the outcome changes.
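The following sketch illustrates that last point with a purely hypothetical underwriting rule: the application is ‘re-scored’ with the corrected information and the two outcomes are compared. It is an illustration of the reasoning only, not a representation of any insurer’s actual model or process.

```python
# Illustrative sketch: if an insurer's underwriting model is available and
# explainable, the previously undisclosed or misrepresented item can be fed
# back through it to see whether the decision or premium would have differed.
# The 'underwrite' function below is entirely hypothetical.
def underwrite(applicant: dict) -> dict:
    """Toy underwriting rule: loads the premium for smokers and declines
    applicants over a risk threshold. Not a real insurer's model."""
    risk = 0.1
    risk += 0.3 if applicant["smoker"] else 0.0
    risk += 0.002 * max(applicant["age"] - 40, 0)
    return {"decision": "decline" if risk > 0.35 else "accept",
            "premium": round(1_000 * (1 + risk), 2)}

as_represented = {"age": 45, "smoker": False}   # what the insured told the insurer
as_corrected   = {"age": 45, "smoker": True}    # the true position

print("as represented:", underwrite(as_represented))
print("as corrected:  ", underwrite(as_corrected))
# If both runs produce the same terms, s 28(1) of the ICA would leave the insurer
# without a remedy, because it would have entered the contract on the same terms.
```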

In general terms, we consider that eliminating the consumer’s duty of disclosure is a positive development in the age of AI and Big Data. The new rules state that whether an insured has taken reasonable care not to make a misrepresentation is to be determined considering all the relevant circumstances.[168] Although it should be assumed, without more, that the insured is an average person with no special skills or knowledge, the test is ultimately subjective. If the insurer knew, or ought to have known, about particular characteristics or circumstances of the insured individual, these characteristics or circumstances must be considered in determining whether the insured has taken reasonable care not to make a misrepresentation to the insurer.[169] In this context, an insurer’s wide collection of data may have important consequences. It may lower the standard of care required of the consumer to discharge their duty, such as in cases where data collected by the insurer indicates that they have a certain type of vulnerability or disability. Therefore, the situations in which an insurer is deemed to have knowledge of the insured’s circumstances will need to be clarified.

The question is also how the standard of care required of the insured for the purpose of making relevant representations to the insurer would be affected by an insurer’s use of AI and Big Data tools. Theoretically, this could be relevant for the purpose of s 20B(2) of the ICA. The promises of profit brought about by the use of these technologies are likely to incentivise data collection by insurers, including collecting detailed data about a consumer,[170] as well as attempts to make sophisticated inferences.[171] Use of sophisticated ML models and Big Data collection drives the information asymmetry between the parties even further in an insurer’s favour. Therefore, the overhaul of an insured’s duty of disclosure in consumer insurance contracts is a welcome development, aimed at reflecting and remedying the imbalance of power and information asymmetry between the parties. Consequently, the standard of care required of the insured should also reflect this. The rules are new, and untested in this context, so we can only offer a general interpretation. Insurers should not be able to avoid paying a claim on the basis of an insured’s misrepresentation as to the risk, if the insurer knew the insured’s risk circumstances from collected data and subsequent analytics. This should be, at least partly, captured by s 28 of the ICA, with remedies unavailable to the insurer if they would have entered the contract on the same terms anyway.

The question, then, is when an insurer would be understood as 'knowing' something. In terms similar to s 21(2) of the ICA, which no longer applies to consumer contracts, it could be construed that an insurer knows matters 'known to an appropriate officer or agent of the insurer or contained in current official records'.[172] Courts have held that an insurer's knowledge could not automatically be inferred based solely on the insurer's access to some written information (for example, a newspaper extract) in paper-based general files held by the insurer.[173] Despite this, in our view the use of digital information systems by modern insurers means that the benefit of ignorance should not be easily available.[174] This argument is even stronger when AI and Big Data tools are used, especially when they are used to draw inferences.

V Insurer’s Obligations

A Information Duties

In light of the issues discussed above relating to the insured's duty of disclosure, we now consider the insurer's obligations towards the insured. These obligations originate in the overarching duty of utmost good faith under s 13 of the ICA, as well as in other specific statutory provisions.

As set out in Part II(A), the opacity of ML models is potentially problematic, especially considering the significant limits on local explainability. Opacity may cause significant harm to consumers, as algorithmic bias or discriminatory outcomes may be effectively unobservable to affected individuals. Also, in the context of disclosure duties, insureds may not know, justifiably so, what matters would be relevant to algorithmic risk assessment. If refused cover, or offered different cover or a different premium than expected, a lack of a local explanation deprives the consumer of potentially useful feedback from the insurer: feedback that could help them change their practices to obtain a better deal. The inability to trace this feedback to particular consumer features renders information on premiums and cover of doubtful value, given that the unidentified factors leading to the result may change at any time. The consumer's right to know which features affected the underwriting decision, and in what way, should be mirrored by a duty on the insurer to provide them with such information.
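To make concrete what a 'local explanation' of an individual underwriting decision might look like, the following sketch (in Python) computes a crude per-feature attribution by moving each feature, one at a time, from an 'average applicant' baseline to the applicant's declared value. The model, feature names and data are hypothetical assumptions for illustration; real explainability tooling is considerably more sophisticated.

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical applications and accept/decline outcomes.
X = pd.DataFrame({
    "age": [34, 58, 41, 29, 63, 45],
    "prior_claims": [0, 3, 1, 0, 2, 1],
    "annual_mileage_thousands": [8, 25, 12, 6, 30, 15],
})
y = [1, 0, 1, 1, 0, 1]  # 1 = cover offered, 0 = declined
model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = X.iloc[[1]]           # the consumer asking 'why was I declined?'
baseline = X.mean().to_frame().T  # an 'average applicant' reference point

# Attribution for each feature: the change in predicted acceptance probability
# when that feature alone is moved from the average value to the applicant's.
reference = model.predict_proba(baseline)[0, 1]
for feature in X.columns:
    probe = baseline.copy()
    probe[feature] = applicant[feature].values
    delta = model.predict_proba(probe)[0, 1] - reference
    print(f"{feature}: {delta:+.3f}")

An output of this kind, translated into plain language, is the sort of feedback that would allow a consumer to understand which declared features counted against them.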

There are various concrete duties imposed on insurers requiring certain conduct or the provision of information to insureds, but only some of those duties may play a role in the context of the use of Big Data and AI analytics.[175] Of particular interest are s 75 of the ICA and ss 160–163 of the General Insurance Code of Practice. Section 75 of the ICA provides an insured with a right to request written reasons from an insurer who:

• rejects a potential insured;

• cancels a contract;

• does not offer renewal; or

• offers insurance cover on terms less advantageous than terms they would otherwise offer.[176]

Sections 160–163 of the Code of Practice also deal with an insured's access to information. Insurers are under a duty to provide an insured with any information relied on in assessing their application for insurance cover, in handling their claim, or in responding to their complaint. An insurer may refuse to provide access to the requested information, but they cannot do so unreasonably.

As discussed above in the context of anti-discrimination laws, the consumer must proactively request information in writing, which constitutes a significant barrier. Even if they do so, the provision does not specify the nature of the data to which the individual would be entitled.[177] In the context of the use of genetic information in life insurance underwriting, it has been argued that the consumer should be entitled to 'an explanation, in layman's terms, of the reasons for the unfavourable underwriting judgment and the actuarial basis for that decision'.[178] However, neither s 75 of the ICA nor the General Insurance Code of Practice requires this. Furthermore, disclosing the data influencing underwriting decisions is costly and time-consuming even when ML decision-making is not involved,[179] and it also risks compromising commercial confidentiality.[180] While disclosing the exact data underpinning an underwriting decision to a consumer may not be necessary, there is no relevant guidance on the level of detail required by either s 75 of the ICA or the Code of Practice.

B Other Obligations Regarding Insurers’ Conduct towards Consumers

Various provisions require insurers to comply with a specified high standard of conduct, especially when they are dealing with consumers. Apart from those already mentioned, there is a series of rules arising from insurance contracts being financial products, set out in the Corporations Act 2001 (Cth) (‘Corporations Act’) and Australian Securities and Investments Commission Act 2001 (Cth) (‘ASIC Act’).[181] These require insurers to:

• provide their services efficiently, honestly and fairly;[182]

• comply with other obligations of Australian financial services licensees;[183]

• refrain from misleading or deceptive conduct;[184] and

• refrain from unconscionable conduct.[185]

Standards imposed by these rules, broadly speaking, require fair and honest conduct by financial services providers, similar to an insurer’s duty of utmost good faith.[186] Although these standards are relevant in the context of use of AI and Big Data tools, this article’s main focus is on the duty of utmost good faith, ‘a foundation stone and guiding principle of insurance and insurance law’.[187]

C Insurer’s Duty of Utmost Good Faith

An insurance contract is a special type of contract in terms of the protections offered to both contractual parties. Although historically the focus was on insureds' duties of disclosure, insurers also have obligations towards insureds stemming from utmost good faith. Carter v Boehm held that '[g]ood faith forbids either party by concealing what he privately knows, to draw the other into a bargain, from his ignorance of that fact, and his believing the contrary.'[188] This also includes the insurer, as 'the policy would equally be void, against the under-writer, if he concealed; as, if he insured a ship on her voyage, which he privately knew to be arrived: and an action would lie to recover the premium.'[189]

Underwriting of insurance contracts is highly regulated by statute, and specific provisions do offer some (limited) protection for consumers against harms resulting from use of AI models for analysing consumers’ data. Section 13 of the ICA restates the common law principle in Carter v Boehm, implying a term in insurance contracts requiring all parties to act, in respect of any matter arising under or in relation to the contract, with the utmost good faith. The duty has been divided into four ‘quadrants’, and covers both pre-contractual and post-contractual phases.[190] In this section, we focus on pre-contractual operation of the duty in respect of the insurer’s use of AI and Big Data technologies, including how pre-contractual use of AI and Big Data may relate to later decisions regarding payment of claims.

The insurer's duty of utmost good faith at the pre-contractual stage applies to all aspects of the parties' relationship. The scope of the duty has been judicially described in the following terms: 'an insurer's statutory obligation to act with utmost good faith may require an insurer to act, consistently with commercial standards of decency and fairness, with due regard to the interests of the insured'.[191] The insurer's obligation to act in accordance with standards of decency, fairness and honesty[192] in the pre-contractual phase can be divided into two components: the duty of disclosure towards the insured, and the insurer's conduct beyond disclosure. Both are relevant to the use of AI models in consumer insurance contracts.

First, will insurers need to inform the prospective insured what data is used in underwriting the contract, and how that data is processed? In the context of English law, it has been argued that insurers who use predictive models should be required to disclose all matters affecting risk evaluation, so that insureds can understand the basis upon which the proposed cover has been offered.[193] Australian law, however, although stemming from the same common law principles, has evolved differently, and the statutory duty in s 13 of the ICA differs from the common law duty. English law therefore offers limited assistance.

Case law relevant to the insurer's pre-contractual duty of utmost good faith demonstrates that the duty is not absolute. An insurer's knowledge of an insured's under-insurance will not necessarily amount to a breach of the duty, as set out in Kelly v New Zealand Insurance Co Ltd.[194] In Kelly, the insurer knew the insured's house was furnished with antiques and other expensive items, yet they accepted an increase in premium without explaining to the insured the consequences of failing to provide a list of items. However, the Court considered that only the insured knew what specific items were in the residence and their overall value. Consideration of the insurer's knowledge was important, as the insurer had a loss adjuster's report referring to several expensive items in the insured's residence. The Court's decision that the insurer was not in breach of the duty of utmost good faith seems to indicate that this report was seen as insufficiently specific. Moreover, the insured had refused to provide a list of items, possibly due to concerns about the tax authorities finding out about the contents of his house.[195]

Could the findings in Kelly be extrapolated to issues of collection and use of digital consumer data? The problem lies in the fact that AI models for extracting and inferring relevant data operate on probabilities and, in the current state of technological development, cannot in many cases provide specific, concrete 'knowledge' regarding the circumstances of insureds. The question is how detailed and specific an insurer's (undisclosed) 'knowledge' would need to be at the time the policy is granted for the insurer to be considered in breach of the duty of utmost good faith when it subsequently uses that knowledge to deny an insured's claim.

In the age of Big Data and AI, individual data footprints have increased exponentially due to the growth of social media and connected consumer devices, affordable technology to collect and process data is readily available, and data brokerage still constitutes a multi-billion dollar industry despite recent setbacks such as data breaches and restriction of access by platforms.[196] These conditions have significant potential to affect the information and power asymmetry between insurers and insureds, as insurers have unprecedented access to, and means of analysing, insureds' digital data.[197] The insurers' duty of utmost good faith requires them to act in a fair, reasonable and decent way, with regard to the interests of insureds.[198] This does not require insurers to put insureds' interests before their own.

However, we believe that the duty of utmost good faith, in the context of these unprecedented technical advantages, would require insurers at least to disclose to insureds what was discovered through data analysis. For example, in a factual scenario similar to Kelly, an insurer's duty to act with utmost good faith would imply the need to warn the insured that valuable items in their home, of which the insurer is aware, would not be covered if a detailed list is not provided. The use of technological advancements provides a great advantage to insurers. If they can rely on data accessed for underwriting purposes, they should be held accountable when it comes to paying claims. We argue that to act with utmost good faith means a quid pro quo ought to apply: if insurers use Big Data analytics to price the risk, this will affect their duty to pay claims.[199] The main problem is the opacity around the use of ML tools.[200] Therefore, in the context of the use of advanced data analytics by insurers, we consider that the minimum obligation of insurers should be disclosure of what is known and how it may affect potential claims. While this goes against the decision in Kelly, we argue that the use of new technologies is a significant change, and higher standards should apply.

Similar considerations apply to the second component of insurers' obligation to act decently, fairly and honestly: that is, insurers' conduct beyond disclosure. To act with utmost good faith means more than just to act in good faith, encompassing 'notions of fairness, reasonableness and community standards of decency and fair dealing'.[201] The courts have noted that '[w]hile dishonest conduct will constitute a breach of the duty of utmost good faith, so will capricious or unreasonable conduct.'[202] Using opaque ML models, especially when there is a demonstrable detrimental effect on insureds or prospective insureds (for example, if cover is denied), could be considered capricious and unreasonable. However, as discussed above in the context of indirect discrimination, detrimental and discriminatory effects on insureds may be unobservable to affected parties, as well as to insurers. For insurers to comply with the utmost good faith standard, it may therefore be necessary to consider carefully the operation of the ML models and Big Data collection used in underwriting consumer contracts. We argue that insurers who knowingly accept that their underwriting procedures use opaque, unexplainable models, without efforts to control bias and ensure procedural fairness, could be considered to fall short of the utmost good faith standard.
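What 'efforts to control bias' might involve can be illustrated, in deliberately simplified form, by the following sketch (in Python) of an output audit comparing decline rates across a protected attribute that is not itself a model input. The data and attribute names are hypothetical assumptions, and a real audit would be considerably more sophisticated.

import pandas as pd

# Hypothetical audit data: model decisions joined with a protected attribute
# held outside the underwriting model (eg obtained for auditing purposes only).
audit = pd.DataFrame({
    "declined": [0, 1, 0, 0, 1, 1, 0, 1],
    "protected_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Decline rate for each group of applicants.
rates = audit.groupby("protected_group")["declined"].mean()
print(rates)

# A large gap between groups is a flag for possible indirect or proxy
# discrimination, warranting closer review of the model and its training data.
print(f"decline-rate disparity: {rates.max() - rates.min():.2f}")

Routine checks of this kind, documented and acted upon, are one way an insurer could evidence that it has not simply accepted the outputs of an opaque model.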

The discussion about the usefulness of the duty of utmost good faith cannot be separated from the remedies potentially available to aggrieved consumers. The common law remedy mentioned in Carter v Boehm, that of voiding the policy and allowing premium recovery by an insured,[203] is of little use to insureds due to the nature of insurance contracts. However, breach of the statutory duty under s 13 of the ICA does give rise to damages, as a breach of an implied contract term that also applies to contract formation.[204] Damages are awarded in contract.

This raises a problem. If conduct offending against the duty of utmost good faith is pre-contractual, how can an implied term be breached?[205] If the contract is ultimately entered into, pre-contractual conduct breaching the duty of utmost good faith can result in a contract-based remedy.[206] However, when the contract is not entered into, there is no implied term, and so there will be no contractual remedy based on breach of the duty of utmost good faith. Further obstacles to access to remedies for breach of the insurers’ duty of utmost good faith include its uncertain definition, and difficulties in proving breach. Self-represented consumers before the Australian Financial Complaints Authority are unlikely to grasp fully the nature and extent of the duty and would likely fail in their argument.[207]

Section 13(2) of the ICA provides that a failure by a party to a contract of insurance to comply with the duty of utmost good faith is a breach of the ICA, attracting a civil penalty under s 13(2A).[208] Sections 75A–75ZE detail the enforcement of civil penalty provisions by the Australian Securities and Investments Commission ('ASIC'). Insurers who are breaching, or are likely to breach, their utmost good faith duties are under an additional duty to self-report to ASIC under s 912D of the Corporations Act.[209] For the duty to report to arise, the breach or likely breach must be significant, taking into account, for example, the number or frequency of similar previous breaches.[210] These rules require insurers to have a good understanding of the algorithmic decision-making processes they use to underwrite contracts, and an awareness of problems relating to bias or unexplainability that could potentially amount to a breach of their duty of utmost good faith. The self-reporting duty could become an important tool for ensuring compliance with financial services and insurance law in the context of the use of emerging technologies.

ASIC enforcement powers are outlined in ss 915A–915J of the Corporations Act, and include powers to vary, suspend or cancel an insurer's financial services licence. ASIC may also issue banning orders under ss 920A–920F, prohibiting a person from providing a financial service permanently or for a specified period. ASIC will normally act only where it foresees a general benefit to the market and the public. Isolated breaches are unlikely to attract more serious penalties such as a permanent banning order.[211]

ASIC's powers under s 55A of the ICA are also important. If an insured has suffered, or is likely to suffer, damage due to contract terms or the insurer's conduct breaching ICA requirements, ASIC may act against the insurer on behalf of the aggrieved party if it believes it is in the public interest to do so. ASIC may also act on behalf of a group of insureds. There is an evident public interest in preventing consumers from suffering damage owing to an insurer's use of AI and Big Data tools, given the scope of the likely harm. ASIC's powers in this context can provide important assistance to consumers. However, this is not a perfect remedy, especially because the regulator acts on behalf of, and at the application of, the aggrieved party. Additionally, breaches of utmost good faith duties and anti-discrimination rules may be systemic and potentially unobservable, in which case affected consumers may never be in a position to seek ASIC's intervention.

VI Conclusion

The increasing use of algorithmic decision-making for the purpose of underwriting consumer insurance will inevitably affect the parties' relationship. Insurance contracts are special, being contracts on speculation, and both common law and legislation have imposed specific duties on both parties, aimed at balancing rights and burdens. Rules applicable to these contracts have been developed across centuries, but it is only recently that sociotechnical changes have brought about a need for more far-reaching interventions. Recent inquiries into the insurance industry have, unfortunately, demonstrated important shortcomings regarding the fairness of consumer treatment by some insurers. The most important issue is that current rules have been breached by insurers to refuse claims, exclude cover, or unjustifiably raise premiums. Against this background, technological advancements bring yet another challenge for the insurance industry. On the one hand, AI and Big Data tools promise unprecedented benefits in terms of cost reduction and efficiency to insurers, and potentially also insureds. On the other hand, the potential for consumer harm is significant. Proposing considered solutions to these harms is beyond the scope of this article, but we offer some preliminary observations warranting further investigation.

Our analysis shows that changes in behaviour by insurers arising out of the use of emerging technologies such as AI and Big Data are not wholly unregulated by existing law. Some consumer protection provisions specifically applicable to insurance contracts would, on their face, be adequate to safeguard consumers against some of the more egregious potential abuses by insurers. However, the Royal Commission has uncovered a serious and systemic lack of compliance with those provisions, and therefore additional incentives — punitive, persuasive, or both — are needed. The Royal Commission proposed a more interventionist approach by ASIC, and we support that call.

However, the existing law also contains significant uncertainty in its application to existing and potential new conduct by insurers using these emerging technologies. Better guidance is required on what good behaviour by insurers looks like in the new sociotechnical reality. Enforceable codes of conduct, to which industry, regulators and consumers contribute, should be useful. A co-regulatory process[212] of this nature could go a long way to improving consumer trust in both the insurance industry and new technologies, benefitting all market players.[213]

Any new regulatory provisions, whether by code of conduct or otherwise, must deal with the current lack of transparency and explainability of ML decision-making. These deficiencies affect consumer choice and adequacy of cover, content and timing of regulatory intervention, and quality of judicial decision-making. The prospect of mandatory human intervention in decision-making must also be considered.[214] We recognise that this is a complex problem without easy solutions, but delay potentially exacts a significant price. Insurers attracted by the prospect of more cost-efficient business models are likely to make substantial investment in these technologies — with consequent entrenched resistance to regulatory intervention increasing over time.[215]


* PhD, Research Associate, Centre for Law, Markets and Regulation ('CLMR'), University of New South Wales ('UNSW'), Sydney, Australia; Member, Allens Hub for Technology, Law and Innovation. The author acknowledges funding from Santander Financial Institute, Fundación UCEIF (Spain) and CLMR. Email: z.bednarz@unsw.edu.au; ORCID iD: https://orcid.org/0000-0001-6719-8101.

PhD, Senior Lecturer, UNSW Law & Justice; Member, CLMR; Allens Hub for Technology, Law and Innovation. Email: kayleen.manwaring@unsw.edu.au; ORCID iD: https://orcid.org/0000-0002-1970-3430.

The authors thank attendees of the CLMR Research Symposium 2020, especially Scott Donald, John Morgan, Jeannie Paterson and Gail Pearson for valuable feedback on an earlier draft, and Maciej Rybinski (CSIRO Data61) for technological advice. However, all errors and omissions are the authors’ own.

[1] Swiss Re Institute, Data-Driven Insurance: Ready for the Next Frontier? (Sigma No 1/2020, 29 January 2020) 2; World Economic Forum, The Future of Financial Services: How Disruptive Innovations Are Reshaping the Way Financial Services Are Structured, Provisioned and Consumed (Final Report, June 2015) 59–68.

[2] One of the key findings by the Cambridge Centre for Alternative Finance and World Economic Forum, Transforming Paradigm: A Global AI in Financial Services Survey (Report, 29 February 2020) 12, 79–80.

[3] Brendan McGurk, Data Profiling and Insurance Law (Hart Publishing, 2019) 176–202; Florent Thouvenin, Fabienne Suter, Damian George and Rolf H Weber, ‘Big Data in the Insurance Industry: Leeway and Limits for Individualising Insurance Contracts’ (2019) 10(2) Journal of Intellectual Property, Information Technology and Electronic Commerce Law 209, 227–41.

[4] Lyria Bennett Moses, ‘How to Think about Law, Regulation and Technology — Problems with “Technology” as a Regulatory Target’ (2013) 5(1) Law, Innovation and Technology 1, 9.

[5] Michael Guihot and Lyria Bennett Moses, Artificial Intelligence, Robots and the Law (LexisNexis, 2020) 21.

[6] Jason Courtenay, ‘The Insurer’s Right to Choose Risk’ (2017) 40(1) Australian and New Zealand Institute of Insurance and Finance 36, 36–7.

[7] In the context of English law, McGurk (n 3) 123–64 argues that the insured’s duty of disclosure is rendered obsolete by use of these technologies and needs to be abolished or significantly reduced.

[8] Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry (Final Report, 4 February 2019) vol 1, 32 (‘Royal Commission Report’).

[9] Lyria Bennett Moses, ‘Why Have a Theory of Law and Technological Change?’ (2007) 8(2) Minnesota Journal of Law, Science & Technology 589, 594.

[10] Chris Reed, ‘Taking Sides on Technology Neutrality’ (2007) 4(3) SCRIPTed 263, 282; Bert-Jaap Koops, ‘Ten Dimensions of Technology Regulation: Finding Your Bearings in the Research Space of an Emerging Discipline’ in Morag Goodwin, Bert-Jaap Koops and Ronald Leenes (eds), Dimensions of Technology Regulation (Wolf Legal Publishing 2010) 312.

[11] Guihot and Bennett Moses (n 5) ch 1.

[12] Ibid.

[13] See, eg, Toby Walsh, It’s Alive! Artificial Intelligence from the Logic Piano to Killer Robots (La Trobe University Press, 2017) 17; House of Lords Select Committee on Artificial Intelligence, AI in the UK: Ready, Willing and Able? (Report of Session 2017–19, HL Paper 100, 16 April 2018) 13–14; Iria Giuffrida, Fredric Lederer and Nicolas Vermeys, ‘A Legal Perspective on the Trials and Tribulations of AI: How Artificial Intelligence, the Internet of Things, Smart Contracts, and Other Technologies Will Affect the Law’ (2018) 68(3) Case Western Reserve Law Review 747, 751–6; Roger Clarke, ‘What Drones Inherit from Their Ancestors’ (2014) 30(3) Computer Law & Security Review 247, 249.

[14] Guihot and Bennett Moses (n 5) 19.

[15] Toby Walsh, Neil Levy, Genevieve Bell, Anthony Elliot, James Maclaurin, Iven Mareels and Fiona Wood, The Effective and Ethical Development of Artificial Intelligence: An Opportunity to Improve Our Wellbeing (Report for the Australian Council of Learned Academies, July 2019) 32–6.

[16] Guihot and Bennett Moses (n 5) 14.

[17] European Commission High-Level Expert Group on Artificial Intelligence, ‘A Definition of AI: Main Capabilities and Disciplines’, 8 April 2019, <https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=56341> 1. Note, however, the Expert Group’s disclaimer that its description and definition of AI ‘is a very crude oversimplification of the state of the art’.

[18] Walsh et al (n 15) 14.

[19] Stephen Marsland, Machine Learning: An Algorithmic Perspective (Chapman and Hall/CRC Press, 2nd ed, 2015) 4.

[20] Guihot and Bennett Moses (n 5) 23.

[21] Ibid 15, citing Judea Pearl and Dana Mackenzie, The Book of Why: The New Science of Cause and Effect (Basic Books, 2018) fig 1.2.

[22] Roger Clarke, ‘Why the World Wants Controls over Artificial Intelligence’ (2019) 35(4) Computer Law and Security Review 423, 428, Table 2. See also Kalev Leetaru, ‘A Reminder that Machine Learning is about Correlations not Causation’ Forbes Online (15 January 2019) <https://www.forbes.com/sites/kalevleetaru/2019/01/15/a-reminder-that-machine-learning-is-about-correlations-not-causation/?sh=5f2b93d66161>.

[23] Kevin P Murphy, Machine Learning: A Probabilistic Perspective (MIT Press, 1st ed, 2012) 1.

[24] Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3(1) Big Data & Society 1, 3–5; Will Knight, ‘The Dark Secret at the Heart of Al’ (2017) 120(3) MIT Technology Review 54, 56.

[25] Guihot and Bennett Moses (n 5) 9, citing Rob Kitchin, The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences (Sage Publication Ltd, 2014) 68.

[26] Rob Kitchin and Gavin McArdle, ‘What Makes Big Data, Big Data? Exploring the Ontological Characteristics of 26 Datasets’ (2016) (January–June) Big Data and Society 1, 6.

[27] Guihot and Bennett Moses (n 5) 76.

[28] Ibid 9.

[29] Ibid.

[30] Ibid.

[31] Kitchin (n 25) 77.

[32] Burrell (n 24) 3–5. As of 11 December 2021, this article had been cited over 1200 times, according to Google Scholar <https://scholar.google.com/citations?user=Cp9FkPYAAAAJ&hl=en>.

[33] See Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015).

[34] For example, the Deep Patient ML system was able to come to accurate predictions of ‘the onset of psychiatric disorders like schizophrenia’ in hospital patients, but the developers have admitted they do not understand how it arrives at its predictions: Knight (n 24) 57.

[35] Burrell (n 24) 5.

[36] Ibid 2.

[37] Ibid 5.

[38] Ibid 5.

[39] Ibid 9. See also Pedro Domingos, ‘A Few Useful Things to Know about Machine Learning’ (2012) 55(10) Communications of the ACM 78, 82: ‘[i]ntuition [f]ails in [h]igh [d]imensions’.

[40] Burrell (n 24) 9.

[41] Ibid.

[42] Ibid.

[43] Michael Legg and Felicity Bell, Artificial Intelligence and the Legal Profession (Hart Publishing, 2020) 33.

[44] A ‘deep’ neural network is a neural network of more than one hidden layer: Jerry Kaplan, Artificial Intelligence: What Everyone Needs to Know (Oxford University Press, 2016) 34.

[45] Knight (n 24) 60, quoting Professor Tommi Jaakola, MIT Computer Science and Artificial Intelligence Laboratory. See also Guihot and Bennett Moses (n 5) 154.

[46] Bibek Behera, Manoj Joshi, Abhilash KK and Mohammad Ansari Ismail, ‘Distributed Vector Representation of Shopping Items, the Customer and Shopping Cart to Build a Three Fold Recommendation System’ (2017) arXiv:1705.06338 [cs.IR].

[47] Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José MF Moura and Peter Eckersley, ‘Explainable Machine Learning in Deployment’ in FAT* ‘20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (January 2020) 648, 648.

[48] Ibid 649.

[49] Ibid. See an alternative categorisation of ‘model-centric explanations’ and ‘subject-centric explanations’: Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a ‘Right to Explanation’ is Probably Not the Remedy You are Looking for’ (2017) 16(1) Duke Law & Technology Review 18, 22.

[50] Edwards and Veale (n 49) 22.

[51] Bhatt et al (n 47).

[52] See Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR’ (2018) 31(2) Harvard Journal of Law & Technology 841, 844–6.

[53] The study by Carnegie Mellon, Cambridge, University of Texas and IBM researchers (among others) is reported in Bhatt et al (n 47).

[54] Ibid 656.

[55] Ibid 651–5.

[56] Yilun Wang and Michal Kosinski, ‘Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation from Facial Images’ OSF (Research Project, updated 26 May 2020) <https://osf.io/zn79k/>.

[57] Bridianne O’Dea, Stephen Wan, Philip J Batterham, Alison L Calear, Cecile Paris and Helen Christensen, ‘Detecting Suicidality on Twitter’ (2015) 2(2) Internet Interventions 183.

[58] Anya ER Prince and Daniel Schwarcz, ‘Proxy Discrimination in the Age of Artificial Intelligence and Big Data’ (2020) 105(3) Iowa Law Review 1257, 1263–4.

[59] Pasquale (n 33) ch 2.

[60] Guihot and Bennett Moses (n 5) 231.

[61] Ibid 250.

[62] Brenna Hughes Neghaiwi and John O’Donnell, ‘Zurich Insurance Starts Using Robots to Decide Personal Injury Claims’, Reuters (online, 18 May 2017) <https://www.reuters.com/article/zurich-ins-group-claims-idUSL8N1IJ3L0>.

[63] Daniel Schreiber, ‘Lemonade Sets a New World Record: How A.I. Jim Broke a World Record without Breaking a Sweat’, Lemonade Blog (Blog Post, 1 January 2017) <https://www.lemonade.com/blog/lemonade-sets-new-world-record/>.

[64] Financial Conduct Authority (UK), Call for Inputs on Big Data in Retail General Insurance (Feedback Statement FS16/5, September 2016) [2.21] <https://www.fca.org.uk/publication/feedback/fs16-05.pdf>.

[65] ‘P/C Insurers Expanding Use of Predictive Models, Big Data across Functions’, Insurance Journal News (Article, 3 March 2016) <https://www.insurancejournal.com/news/national/2016/03/03/400846.htm>.

[66] ‘Woolworths Insurance: Life Insurance’ (Web Page, 2020) <https://insurance.woolworths.com.au/life-insurance.html>; ‘Qantas Insurance’ (Web Page, 2021) <https://insurance.qantas.com/>.

[67] Organisation for Economic Co-operation and Development (‘OECD’), The Impact of Big Data and Artificial Intelligence (AI) in the Insurance Sector (Report, 28 January 2020) 8–9.

[68] Actuaries Institute, The Impact of Big Data on the Future of Insurance (Green Paper, November 2016) 4.

[69] The UK car insurance market after the Court of Justice of the European Union ruling in Association Belge des Consommateurs Test-Achats ASBL v Conseil des Ministres (C-236/09) [2011] ECR I-00773 provides an interesting example: see Patrick Collison, ‘How an EU Gender Equality Ruling Widened Inequality’, The Guardian (online, 14 January 2017) <https://www.theguardian.com/money/blog/2017/jan/14/eu-gender-ruling-car-insurance-inequality-worse>.

[70] Machine learning has been described as ‘very data hungry’: World Economic Forum, The New Physics of Financial Services: Understanding How Artificial Intelligence is Transforming the Financial Ecosystem (Report, August 2018) 42.

[71] Centre for Data Ethics and Innovation (UK) (‘CDEI’), Review into Bias in Algorithmic Decision-Making (Report, November 2020) 21; Eirini Ntoutsi, Pavlos Fafalios, Ujwal Gadiraju, Vasileios Iosifidis, Wolfgang Nejdl, Maria-Esther Vidal, Salvatore Ruggieri, Franco Turini, Symeon Papadopoulos, Emmanouil Krasanakis, Ioannis Kompatsiaris, Katharina Kinder-Kurlanda, Claudia Wagner, Fariba Karimi, Miriam Fernandez, Harith Alani, Bettina Berendt, Tina Kruegel, Christian Heinze, Klaus Broelemann, Gjergji Kasneci, Thanassis Tiropanis and Steffen Staab, ‘Bias in Data-Driven Artificial Intelligence Systems: An Introductory Survey’ [2020] 10(3) WIREs Data Mining and Knowledge Discovery 1, 2.

[72] Waters v Public Transport Corp [1991] HCA 49; (1991) 173 CLR 349, 392 (Dawson and Toohey JJ). See also Australian Iron & Steel Pty Ltd v Banovic [1989] HCA 56; (1989) 168 CLR 165, 175 (Deane and Gaudron JJ); Commonwealth Bank of Australia v Human Rights and Equal Opportunity Commission [1997] FCA 1311; (1997) 80 FCR 78, 96–7 (Sackville J).

[73] See, eg, Finn Lattimore, Simon O’Callaghan, Zoe Paleologos, Alistair Reid, Edward Santow, Holli Sargeant and Andrew Thomsen, Using Artificial Intelligence to Make Decisions: Addressing the Problem of Algorithmic Bias (Australian Human Rights Commission Technical Paper, November 2020).

[74] Prince and Schwarcz (n 58) 1268–9.

[75] Lattimore et al (n 73) 26, 31, 40; Prince and Schwarcz (n 58) 1267–8; CDEI (n 71) 7; Frederik J Zuiderveen Borgesius, ‘Strengthening Legal Protection against Discrimination by Algorithms and Artificial Intelligence’ (2020) 24(10) The International Journal of Human Rights 1572, 1577; Ntoutsi et al (n 71) 9.

[76] CDEI (n 71) 24.

[77] Ibid 21; Ntoutsi et al (n 71) 2.

[78] See Part III in this article.

[79] Lattimore et al (n 73) 32; Prince and Schwarcz (n 58) 1273.

[80] Lattimore et al (n 73) 31–2.

[81] Ibid 34.

[82] Ibid 40.

[83] Ibid 45.

[84] Ibid 48–9.

[85] Ibid 34–45 (Scenarios 2 and 3).

[86] Christine M O’Keefe, Stephanie Otorepec, Mark Elliot, Elaine Mackey and Kieron O’Hara, ‘The De-Identification Decision-Making Framework’, CSIRO Data61 (Report, 18 September 2017) 4–5.

[87] Ibid 9–10, cf Privacy Act 1988 (Cth) s 6(1).

[88] O’Keefe et al (n 86) 18–20.

[89] Jeffrey Dastin, ‘Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women’, Reuters Technology News (online, 10 October 2018) <https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G>.

[90] Amit Datta, Michael Carl Tschantz and Anupam Datta, ‘Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination’ [2015] (1) Proceedings on Privacy Enhancing Technologies 92.

[91] Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove and Aaron Rieke, ‘Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Biased Outcomes’ (2019) 3(199) Proceedings of the ACM on Human-Computer Interaction 1.

[92] Neil Vidgor, ‘Apple Card Investigated after Gender Discrimination Complaints’, The New York Times (online, 10 November 2019) <https://www.nytimes.com/2019/11/10/business/Apple-credit-card-investigation.html>.

[93] New York State Department of Financial Services (‘NYSDFS’), Report on Apple Card Investigation (Report, March 2021).

[94] Financial Services Council, Standard No. 11: Moratorium on Genetic Tests in Life Insurance, 21 June 2019, 3–5.

[95] See, eg, Ainsley J Newson, Sam Ayres, Jackie Boyle, Michael T Gabbett, and Amy Nisselle, ‘Human Genetics Society of Australasia Position Statement: Genetic Testing and Personal Insurance Products in Australia’ (2018) 21(6) Twin Research and Human Genetics 533.

[96] Margaret Otlowski, Kristine Barlow-Stewart, Paul Lacaze and Jane Tiller, ‘Genetic Testing and Insurance in Australia’ (2019) 48(3) Australian Journal of General Practice 96, 98.

[97] Ibid 97.

[98] Jane Tiller, Susan Morris, Toni Rice, Krystal Barter, Moeen Riaz, Louise Keogh, Martin B. Delatycki, Margaret Otlowski and Paul Lacaze, ‘Genetic Discrimination by Australian Insurance Companies: A Survey of Consumer Experiences’ (2020) 28(1) European Journal of Human Genetics 108, 109.

[99] Otlowski et al (n 96) 98.

[100] Jane Tiller, Margaret Otlowski and Paul Lacaze, ‘Should Australia Ban the Use of Genetic Test Results in Life Insurance?’ (2017) 5 Frontiers in Public Health Article 330, 2.

[101] Anti-Discrimination Working Group of the Actuaries Institute (‘ADWG’), ‘The Australian Anti-Discrimination Acts: Information and Practical Suggestions for Actuaries’ (Paper Presented to the Actuaries Institute 20/20 All-Actuaries Virtual Summit, 3–28 August 2020) 28.

[102] See, eg, Ramnath Balasubramanian, Ari Libarikian and Doug McElhaney, ‘Insurance 2030: The Impact of AI on the Future of Insurance’, McKinsey & Company Insurance Practice (Article, 12 March 2021) <https://www.mckinsey.com/industries/financial-services/our-insights/insurance-2030-the-impact-of-ai-on-the-future-of-insurance>.

[103] Zuiderveen Borgesius (n 75) 1574–6, see especially discussion as to gender-biased advertising by Google, which shows to men more job ads for positions with higher salary than to women: at 1575 citing Datta, Tschantz and Datta (n 90).

[104] Lattimore et al (n 73) 14.

[105] Especially in underwriting of insurance, see ADWG (n 101) 28.

[106] Lattimore et al (n 73) 14.

[107] See, eg, Age Discrimination Act 2004 (Cth) s 14 (‘Age Discrimination Act’).

[108] Ibid.

[109] Disability Discrimination Act 1992 (Cth) (‘Disability Discrimination Act’).

[110] Sex Discrimination Act 1984 (Cth) (‘Sex Discrimination Act’).

[111] Racial Discrimination Act 1975 (Cth).

[112] Discrimination Act 1991 (ACT), Anti-Discrimination Act 1992 (NT), Anti-Discrimination Act 1998 (Tas), Equal Opportunity Act 2010 (Vic).

[113] At the federal level, see: Age Discrimination Act (n 107) s 37; Disability Discrimination Act (n 109) s 46; Sex Discrimination Act (n 110) s 41.

[114] At the federal level: Age Discrimination Act (n 107) s 44; Disability Discrimination Act (n 109) s 55; Sex Discrimination Act (n 110) s 44.

[115] Australian Human Rights Commission, Guidelines for Providers of Insurance and Superannuation under the Disability Discrimination Act 1992 (Cth) (Guidelines, November 2016) 9 (‘AHRC Guidelines’) considers that ‘[t]he question of whether it is reasonable for a provider to rely upon particular data involves “an objective judgment about the nature and quality of the actuarial or statistical data” in each case’, citing QBE Travel Insurance v Bassanelli [2004] FCA 396; (2004) 137 FCR 88, 95 [30] (‘Bassanelli’). The AHRC Guidelines provide examples of actuarial or statistical data that demonstrate that it is reasonable to rely upon (in relevant circumstances): underwriting manuals, local data, international studies, and relevant domestic and international insurance experience.

[116] As per provisions listed in nn 112–13, with discrimination on the basis of sex being the main exception, as it does not allow for discrimination when actuarial or statistical data is not available: see Sex Discrimination Act (n 110) s 41(1).

[117] Ingram v QBE Insurance (Australia) Ltd [2015] VCAT 1936 (‘Ingram’).

[118] See Prince and Schwarcz (n 58) 1274.

[119] AHRC Guidelines (n 115) 6.

[120] ADWG (n 101) 17.

[121] Cf the study described in Gert Meyers and Ine Van Hoyweghen, ‘“Happy Failures”: Experimentation with Behaviour-Based Personalisation in Car Insurance’ (2020) 7(1) Big Data and Society 1.

[122] Bassanelli (n 115) 99 [53].

[123] Ibid 97 [41].

[124] AHRC Guidelines (n 115) 11–12 [4.2(a)], 13 [4.2(c)].

[125] Ibid 12–13 [4.2(b)].

[126] Ibid 13 [4.2(d)].

[127] Ibid 14 [4.2(f)].

[128] Bassanelli (n 115) 99 [51], quoting Secretary, Department of Foreign Affairs v Styles [1989] FCA 342; (1989) 88 ALR 621, 623.

[129] AHRC Guidelines (n 115) 7, in the context of disability discrimination, but it can be extrapolated to all types of discrimination.

[130] Ibid 17.

[131] See Lattimore et al (n 73) 30–34 (Scenario 1).

[132] ADWG (n 101) 29–30.

[133] See Victorian Equal Opportunity and Human Rights Commission, Fair-Minded Cover: Investigation into Mental Health Discrimination in Travel Insurance (Report, 12 June 2019).

[134] Ibid 11.

[135] Ibid.

[136] Ingram (n 117).

[137] Insurance Council of Australia, General Insurance Code of Practice (updated 5 October 2021) pt 9, especially s 104 <https://insurancecouncil.com.au/code-of-practice/>.

[138] See, eg, Royal Commission Report (n 8) vol 2, 298–301, 313–15, 327–8, 344–7, 378–9, 385–90, 408–13, 425–30, 441–3, 453–4.

[139] NYSDFS, ‘RE: Use of External Consumer Data and Information Sources in Underwriting for Life Insurance’, Insurance Circular Letter No. 1 (2019) to All Insurers Authorized to Write Life Insurance in New York State (Circular Letter, 18 January 2019) (‘NYSDFS Circular Letter’).

[140] Ibid II.B.

[141] In addition to state legislation, such as the Insurance Act 1902 (NSW), and Imperial statutes such as the Life Assurance Act 1774 (Imp), Fires Prevention (Metropolis) Act 1774 (Imp) and Marine Insurance Act 1788 (Imp), all repealed by the ICA.

[142] Anthony A Tarr and Julie-Anne Tarr, ‘The Insured’s Non-disclosure in the Formation of Insurance Contracts: A Comparative Perspective’ (2001) 50(3) International and Comparative Law Quarterly 577, 577.

[143] Carter v Boehm [1766] EngR 157; (1766) 3 Burr 1905; 97 ER 1162, 1164 [1909].

[144] Robin Bowley, ‘Transparency in the Insurance Contract Law of Australia’ in Pierpaolo Marano and Kyriaki Noussia (eds), Transparency in Insurance Contract Law (Springer International Publishing, 2019) 549, 552.

[145] McGurk (n 3) 141.

[146] Allianz and Deutsche Telecom provide a service whereby a smart home alerts the insurer if damage occurs: OECD (n 67) 12–13.

[147] Fitbit has a partnership with United Health in the US, and UK insurer Vitality collects activity data via Apple watches and offers discounts and direct contributions: OECD (n 67) 13. PitPat offers a similar service for pet insurance: OECD (n 67) 13.

[148] In Italy, over 2 million vehicles have been fitted with devices allowing for tracking of speed, braking, acceleration, and times of day when the vehicle is used: OECD (n 67) 12.

[149] NYSDFS Circular Letter (n 139).

[150] Julie-Anne Tarr, ‘Data Profiling and Insurance Law by Brendan McGurk’ (2019) 30(2) Insurance Law Journal 131, 131.

[151] Royal Commission Report (n 8) vol 1, 32.

[152] See, eg, Rob Merkin, ‘What Does an Assured ‘Know’ for the Purpose of Pre-Contractual Disclosure?’ (2016) 27 Insurance Law Journal 157.

[153] Royal Commission Report (n 8) vol 2, ch 4.

[154] Ibid vol 1, 297.

[155] Ibid vol 2, 341.

[156] Ibid vol 2, 339.

[157] Brigid Richmond, A Day in the Life of Data: Removing the Opacity Surrounding the Data Collection, Sharing and Use Environment in Australia (Consumer Policy Research Centre Report, May 2019) 15, 30, 34–5.

[158] Cf McGurk (n 3) 123.

[159] Royal Commission Report (n 8) vol 1, 300.

[160] Draft Explanatory Memorandum, Financial Sector Reform, Hayne Royal Commission Response – Protecting Consumers (2020 Measures) Bill 2020 (Cth), rec 4.5 (Duty of Disclosure to Insurer) [1.10]–[1.11].

[161] Financial Sector Reform (Hayne Royal Commission Response) Act 2020 (Cth) sch 2 pt 2, amending the ICA with effect from 1 January 2021.

[162] Ibid sch 2 s 37.

[163] Insurance Contracts Act 1984 (Cth) s 11AB(1) (‘ICA’). See also sub-ss (2)–(3).

[164] Ibid s 20B(1).

[165] Financial Sector Reform (Hayne Royal Commission Response) Act 2020 (Cth) sch 2 s 9 repealed ICA (n 163) s 21A.

[166] ICA (n 163) ss 28(2)–(3).

[167] Julie-Anne Tarr, ‘Insurance Contract Disclosure: An Uncertain Balance’ (2015) 26(2) Insurance Law Journal 109, 115–16. See, eg, Michail v Australian Alliance Insurance Co Ltd (2014) 67 MVR 21, especially 29 [31].

[168] See ICA (n 163) s 20B(2)–(3) for examples of such relevant circumstances.

[169] Ibid s 20B(4).

[170] Marshall Allen, ‘Health Insurers Are Vacuuming up Details about You: And It Could Raise Your Rates’, NPR (online, 17 July 2018) <https://www.npr.org/sections/health-shots/2018/07/17/629441555/health-insurers-are-vacuuming-up-details-about-you-and-it-could-raise-your-rates>.

[171] See the explanation of inferences in ML models in Part II(A)(4) above.

[172] Julie-Anne Tarr, ‘“Knowledge” and Pre-Contract Disclosure under the Insurance Contracts Act(2018) 46(6) Australian Business Law Review 355, 367.

[173] Ibid, see Commercial Union Assurance Co of Australia Ltd v Beard [1999] NSWCA 422; (1999) 47 NSWLR 735, 735. For the UK, see, eg, Malhi v Abbey Life Insurance [1996] LRLR 237, 237.

[174] Tarr, ‘“Knowledge” and Pre-contract Disclosure under the Insurance Contracts Act’ (n 172) 367.

[175] As discussed in Part I, data protection laws are outside the scope of this article.

[176] Cf Sex Discrimination Act (n 110) s 41(1)(e).

[177] Australian Law Reform Commission (‘ALRC’), Essentially Yours: The Protection of Human Genetic Information in Australia (Report No 96, May 2003) (‘Human Genetic Information Report’) vol 2, 717–18 [27.71]–[27.74] citing, eg, Margaret Otlowski, Submission No G159 to the ALRC, Joint Inquiry into the Protection of Human Genetic Information (24 April 2002) (‘Otlowski, Submission’) and Institute of Actuaries of Australia, Submission No G224 to the ALRC, Joint Inquiry into the Protection of Human Genetic Information (29 November 2002).

[178] Human Genetic Information Report (n 177) vol 2, 719 [27.80] citing Centre for Law and Genetics, Submission No G048 to the ALRC, Joint Inquiry into the Protection of Human Genetic Information (14 January 2002).

[179] Human Genetic Information Report (n 177) vol 2, 720 [27.81] citing Investment and Financial Services Association, Submission No G244 to the ALRC, Joint Inquiry into the Protection of Human Genetic Information (19 December 2002).

[180] Human Genetic Information Report (n 177) vol 2, 720 [27.81], citing Otlowski Submission (n 177).

[181] Australian Securities and Investments Commission Act 2001 (Cth) s 12BAA(7)(d) (‘ASIC Act’); Corporations Act 2001 (Cth) ss 764A(1)(d)–(e) (‘Corporations Act’).

[182] Corporations Act (n 181) s 912A(1)(a). For a detailed discussion of the concept, see Peter Mann and Stanley Drummond, ‘Utmost Good Faith, Unconscionable Conduct and Other Notions of Fairness: Where Are We Now?’ (2017) 29(1) Insurance Law Journal 1, 47–52.

[183] Corporations Act (n 181) s 912A(1). For a detailed discussion, see Mann and Drummond (n 182) 52–4.

[184] ASIC Act (n 181) s 12DA(1); Corporations Act (n 181) s 1041H(1).

[185] ASIC Act (n 181) ss 12CA–12CB; Corporations Act (n 181) s 991A(1). See also Mann and Drummond (n 182) 20–47.

[186] Mann and Drummond (n 182) 1–2.

[187] Ibid 2.

[188] Carter v Boehm (n 143) 1164 [1910].

[189] Ibid 1164 [1909].

[190] Peter Mann, ‘The Elusive Second Quadrant of Utmost Good Faith: What Is the Scope of an Insurer’s Pre-Contractual Duty of Utmost Good Faith?’ (2016) 27 Insurance Law Journal 176, 176; Bowley (n 144) 549; Mann and Drummond (n 182) 8–9.

[191] CGU Insurance Limited v AMP Financial Planning Pty Ltd (2007) 235 CLR 1, 12 [15].

[192] Ibid 42–3 [130]; Australian Securities and Investments Commission v Youi Pty Ltd [2020] FCA 1701, [7] (‘ASIC v Youi’).

[193] McGurk (n 3) 164.

[194] Kelly v New Zealand Insurance Co Ltd (1996) 130 FLR 97, 111–12 (‘Kelly’).

[195] Ibid 97–8.

[196] See David Lazarus, ‘Shadowy Data Brokers Make the Most of Their Invisibility Cloak’, Los Angeles Times (online, 5 November 2019) <https://www.latimes.com/business/story/2019-11-05/column-data-brokers>; Steven Mendelez and Alex Pasternack, ‘Here Are the Data Brokers Quietly Buying and Selling Your Personal Information’, Fast Company (online, 3 February 2019) <https://www.fastcompany.com/90310803/here-are-the-data-brokers-quietly-buying-and-selling-your-personal-information>; Zack Whittaker, ‘Data Brokers Track Everywhere You Go, But Their Days May Be Numbered’, TechCrunch (online, 9 July 2020) <https://au.news.yahoo.com/data-brokers-track-everywhere-days-130038354.html>.

[197] The Privacy Act 1988 (Cth) places some restrictions on data collection, use and disclosure. It is beyond the scope of this article to provide a detailed analysis of this legislation in this context. However, there is ample evidence supporting a view that the legislation has, in practice, provided few real limits on use of consumer data: see, eg, Australian Competition and Consumer Commission, Digital Platforms Inquiry Final Report (Report, June 2019); Kayleen Manwaring, Katharine Kemp and Rob Nicholls, (mis)Informed Consent in Australia (Report for iappANZ, 31 March 2021) <http://handle.unsw.edu.au/1959.4/unsworks_75600> .

[198] Speno Rail Maintenance Australia Pty Ltd v Metals & Minerals Insurance Pte Ltd [2009] WASCA 31; (2009) 226 FLR 306, 330 [144], 331 [153] (Beech AJA).

[199] This idea was suggested by an anonymous reviewer of this article.

[200] See Part II(A)(3) above.

[201] AMP Financial Planning Pty Ltd v CGU Insurance Ltd [2005] FCAFC 185; (2005) 146 FCR 447, 475 [89].

[202] Ibid.

[203] Carter v Boehm (n 143) 1164 [1909].

[204] Imaging Applications Pty Ltd v Vero Insurance Ltd [2008] VSC 178, [54]–[55].

[205] Mann (n 190) 176.

[206] Ibid 180.

[207] Ryan Nattrass, ‘Extending the Unfair Contract Terms Laws to Insurance Contracts: Is the Duty of Utmost Good Faith Fair Enough?’ (2012) 23(3) Insurance Law Journal 299, 310.

[208] Introduced through Treasury Laws Amendment (Strengthening Corporate and Financial Sector Penalties) Act 2019 (Cth).

[209] In more detail, see Mann and Drummond (n 182) 53–4.

[210] See Corporations Act (n 181) s 912D(1)(b).

[211] See Explanatory Memorandum, Insurance Contracts Amendment Bill 2013 (Cth) [1.14].

[212] See, eg, Roger Clarke, ‘Regulatory Alternatives for AI’ (2019) 35(4) Computer Law & Security Review 398, 406–7.

[213] On trustworthy AI, see SA Hajkowicz, Sarvnaz Karimi, Tim Wark, Caron Chen, M Evans, Natalie Rens, Dave Dawson, Andrew Charlton, Toby Brennan, Corin Moffatt, Sriram Srikumar and K J Tong, Artificial Intelligence: Solving Problems, Growing the Economy and Improving Our Quality of Life (CSIRO Data61 Report, 2019) 56.

[214] OECD (n 67) 22–3.

[215] Kayleen Manwaring, ‘Will Emerging Information Technologies Outpace Consumer Protection Law? The Case of Digital Consumer Manipulation’ (2018) 26(2) Competition and Consumer Law Journal 141, 177–81.

