Law, Technology and Humans
Prompts and Large Language Models: A New Tool for Drafting, Reviewing and Interpreting Contracts?
Brydon T Wang
School of Law, Queensland University of Technology, Australia
Abstract
Keywords: Large Language Models (LLMs); generative AI; parol evidence rule; contracts.
1. Introduction: Reimagining Contracts in the Age of AI
Contracts have long been the cornerstone of our commercial fabric and serve to set out the bargain arrived at by parties to an agreement.[1] The drafting of these contracts captures mutually agreed promises and obligations in a sequence of clearly and precisely selected words that seek to anticipate and resolve potential disputes while ensuring these obligations are enforceable at law throughout the lifespan of the contract. The primacy of the way a contract is drafted is safeguarded by the parol evidence rule, which restricts the use of extrinsic evidence, such as negotiation history or pre-contractual communications, to ‘add to or subtract from or in any manner vary or qualify’ the plain meaning of written terms of a contract, or to ‘assist in the interpretation of the written contract’.[2] However, the increasing use of Large Language Models (LLMs) to assist with, or even replace, lawyers in drafting contractual provisions requires us to consider how prompts engage with the parol evidence rule and how these inputs can be used in the review and interpretation of contractual clauses.
The use of prompts and AI systems to generate contracts offers a glimpse into a future where contracts are not merely drafted and interpreted, but articulated through algorithmic analysis and linguistic precision.[3] Examples of the current state of development include models that can write text to a level of sophistication that is indistinguishable from a human actor, synthesise and summarise large volumes of disparate text, generate code and respond to unscripted instructions.[4] These models have now been introduced into the legal industry, with the majority of large law firms in Australia having implemented or expressed interest in implementing generative AI in their legal practice.[5] Law schools, such as the School of Law at Queensland University of Technology, have also begun to integrate the use of prompts and critique of generative AI in preparing students for the augmented legal practice that will become commonplace in industry.
While the use of LLMs within the context of contract drafting is still in its infancy, the speed of development clearly demonstrates its potential to impact how the industry will operate and extend the ability for non-lawyers to understand and meet legal obligations contained within contracts.[6] There is potential for contracts to be tailored to granular negotiation points, drafted with greater accuracy and be accessible to those without legal expertise.[7] LLMs can act as tireless legal assistants, generating boilerplate clauses, identifying inconsistencies and suggesting alternative wording. Such developments could also include using AI to review AI-generated clauses against ‘control’ clauses from trusted sources to carry out content similarity analyses to determine whether they are fit for purpose,[8] or whether they match the ideal risk profile or position of a client. Thus, prompt engineering, the art of crafting precise instructions for these LLMs, allows us to harness their potential for these disparate tasks of drafting, reviewing and interpreting contractual terms.[9]
As stated above, the parol evidence rule restricts use of extrinsic evidence, such as prior correspondence related to the negotiations, to alter, add to, subtract from or otherwise modify the plain meaning of the express terms of a contract or aid in its interpretation.[10] However, it is unclear whether the prompts, which guide the generation of the LLM’s production of the contractual terms, are part of the negotiation history or pre-contractual communications, or should, in practice, be documented and scheduled to form part of the contract. Likewise, it is unclear whether prompts and the decisions, policies and practices of how to formulate these prompts (‘prompt engineering’) should be treated as admissible extrinsic evidence when the courts undertake contractual interpretation. This question demands careful consideration by courts and legal scholars as the answer will shape how prompts and prompt engineering are to be documented and treated by the courts.
This article explores the potential opportunities and challenges of integrating LLMs into contract drafting, examining the practical applications and legal implications of AI-assisted contract drafting and interpretation. It begins with a brief non-technical explanation of how LLMs currently operate in legal practice before focusing on prompts and contract drafting. The article then considers the current limitations of the technologies and the challenges LLMs and their developers need to overcome when deploying these tools to draft contracts. It then considers the legal status of prompts and undertakes a novel assessment of how prompts interact with the parol evidence rule across four scenarios. Finally, the article suggests a set of practice approaches for incorporating prompts into the contracting process before presenting a conclusion about the state of the evolving use of LLMs in contract drafting.
2. Integrating LLMs in Legal Practice
LLMs are built on a probability analysis of sequences of words.[11] Simply put, the primary way in which LLMs work is to model how a particular concept or thought is communicated in a string of words. This allows the model to serve as a simulacrum of a lawyer by both drafting contractual provisions and interpreting what a contractual provision sets out to be the legal obligations of the parties. These models need to be ‘trained’ in two phases. In the first, pre-training phase, developers curate a large corpus of text from which the model learns, in a largely self-supervised fashion, how certain words will likely be put together.[12] This helps the model understand grammar, syntax, the rules of language and context by demonstrating how words are typically combined. Following this, the model is further refined, including through human annotation of, and feedback on, its outputs. These models leverage autoregression, where the model assigns a higher probability to particular words that would follow sequentially in the given scenario.[13] Additionally, LLMs have significantly advanced word-embedding processes that cluster similar words to produce observations on the relationships between words,[14] where these ‘relationships are then exploited to provide a specific role to the words in the sentence and to determine the meaning of the sentence itself’.[15] In doing so, a model is thus able to put together (or predict) a series of words in response to a given set of text based on its training data.[16] Accordingly, these models are challenged by novel situations that have not been included in their training data.
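To make this autoregressive prediction concrete, the following is a minimal sketch, assuming the Hugging Face transformers library and the small open GPT-2 model (neither is specified in this article). It surfaces the probability the model assigns to each of the five most likely next words in a clause-like fragment.

```python
# Minimal sketch of autoregressive next-token prediction.
# Assumptions: Hugging Face `transformers` and the open GPT-2 checkpoint;
# the article does not prescribe a particular model or library.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The Contractor shall indemnify the"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)        # convert scores to probabilities
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p:.3f}")  # five most probable continuations
```

The model does not ‘choose’ a word in any legal sense; it simply ranks every token in its vocabulary by probability, which is why novel situations outside the training data degrade its output.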
Current LLMs are called ‘transformer’ models (giving us the ‘T’ in GPT) and operate through a neural network architecture.[17] As these models can be trained on any dataset, it is important to ‘pre-train’ a model (giving us the ‘P’ in GPT).[18] This training occurs in two stages: first, the model is trained on a large, generic dataset;[19] then, the model is ‘initialised’ and further trained to home in on a specific dataset to carry out the relevant task.[20] The model is also required to carry out multiple tasks, such as ‘sentiment classification, question and answering, information extraction, and text generation’.[21] Where this is applied to the language of the legal industry, the model is then able to imbibe and work with ‘legalese’.[22]
For the LLM to provide an output that is useful to the user, however, it needs to respond to a prompt: an input text string that contains the instruction to the LLM. Such instructions can include context and other examples of what the user is seeking as a useful output.[23] The optimisation of these instructions to the LLM is described as ‘prompt engineering’. Where the prompt offers only a limited description and a handful of examples, the model must execute the task with ‘few-shot learning’;[24] where no examples are provided, with ‘zero-shot learning’.[25] Despite such limited prompts, GPT-3 has demonstrated an ability to generate human-like text on the basis of the predictions it makes from them. This is because the model has acquired, through its training process, a phenomenal number of learned variables (or ‘parameters’) with which to make predictions. Within the span of just a few years, OpenAI progressed the scale of the model from 117 million parameters (GPT-1) to 175 billion parameters (GPT-3),[26] training the model against large corpora of text and then refining it with further training using Reinforcement Learning from Human Feedback (RLHF).[27] Current iterations of the model are larger still, allowing these models to respond fluently to general conversation, questions and instructions.
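The difference between zero-shot and few-shot prompting is easiest to see in code. The sketch below is illustrative only: it assumes the OpenAI Python SDK and a chat-completion model (the article does not tie prompt engineering to any particular vendor), and the example clauses and model name are invented for the illustration.

```python
# Illustrative zero-shot vs few-shot prompts for a drafting task.
# Assumptions: the OpenAI Python SDK (>=1.0) with an API key in the
# environment; any chat-completion service could be substituted.
from openai import OpenAI

client = OpenAI()

zero_shot = "Draft a confidentiality clause for a construction subcontract."

few_shot = (
    "Draft a confidentiality clause for a construction subcontract.\n"
    "Follow the style of these (invented) examples:\n"
    "1. 'The Recipient must not disclose Confidential Information except "
    "as required by law.'\n"
    "2. 'Confidential Information excludes information already in the "
    "public domain.'"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model would do
    messages=[{"role": "user", "content": few_shot}],
)
print(response.choices[0].message.content)
```

The few-shot variant constrains the model towards a house drafting style; the zero-shot variant leaves the style entirely to the model’s training distribution.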
Given the ‘legalese’ of the legal industry,[28] there is a separate training process to take more generic large language models and train these to run specifically for the legal industry, and to then generate contractual provisions.[29] Law firms have begun to train on-premises LLMs, such as ‘Vicuna’, which have been ring-fenced from external services that allow the firm to provide a point of difference in their automation processes and service offerings, and to address client confidentiality and data privacy issues that may arise through the use of public LLMs.[30]
Since the rapid adoption and public awareness of OpenAI’s ChatGPT 3.5, which began at the end of 2022, LLMs have made significant forays into legal practice. Researchers have conducted experiments to determine whether ChatGPT could pass the Bar exam,[31] construct a law journal article,[32] pass a law school exam[33] or serve as a law professor with the assistance of a human user with no legal experience.[34] In studying how effective these LLMs are in legal practice, Hendrycks and colleagues divided the field of legal practice into three areas: professional law – the tasks typically performed in private practice; jurisprudence – the theoretical engagement with law; and international law.[35] The researchers found that the model studied was able to respond to factual questions in the areas of jurisprudence and international law with high accuracy.
However, the model showed low accuracy (‘just above the random chance mark’) when asked to carry out tasks within the area of professional law.[36] The low accuracy attributed to LLMs when engaging with professional law may be due to limited access to training data (client contracts, memoranda of advice, etc) that is typically held tightly by law firms. Despite this, as firms are exploring how LLMs can be used as a tool to aid in technology-assisted legal review of documents (such as eDiscovery and due diligence), the increasing maturity of the technology and its application to the field of contract drafting will likely yield more accurate results in the near term.
Such efficiency comes with a caveat. LLMs are susceptible to the biases and the limitations of their training data. Unchecked, they could perpetuate existing inequalities and generate contracts that are unfair or unconscionable, or that perhaps even distort the market. Lawyers supervising these AI legal assistants will still be required to provide human oversight and to review AI-drafted provisions for error or illegality, even where there are tools to automate review of contractual provisions. Responsible integration of AI into the drafting process therefore requires careful curation of training data, transparency in algorithms and human oversight to ensure ethical and legally sound outcomes, and to guard against falsely identified patterns or hallucinations (outputs that are inaccurate and not grounded in input data or reality).[37] For example, in May 2024, Allens developed the Allens AI Australian Law Benchmark as an extension to the Linklaters LinksAI English Law Benchmark to test the ability of LLMs to answer legal questions specific to the jurisdiction. They found that ‘even the best performing LLMs ... were not consistently reliable when asked to answer legal questions ... [and contained] “Infection” by legal analysis from larger jurisdictions with different laws’.[38] As individual law firms and lawyers start to train their own LLMs, new products and bespoke in-house models will bring greater sophistication and a reduction in the current limitations of these models within the legal industry.
This machine ability to articulate the contractual obligations of the parties requires the LLM to replicate two primary forms of legal work. First, LLMs must translate instructions into legal writing. LLMs receive instructions via prompts to generate contractual provisions from their training set. This results in the automated drafting of contractual provisions intended to set out the agreed obligations negotiated between the parties. Second, LLMs must undertake contractual interpretation of various provisions to ensure they adhere to the bargain struck between the negotiating parties. The LLM is required to interpret what has been input via the prompts and any outputs that are iteratively captured in subsequent prompts to review what has been generated and ensure the output is fit for purpose and captures the intended drafting outcomes of the negotiation process.
Thus, beyond drafting, LLMs may hold immense promise for interpreting ambiguous contractual terms. Traditional methods of contractual interpretation may not keep pace with the changing face of contract administration or evolving legal interpretations. In their work applying big data to contract interpretation, Ghodoosi and Kastner observe three different types of canons of contractual interpretation: (1) textual canons, which aid the court in interpreting contractual language to determine the intent of parties; (2) substantive canons, which flow from public policy considerations; and (3) overarching goal canons, which aim to ultimately give effect to the intended bargain struck by the parties.[39] Their empirical analysis of canons of interpretation using natural language processing, machine learning and statistical tools suggests more opportunities by which LLMs could serve to untangle ambiguity in contractual terms.
LLMs can delve into the vast corpus of legal knowledge, analysing context, precedent and linguistic nuance to offer more precise and contextually relevant interpretations of ambiguous terms. Arbel and Hoffman suggest that this ‘generative interpretation’ provides a ‘cheap and predictable contract interpretation methodology ... [that represents] a major advance in contract law’.[40] This opens the door to more comprehensive and efficient dispute resolution, potentially reducing litigation costs and strengthening contractual relationships.
3. Drafting Contracts with LLMs and Prompt Engineering
The emerging nexus of LLMs and contract law offers a glimpse into a potential future where contracts are drafted by algorithms and the precision of prompt engineering. One of the most compelling advantages of AI-assisted drafting is its unparalleled speed and efficiency.[41] For example, contracts used in major projects and infrastructure delivery are notoriously complex and voluminous, and can be particularly cumbersome to draft.[42] LLMs trained on vast legal corpora could potentially identify which relevant template clauses are suitable to the particular risk profile of the client in a given scenario, identify inconsistencies in terminology and cross-references to clauses and scheduled sub-contracts, and suggest alternative wording at a pace unimaginable for even the most seasoned legal practitioner. These efficiencies add pressure on lawyers to deploy LLMs into the drafting process as lawyers play a key role in ‘[increasing] overall transaction value by reducing transactional inefficiencies’.[43]
Note, however, that AI-assisted drafting is a different technology from smart contracts, which are decentralised self-executing contracts that sit on a blockchain (or distributed ledger technology).[44] These smart contracts can be programmed into the scripting language of the blockchain to automate the performance of legal obligations where programmed conditions are met. These include self-payment of invoices on shipment delivery or ‘smart stock, self-enforcing derivatives, “trustless” letters of credit and proof of existence’.[45] While there are ongoing attempts to bring together AI and blockchain technologies,[46] a distinction remains between smart contracts and AI-generated contracts.[47] Smart contracts are deterministic algorithms[48] that are part of the blockchain ecosystem, which includes Decentralised Autonomous Organisations (DAOs). These DAOs operate through smart contracts and can ‘coordinate an array of social and commercial activities’ to ‘generate and execute contracts with third parties’.[49] While this is also another example of algorithmically generated contracts, it is a different scenario from the one explored in this article, where the LLM is not deterministic but instead able to continuously learn from inputs (such as prompts) and modify decision-making processes used to generate and interpret contractual provisions. This article also recognises that there are contracts that could set out the contractual terms in a programming language, which raises separate interpretation issues regarding how a court should undertake interpretation of such programming language and how such an interpretation would differ from traditional interpretation of natural language.[50] A discussion of these challenges is outside the scope of this article.
In 2021, Aggarwal and colleagues proposed the use of an AI framework that would identify the type of contractual clause and then separately identify content for that particular clause either in full or in part to supplement an incomplete clause.[51] Such use of LLMs would be highly valuable when applied to identify ‘departures’ or proposed amendments to contractual provisions during the negotiation phase. Work performed by junior lawyers to set out tables of departures to identify how provisions have been amended, the accepted or proposed amendments to the relevant clause and the reasons for such amendments can be augmented with LLMs. For example, these AI models could be used to identify any amendments to a particular clause, how the clause departs from the preferred risk allocation (when compared with a control clause) and proposed amendments to bring the clause into closer alignment to the preferred risk allocation, and to draft a justification to the other party supporting the proposed amendments. The prospect of using AI to track and support these amendments offers an interesting opportunity to learn about the evolution of contract terms. Jennejohn, Nyarko and Talley observe that lawyers are often delegated the ‘lion’s share of negotiation points’, with lawyers often undertaking the granular task of working through the terms long after other stakeholders have departed the negotiation tables.[52] They suggest that examining this process is critical as ‘contracting conventions – and thus market practices – unfold and evolve over time’ through the actions of lawyers.[53]
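As a concrete illustration of the ‘table of departures’ task described above, the sketch below uses Python’s standard difflib module to surface word-level departures between a control clause and an amended clause. The clauses are invented, and this is only the mechanical starting point for the LLM-augmented workflow the article envisages, which would go on to characterise and justify each departure.

```python
# Word-level 'departures' between a control clause and an amended clause.
# Uses only the Python standard library; the clause text is invented.
import difflib

control = ("The Contractor shall complete the Works by the "
           "Date for Practical Completion.")
amended = ("The Contractor shall use reasonable endeavours to complete "
           "the Works by the Date for Practical Completion.")

departures = [tok for tok in difflib.ndiff(control.split(), amended.split())
              if tok.startswith(("+", "-"))]
print(departures)  # e.g. ['+ use', '+ reasonable', '+ endeavours', '+ to']
```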
Lam and colleagues suggest two techniques to use an LLM for contract review of clauses.[54] First, LLMs can perform content similarity analysis between a given clause and a clause extracted from a trusted source,[55] such as a ‘control’ contractual term located within a law firm’s bank of precedents. This task replicates the work done by junior lawyers, who often work to refine and amend template agreements or individual clauses extracted from earlier contracts of a similar nature. Second, LLMs can extract key phrases from sentences and, as these models are able to analyse vast volumes of legal documents, they can identify commonly used phrasing across the field of documents, ensuring consistency across clauses and adherence to established legal principles.[56] This precision minimises incidences of inaccurate or inconsistent wording that may lead to ambiguity and unintended loopholes, promoting clarity and predictability in the contractual relationship.[57] Alternatively, where there is a need to update a particular provision or reference (say, to a particular piece of legislation), these models are able to identify where such amendments could be made in a given set of documents with varying contractual styles. Moreover, by leveraging natural language processing techniques, LLMs can detect potential grammatical errors or inconsistencies in syntax that might escape the human eye.[58] This heightened attention to detail significantly reduces the likelihood of typographical mistakes or confusing formulations, contributing to a more polished and accurate final document. While we have had this heightened level of word processing to varying degrees in the legal market for over a decade, LLMs add a further level of rigour and functionality in being able to make drafting suggestions.
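A minimal sketch of such a content similarity analysis appears below. It assumes the sentence-transformers library and an off-the-shelf embedding model (neither the article nor Lam and colleagues prescribe tooling), and the clauses are invented; a production system would draw the control clause from a firm’s vetted precedent bank.

```python
# Content similarity between a candidate clause and a 'control' clause.
# Assumptions: the sentence-transformers library and a general-purpose
# embedding model; neither is prescribed by the article.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

control = ("The Contractor shall indemnify the Principal against all claims "
           "arising from the Contractor's negligence.")
candidate = ("The Contractor indemnifies the Principal for any claim caused "
             "by the Contractor's negligent acts or omissions.")

embeddings = model.encode([control, candidate], convert_to_tensor=True)
score = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"Similarity to control clause: {score:.2f}")  # near 1.0 = close match
```

A score threshold could then flag clauses that have drifted too far from the client’s preferred risk position for human review.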
The advantage of providing automated review of drafted contracts is that it aligns with the contracting trend around standard form contracts – particularly in the construction industry.[59] The contracting environment within the construction industry is characterised by a high use of standard form contracts that are entered into and administered mostly without the input and oversight of lawyers.[60] A 2014 Research Report by Sharkey and colleagues at the University of Melbourne noted the significant use of standard form contracts in the Australian construction industry, with standard form contracts universally used on projects with contract values less than $100,000. However, despite a noted desire for standard form contracts to be adopted without amendments, these contracts were amended in 84 per cent of the cases studied, with respondents perceiving these amendments as having led to ‘increased need for legal advice’ and additional costs related to project delivery and dispute resolution.[61] Where these amendments create ambiguity, they can lead to inefficiencies and disputes arising from inconsistent language and structure.[62]
Generative AI offers the promise of reducing legal costs in several ways to meet these trends. AI-augmented or even autonomous contract drafting (restricted, perhaps, to boilerplate provisions) can serve to support contract standardisation within the construction industry while still providing flexibility in the form of bespoke amendments tailored to the immediate needs of the specific construction project.[63] By offering pre-populated templates tailored to specific construction project types, with variables and prompts to customise clauses to individual needs, LLMs can ensure consistency and clarity while preserving flexibility for bespoke adjustments to adhere to the risk profile and allocation of liability and obligations generally expected in the industry.[64] This scalability aspect becomes particularly valuable in large-scale construction projects involving multiple sub-contracts and agreements. Accordingly, standardised templates, efficiently generated and customised by LLMs, can streamline the contracting process across the entire project ecosystem, fostering interoperability and minimising contractual discrepancies.[65]
Further, if we are able to achieve the automation of contractual clause review to allow for greater amendments to template contracts and ensure that any amendments still retain fairness and comply with statutory requirements, there is potential to open up the administration of these contracts to automation. This presents an opportunity to integrate the separate technology of smart contracts with the negotiation and drafting of contracts in the contract-dense transactions of construction and technology, offering efficiencies and removing gamesmanship through the automated performance of legal obligations. These smart contracts, or self-executing contracts, automatically trigger contractual obligations upon fulfilment of predetermined conditions, eliminating the need for manual verification and facilitating seamless transactions.[66] For construction projects, smart contracts could manage milestone payments, track material deliveries and automate dispute resolution based on predefined parameters, streamlining project management and fostering trust between stakeholders.[67]
However, a likely use of generative AI will be its use by non-lawyers to decipher, reformulate and document agreements without the input and oversight of lawyers.[68] As the technology develops, these models could be trained to explain complex legal language in plain English and to provide explanations as to how various contractual provisions might shift risk positions within a given scenario. Such an approach might enable non-lawyers to draft contracts that are more tailored to their specific needs while still providing standardised language that complies with regulatory requirements and is legally sound. Lawyers could then be brought in at the tail end of negotiations to sense-check and review these contracts. In other words, generative AI has the potential to impact contracting in a way that could decrease legal costs for creating bespoke contracts and could allow contracting to become more accessible and affordable. Accordingly, the potential use of LLMs to generate documentation to capture an agreement between parties is unlikely to remain within the sole domain of legal practitioners.
AI-assisted contract drafting is already being trialled in legal practice. While still in their formative stages, these initiatives offer tantalising glimpses into the transformative potential of AI. A number of LLM tools currently being developed to serve as aids for contract drafting are presented below.
LexisNexis has launched its generative AI service across a number of jurisdictions. It launched Lexis+ AI in the United States in October 2023 and in Australia in early February 2024. The generative AI model has been trained on ‘Lexis authoritative primary and secondary materials’, and LexisNexis claims that the model allows users to: search iteratively through conversation; adjust and refine their search parameters; generate case summaries; and analyse, summarise and identify ‘key insights’ from a set of uploaded documents. It would also enable automated drafting of contracts and client advice.[69] The proffered AI solution integrates a number of AI models with:
the best model [used] for each individual legal use case. This approach includes working with large language models like Anthropic’s Claude 2, hosted on Amazon Bedrock from Amazon Web Services (AWS), and OpenAI’s GPT-4 and ChatGPT, hosted on Microsoft Azure.[70]
Other AI-based contract-drafting assistants include Motionize, which integrates with Microsoft Word to suggest drafting language based on a bank of precedents while allowing users to ‘see clauses in their original documents’.[71] Robin.AI describes itself as an ‘AI copilot’ that can ‘generate any common agreement’, suggesting that ‘2m 37s is the average time to draft an agreement’.[72] The product description states that the AI model:
stores all of an organisation’s signed contracts inside a powerful, searchable contract repository. With its latest Clause Compare feature, your legal team can harness the ability to compare and analyse clause variations in seconds, which in turn means they can draft and review contracts more quickly and effectively.[73]
Macro, Spellbook, Henchman and DraftWise offer similar AI-augmented contract-drafting solutions to the industry. However, with the prospects of greater automation in contractual drafting through LLMs, there is a pressing need to consider the challenges involved in deploying these tools to service the legal industry. The next two sections examine these challenges.
3.1 Challenges to Deploying LLMs for Contractual Drafting, Review and Interpretation
Despite the clear commercial benefits of integrating AI and LLMs into contract drafting, the deployment of the technology requires careful consideration and mitigation strategies with regard to its limitations. Understanding these considerations then needs to be integrated as part of the training of lawyers, particularly around prompt engineering.
One of the primary challenges posed by LLMs in contract drafting is their potential inaccuracy. The phenomenon of made-up AI-generated pieces of data presented in the guise of actual search results has been artfully termed ‘hallucination’,[74] whereby LLMs generate nonsensical text as a result of dataset noise or bias in the model rather than in response to the true context of the source input. A second term, ‘stochastic parrots’, describes the scenario where training data is repeated or a pattern-recognition exercise is undertaken without ‘any reference to meaning’.[75] While LLMs excel at mimicking linguistic patterns and generating text, they lack the nuanced legal reasoning and ethical judgement of human lawyers.[76] This limitation can lead to the generation of contracts that, while grammatically sound and apparently serving as documents that set out various performance obligations and contractual mechanisms, may not operate as intended to adequately address complex legal issues or anticipate unforeseen circumstances.
The second limitation is the lack of transparency in LLMs. Given the imprecision of text, contracts also require clarity in the genesis of a particular clause – an approach that is commonly used to untangle ambiguity and resolve conflicting interpretations of a particular provision. Without understanding the reasoning behind an LLM-generated clause, a party subsequently administering the contract or the courts, when untangling ambiguity, may find it challenging to determine how to interpret a particular contractual provision. The legal implications of the potential use of prompts to tackle ambiguity are explored in the next section.
Existing LLMs have been shown to have bias,[77] which can then creep into contract drafting. LLMs, like any AI tool, are only as good as the data they are trained on, with the lack of transparency of these AI tools hiding biases that may be built into the design of the LLMs or the training set. Biased training data can lead to the inadvertent perpetuation of harmful stereotypes and discriminatory clauses in generated contracts.[78] Such use of AI systems could lead to an ‘[exacerbation of] existing power asymmetries’.[79]
In construction contracts, where gender and racial disparities persist, such bias could manifest in biased terms regarding subcontractor selection or payment schedules. Other biases can create problems within the drafting process by hampering the capture of the true bargain made by the parties, exposing the contract to claims of unconscionability and complicating the interpretation of contractual obligations. Even where LLMs are not involved in the contract drafting process, inherent bias towards clients has led to ‘heavily slanted contract and also ... formatting, presentation, or language techniques designed to reduce the ability of the non-drafting party to detect the unfavorable provisions’.[80]
Contracts can also be drafted to exploit particular ‘cognitive biases’.[81] Thus, with LLMs there can be any number of ‘cognitive biases, situational pressures and moral hazards’ that can emerge in using the model.[82] Training sets can harbour societal prejudices that then manifest as discriminatory clauses in generated contracts. For example, an LLM trained on legal documents riddled with gender-biased language could inadvertently produce NDAs that favour male executives.
AI developers are conscious of this spectre of bias in their models. For example, LexisNexis, developer of Lexis+ AI, states in its press release that it ‘follows the RELX Responsible AI Principles, considering the real-world impact of its solutions on people and taking action to prevent the creation or reinforcement of unfair bias’.[83] Mitigating this challenge requires a vigilant approach to data curation, actively seeking out diverse and unbiased legal datasets for LLM training. This should be accompanied by continuous monitoring and regular audits to identify and rectify any emerging biases in LLM-generated contracts. Similarly, human oversight at critical junctures of the drafting process is crucial to safeguard against the unintended perpetuation of discriminatory practices. For example, human oversight would be required where bespoke clauses are drafted to shift risk allocation away from a default position but may not be as critical where an LLM is tasked with generating boilerplate clauses, such as a governing law provision.
Other ethical issues also persist. The data collection and labelling industry associated with training AI models is a crucial segment of the AI ecosystem. The industry provides human labour necessary to gather, organise and annotate data to train AI models. However, while the industry is anticipated to grow from US$2.82 billion in 2023 to over US$14 billion by 2030, there is a human cost associated with training generative AI models.[84] For example, to ensure that the model is able to recognise ‘prompts that would generate harmful materials, algorithms must be fed examples of hate speech, violence and sexual abuse’.[85] Content moderators working for a contractor of OpenAI have petitioned the Kenyan government, alleging psychological trauma relating to the review of content for OpenAI’s ChatGPT. There are also legal and ethical questions regarding how legal practitioners should declare their use of these models when interacting with their clients and their counterparts at the negotiating table. Other ethical questions relate to the source of the material that the model has been trained on, with OpenAI currently being pursued in the courts by copyright holders.[86] Given the spectrum of emerging concerns, the integration of AI in contract drafting demands more than just technical expertise. It requires a deep understanding of ethical considerations and a commitment to responsible implementation. By prioritising data bias mitigation, transparency, human oversight and ongoing collaboration, how the legal industry interacts with these emerging models can, in turn, aid us in crafting fair, accurate and trustworthy contracts.
One way to address the lack of transparency has been a broad push to legislate and harden laws around the trustworthy use of AI. The EU Parliament passed the EU AI Act in March 2024. The United States has seen a number of attempts to pass the Algorithmic Accountability Bill, but currently has an Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence (issued October 2023). In Australia, the Department of Industry, Science and Resources has developed eight Artificial Intelligence Ethics Principles and the 2023 Interim Guidance on Government Use of Public Generative AI Tools, while in January 2024 Singapore proposed a Model AI Governance Framework for Generative AI as a means of ‘fostering a Trusted Ecosystem’.[87] Likewise, China has issued the 2023 Interim Measures for the Administration of Generative Artificial Intelligence Services. However, within the area of AI and contract drafting, the UNCITRAL Working Group IV on Electronic Commerce has focused on draft provisions on the use of AI and automation in contracting.[88] These developments will have an increasing impact on the use of generative AI in legal practice.
In addition, a key challenge that will likely arise in the future is where these legally oriented LLMs are increasingly made accessible to non-lawyers. Currently, standard form contracts in various domains, such as construction and small sales of goods and services, are often amended and executed by the parties without the involvement of lawyers. As LLMs are applied to these contracts, there is a potential problem for courts in untangling the issues of ambiguity when LLMs are used to generate these contracts. Given these issues of a lack of transparency and accountability, bias and hallucinations in the LLM, it may be essential to retain documentation of the prompts used to generate the outputs, to provide a clear understanding of the intention of the party or, if the prompts are mutually agreed upon, the parties. Accordingly, as parties begin to use LLMs in contract drafting, they need to be conscious about how their instructions or ‘prompts’ are documented. This article argues that these prompts should be documented and stored in anticipation of disputes around mistake or ambiguity.
While law firms may currently not see the need to share their prompts with their legal counterparts in a negotiation, and only share the generated outputs that have been reviewed by lawyers, there could easily be an alternative approach. Lawyers could agree to the terms of the agreement and work out the scope, cost, duration and milestones, quality requirements, and departures from a standard allocation of risk and liability. To save time and money, they could then agree on a set of prompts that captures the agreed positions and have the LLM generate the suite of contracts. They would then schedule these prompts to the head contract to support an interpretation clause. This article argues that prompts and prompt engineering should take on a more central role in negotiations. The next section explores the legal status of prompts.
4. Legal Status of Prompts
This article explores four scenarios of potential uses of LLMs in contract drafting and how this might impact the legal status of prompts.
• Scenario A: Prompts visible only to one party. In this scenario, lawyers from one or both sides of a negotiation are separately using a generative AI model to review, interpret and draft contractual clauses. However, the prompts used and any analysis provided via the model are not shared with the other side, and in some cases may not be shared with the client. Note that as parties may be using different AI models, this might result in differing outputs that will need to be negotiated between the parties.
• Scenario B: Prompts visible to all contracting parties. Lawyers from both sides of a negotiation agree to use a generative AI model to draft contractual clauses. They agree to the prompts that will be input into the generative AI model to produce the contractual clauses. The parties review and sign the generated contract.
• Scenario C: Prompts scheduled to the contract. Lawyers from both sides of a negotiation agree to use a generative AI model to draft contractual clauses. They schedule the agreed prompts to the generated contract, then review and sign the generated contract.
• Scenario D: Prompts visible to all contracting parties with no input from lawyers. This scenario assumes that the parties do not want to involve lawyers to document an agreement they are negotiating. They agree to the prompts that will be input into the generative AI model to produce the contractual clauses. The parties review and sign the generated contract. Within this scenario, non-lawyers such as limited licence legal technicians (US) and paralegals may be involved.
4.1 Using Prompts to Interpret Ambiguity
The textual nature of the law reflects the ambiguity of language. Complex contracts are often rife with technical jargon and nuanced terms, with contracts used in specialised fields developing their own legal dialects. For example, in construction contracts, seemingly innocuous terminology such as ‘defect’ or ‘latent condition’ can evolve to become million-dollar disputes, with each party clinging to their preferred interpretation. Further, as usage of certain terms shifts, once-clear terms can become ambiguous over time as emerging technologies, changing social norms and unanticipated circumstances create questions over the originally intended meaning. For example, the term ‘communication’ may now encompass a multitude of unforeseen media channels, each with its own legal implications. In the same vein, given that LLMs will have in-built biases (and often be American-centric), generated clauses can potentially have ‘ordinary’ and ‘customary’ meanings that depart from the intention of contracting parties outside the jurisdiction of the United States (in other words, ‘infection’ as described above). A question arises here regarding whether prompts could be adduced to aid in the interpretation or modification of a contractual provision.
In Australia, where the objective theory of contracting has primacy, the written document serves as the actual agreement. Accordingly, in the context of a contract that is wholly in writing, the parol evidence rule may operate to prevent the admission of prompts (along with other types of extrinsic evidence, such as prior or contemporaneous negotiations) that may contradict or modify the generated text of the contractual document.[89] In doing so, the rule preserves the integrity of written agreements. However, Christensen and Duncan assert that, in Australia, ‘the conclusivity of the written words of a contract have given way to the notion that “the existence of writing which appears to represent a written contract between parties is no more than an evidentiary foundation for an agreement that the contract is wholly in writing”’.[90] Justice McPherson stated (in obiter) in Nemeth v Bayswater Road Pty Ltd [1988] 2 Qd R 406:
For my part I cannot see that the [parol evidence] rule can rest on anything more than a presumption as to the intention of the parties ... No doubt the force of the presumption will vary according to a variety of circumstances, including the nature, form and content of the written instrument concerned, making it naturally more difficult to displace when the instrument is one which, in appearance and detail, itself suggests that the parties intended it to be the exclusive record of their contractual rights and obligations. The real matter of difficulty, and the source of much of the controversy, arises in attempting to define the circumstances that may be considered in determining whether or not the presumption is rebutted.[91]
In Scenario B, where the prompts are agreed but do not form part of the executed contract, these prompts could be analogous to oral evidence of pre-contractual negotiations. If treated as such, it may be that these prompts could be adduced as evidence of pre-contractual negotiations to ‘prove the terms of the agreement’. Where these prompts are consistent with the written agreement, there may be an inference that the agreement was partly generated and partly captured in the agreed set of prompts. On the other hand, where the agreed set of prompts contradicts the generated set of contractual provisions, it is likely that the contract would not be treated as partly generated and partly captured in the agreed set of prompts; instead, the contract would likely be considered wholly contained in the generated document.
This ‘irresistible presumption’ that the contract was intended to be wholly contained in the generated contract would be further bolstered in Scenario C, where these prompts are scheduled to, and form part of, the contract. In this scenario, the parol evidence rule will apply and only Scenario C prompts will clearly have a bearing on the interpretation of the contract, subject to the usual order of precedence clause drafted to ensure the intended interpretation of the relevant contractual term. Here, Scenario A prompts cannot be relied upon to aid in interpreting a challenged provision or to add or subtract contractual terms, as these prompts would be subjective evidence.
However, in limited circumstances, Scenario B prompts may potentially have a bearing on the interpretation of generated contractual provisions where a contract is wholly contained in the generated document. Despite commentary around the unsettled nature of when an exception to the parol evidence rule applies, the rule in Codelfa Construction Pty Ltd v State Rail Authority of New South Wales[92] (‘Codelfa’) has repeatedly been approved by the High Court of Australia. The rule denies the introduction of extrinsic evidence where the plain meaning of the term in question is sufficient for the courts to address an interpretation of the parties’ obligations.[93] However, where ambiguity arises on a plain reading of the text, the courts can look to evidence that speaks to the genesis, background and nature of the contract. In these limited circumstances, the courts could potentially look at Scenario B and Scenario D prompts to ‘show the context in which the agreement was made and the surrounding circumstances objectively known to the parties’.[94] Consequently, extrinsic evidence – that is, these prompts – may be introduced to show how such uncertainty can arise from the matrix of facts around the formation of the contract. Where the initial prompt to the LLM is retained, this could be admitted into evidence to allow the courts to use the text of that prompt to objectively interpret the intended meaning behind the ambiguity and give effect to the bargain.[95] While this article has noted the differences between AI-generated contracts and smart contracts, interpretation issues relating to the coding of a smart contract may be instructive. Hoffman and Cohney note that where a human user or programmer exhibits intent in the code and this ‘intent is internally contradictory and is recorded as such’, this could pose a challenge for interpretation. As such, they observe that visible metadata has been advanced as a means of untangling ‘intent’.[96] Because LLMs are non-deterministic, prompts take on a heightened importance in untangling intent when these models are used to generate contractual clauses.
As AI-augmented contractual drafting increases, the parol evidence rule risks becoming anachronistic. To ensure its continued relevance in the digital age, potential reforms and best practices for incorporating prompts into the contracting process are crucial. Such reform of the parol evidence rule could include the explicit consideration of AI-generated contracts. In reference to Scenario D, as non-lawyers increasingly use LLMs to document their agreement without the involvement of lawyers, there may be a weakening of the parol evidence rule. Under the subjective theory of contracting (dominant in the United States)[97] – where the written document merely records the agreement reached – these prompts could be tendered to provide persuasive evidence of what was agreed between the parties.
Wigmore suggests that the parol evidence rule and the gradual shift from subjective to objective theory of contracting in common law countries accompanied the spread of literacy and written contracts.[98] However, the potential spread of LLMs as the means to automatically generate contractual documents will likely see non-lawyers embracing LLMs to generate documents that capture the concluded bargain.[99] The question is whether, as the creation of contractual documents is increasingly undertaken by non-lawyers, the intention of the parties as expressed in the prompts could return us to a more subjective approach to contracting – one that may see prompts automatically exempted from the operation of the parol evidence rule. This is because the prompts used within the LLMs to drive automated contractual drafting and interpretation of negotiated terms can provide valuable insights into the parties’ intentions and the contract’s purpose. This seems a little far-fetched in Australia, but the increased prevalence of non-legal use of LLMs to draft contracts may see an overall weakening and reformulation of the parol evidence rule, perhaps to allow for interpretations that accord with commercial reality or business common sense.[100]
Another way in which the courts may admit prompts as extrinsic evidence is where a party argues the existence of a collateral contract. In Scenarios B and D, a promisee may argue that a collateral contract – that is, a separate contract from the main contract wholly contained in the generated document (‘main contract’) – arises from representations contained in the prompt that were supported by the consideration of the promisee’s entry into the main contract. Such a collateral contract will only arise where such representations contained in the prompt are promissory in character, rather than mere representations. In these circumstances, Scenario B and D prompts may be adduced to allow the courts to determine the promissory character of the words in the prompt and to support a reading of the conduct of the negotiating parties. Note that the terms of any alleged collateral contract must not be inconsistent with the main contract contained in the generated document. Christensen and Duncan observe that while the argument of a collateral contract is ‘contrary to judicially expressed views in England, it has found some judicial support in Australia’.[101] A number of questions then arise in relation to the legal status of Scenario B and D prompts regarding whether a prompt (or series of prompts) could contain a representation and, where it does, whether it could take on a promissory character that would support the finding of a collateral contract.
Scenario B and D prompts may also give rise to an agreement to agree. Masters v Cameron[102] identified three categories of negotiations in which parties have reached agreement on the terms of the contract but have also agreed that a formal contract is to be forthcoming on the matter of their negotiation.[103] Under the first category, the parties intend to be ‘immediately bound’ and the bargain is concluded when the prompts are agreed (‘settled’ even), regardless of whether or not a formal contract is generated to restate the terms of the bargain ‘in a form that will be fuller or more precise but not different in effect’. Under the second category, the parties are also bound by the terms of their bargain set out in the prompts to be input in the LLM, but any obligation to perform under the contract would not arise until a formal contract is generated by the LLM and executed by the parties. In the third category, it may be that where the prompts are agreed, the intention of the parties is not to make a concluded bargain until (or unless) the LLM generates the contract and the parties execute the contract. A fourth category was introduced in Baulkham Hills Private Hospital Pty Ltd v GR Securities Pty Ltd,[104] where McLelland J observed that a separate category of contracting behaviour arises where ‘parties were content to be bound immediately and exclusively by the terms which they had agreed upon while expecting to make a further contract in substitution for the first contract, containing by consent, additional terms’.[105]
Yet another way in which prompts can potentially be used by the courts is to rectify a common mistake. The prompts from Scenarios A to D would be available in this instance. While a lawyer has ultimate responsibility for settling the terms of the contract, there are instances where what is drafted is a mistake. As prompts are increasingly used, a lawyer settling the generated term in Scenarios A to C may not interpret the generated term correctly. Further, as legally oriented LLMs are increasingly accessible to those without legal training, Scenario D underscores the real possibility that parties might enter agreed prompts into the LLM and accept the generated contractual terms as being sufficient to capture the agreed exchange of mutual obligations. While lawyers may be tempted to argue that LLMs are merely legal assistants and that contractual provisions would ultimately be overseen and settled by lawyers (who then arguably retain liability for any shortcomings in the drafting), this argument neglects the popular use of standard form contracts and how these contracts are amended and entered into, often without input from a lawyer. As access to LLMs focused on legal practice and contract drafting increases to include non-lawyers, the eventuality of AI-generated contractual provisions that have not been subjected to review by lawyers will become an increasingly common phenomenon. In these scenarios, where parties are under a common mistake as to how a particular contractual provision is to operate, prompts (including prompts from Scenario A, which would be subjective evidence) could be adduced to enable the courts to rectify the common mistake.
5. Practical Considerations in the Use of Generative AI and Contract Drafting
This section of the article presents several strategies that lawyers and the courts could examine and adopt to support the use of prompts in generative AI and the drafting, reviewing and interpretation of contracts.
5.1 Transparency and Documentation of Prompts
Lawyers should document the prompts used during both contract drafting and interpretation, along with the rationale behind their selection. This creates a transparent audit trail and facilitates judicial review in case of disputes. While prompts may be characterised as part of the interpretive process, akin to the lawyer’s internal musings while analysing a contract, the fact that this process is not fully contained within the mind of a lawyer should perhaps cause us to lean towards documenting and retaining prompts. In doing so, these prompts would enable us to consider the LLM’s analysis, highlight the nuances of the lawyer’s reasoning and foster transparency practices that can potentially mitigate concerns about the ‘black box’ nature and hidden bias of LLMs.[106]
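To make the recommended audit trail concrete, the following is a minimal sketch of a prompt log record; the fields, values and JSON-lines format are illustrative assumptions rather than any established standard for legal practice.

```python
# Minimal prompt audit-trail record, written as JSON lines.
# The schema is an illustrative assumption, not an established standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PromptRecord:
    matter_id: str    # the file or transaction the prompt relates to
    author: str       # who issued the prompt
    model: str        # which LLM produced the output
    prompt: str       # the instruction given to the model
    rationale: str    # why this prompt was chosen (the documented reasoning)
    output_hash: str  # fingerprint of the generated clause, for later audit
    timestamp: str

record = PromptRecord(
    matter_id="2024-CON-0012",
    author="j.smith",
    model="example-llm-v1",  # hypothetical model identifier
    prompt="Draft a governing law clause selecting Queensland law.",
    rationale="Client standard position; boilerplate, low risk.",
    output_hash="sha256:...",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

with open("prompt_log.jsonl", "a") as f:
    f.write(json.dumps(asdict(record)) + "\n")  # append-only audit trail
```

An append-only log of this kind is what would later allow a court, or the parties, to reconstruct the genesis of a disputed clause.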
At present, addressing this black box challenge at the contract drafting level necessitates clear boundaries for LLM usage. As mentioned earlier, tasks such as generating boilerplate clauses or verifying consistency may well be suited to AI; however, complex legal analysis and strategic negotiation may need to remain firmly within the purview of human oversight. Additionally, ongoing legal education and training for lawyers on AI-assisted drafting can facilitate effective collaboration between human expertise and algorithmic efficiency. Such training could include mitigation strategies oriented towards addressing the lack of transparency, including counterfactual reasoning and attention visualisation.
Counterfactual reasoning[107] uses scenarios where different inputs, modified in specific ways, are entered as prompts to the LLM. As the LLM generates outputs, the changes between outputs allow us to consider how significant a change made to the prompt is. In other words, by posing ‘what if’ scenarios and analysing how the output changes, lawyers are able to glean insights into the factors influencing the decision-making outputs of the LLM. However, this technique only works where meaningful counterfactuals and documentation of these scenarios exist to trace the lineage of changed outputs. As the technology matures, the body of counterfactuals will therefore increase (akin to creating a separate body of precedents) to aid understanding of how a particular LLM is performing against contract drafting tasks.
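The sketch below illustrates the mechanics of this counterfactual workflow under stated assumptions: the llm_generate function is a hypothetical stand-in for whichever model a firm uses (here it simply echoes the prompt so the sketch runs without an API key), and the prompts and perturbations are invented. The point is the discipline of logging each perturbation alongside the change it produces.

```python
# Counterfactual prompt testing: perturb a prompt, regenerate, and log
# how far the output moves. `llm_generate` is a hypothetical stand-in
# for a real model call.
import difflib

def llm_generate(prompt: str) -> str:
    return prompt  # replace with a real LLM call in practice

base_prompt = "Draft a liquidated damages clause capped at 10% of the contract sum."
counterfactuals = {
    "higher_cap": base_prompt.replace("10%", "15%"),
    "no_cap": "Draft a liquidated damages clause with no cap on liability.",
}

baseline = llm_generate(base_prompt)
for label, variant in counterfactuals.items():
    output = llm_generate(variant)
    delta = [t for t in difflib.ndiff(baseline.split(), output.split())
             if t.startswith(("+", "-"))]
    # Documenting each counterfactual and its effect builds the audit trail
    # (the 'body of counterfactuals') described above.
    print(f"{label}: {len(delta)} word-level differences from baseline")
```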
Attention visualisation[108] works to model what input texts an LLM focuses on in generating its output. By focusing on these words or phrases, lawyers can gain visibility over the key terms on which the LLM will place emphasis. Lee and colleagues have produced LLM Attributor, ‘a Python library that provides interactive visualizations for training data attribution of an LLM’s text generation’. In doing so, they allow users to check the behaviour of the model as they fine-tune pre-trained models with a particular set of training data to avoid, as they note, lawyers being ‘penalized by federal judges for citing non-existent LLM-fabricated cases in court filings’.[109] There are a number of ways of conducting attention visualisation: attention matrices that set out the attention weights against grids, head-level attention, attention flow and interactive visualisations. In the model designed by Lee and colleagues, a comparison function between a model-generated text and problematic-generated text allows the LLM developer to consider the attributions for model- and problematic-generated texts and the attribution scores of these data points. Following this, a histogram is produced that displays the distribution of attribution scores across the training data. In doing so, the LLM developer is able to eliminate data points that are causing any distortions in the generated output and refine the training data to allow the model to consistently produce an accurate response.[110] Lawyers should be trained to understand how the attribution scores of data points can interfere and produce differing results in answer to a prompt.
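By way of illustration, the following sketch renders one such attention matrix as a grid of weights over tokens. It assumes the Hugging Face transformers library, a small open BERT model and matplotlib; this is not the LLM Attributor tooling produced by Lee and colleagues, which offers a purpose-built interface for training data attribution.

```python
# Render an attention matrix (attention weights set out against a grid).
# Assumptions: Hugging Face `transformers`, a small open BERT checkpoint
# and matplotlib; not the LLM Attributor tooling discussed above.
import torch
import matplotlib.pyplot as plt
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

clause = "The contractor shall remedy any latent defect at its own cost."
inputs = tokenizer(clause, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Last layer's attention, averaged over heads: shape (seq_len, seq_len).
attention = outputs.attentions[-1][0].mean(dim=0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

plt.imshow(attention.numpy(), cmap="viridis")
plt.xticks(range(len(tokens)), tokens, rotation=90)
plt.yticks(range(len(tokens)), tokens)
plt.title("Head-averaged attention weights (last layer)")
plt.tight_layout()
plt.show()
```

Bright cells in the grid indicate token pairs the model attends to strongly, giving a lawyer a rough view of which words in a clause the model treats as related.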
However, while techniques such as counterfactual reasoning and attention visualisation can shed light on the LLM’s internal logic, they are most fruitful during the prompt engineering phase of a drafting task. They enable lawyers to understand how their prompts operate as instructions to the LLM, opening up options to fine-tune prompts towards the desired drafting outcome. They do not, however, reach the underlying training data on which the LLMs are built, including the issues of bias described above, which will need to be remedied on the side of the technology developer.
5.2 Objectivity and Neutrality
Prompts should be crafted with objectivity and neutrality in mind, avoiding language that could favour one party over the other or skew the LLM’s analysis towards a predetermined outcome. Maintaining fairness and impartiality is paramount. Critics observe that while prompts are not explicitly part of the written agreement, they can significantly influence the LLM’s output, potentially tilting the interpretive scales towards a biased or predetermined outcome. Consider a scenario where a prompt is used to subtly nudge the LLM towards a specific interpretation favourable to one party, casting a shadow over the neutrality of the analytical process.
In other situations, the contra proferentem rule[111] may serve to interpret ambiguities against the party who drafted the contract. In this manner, the canon operates as a ‘tie breaker’, with ambiguity construed against the drafter.[112] A question arises as to whether a party who has relied on an LLM to draft a contract can argue against the application of the contra proferentem rule – for example, where they have retained documentation of the original prompt and can show that the generated text does not reflect the original intent of the prompt. Ebers notes that one of the difficulties in applying the contra proferentem rule arises when the terms of a contract are negotiated by software. In such cases, ‘no concrete intention ... can be evidenced’, which would require the court to consider the ‘general intent of the operator’ or venture into an objective interpretation of ‘the actions of the software agent’.[113]
5.3 Tiered Systems for the Admissibility of Prompts
This article suggests that mandating the disclosure and annotation of prompts used in contract creation and interpretation will foster trust and allow independent scrutiny of potential bias. This ensures that lawyers wield these digital tools responsibly and ethically. Such transparency is crucial in scenarios where LLMs are used to untangle ambiguity in contracts.
However, a graded approach to admitting prompts into evidence by type remains possible. Recognising the diverse functions of prompts, a tiered system for their admissibility could be considered. ‘Factual prompts’, which clarify the facts of the matter, would clearly fall within an exception to the parol evidence rule. These prompts might serve to identify the subject matter, parties and real consideration within the contract. A second tier would comprise ‘interpretive prompts’ input into the LLM to clarify ambiguous terms within the pre-existing contract. It is proposed that an exemption be considered for LLM-generated interpretations, allowing LLMs to assist in uncovering the meaning of contractual terms without violating the spirit of the rule. Such an approach would require careful judicial consideration to ensure that the use of LLMs complements rather than undermines the principles of contractual interpretation. These two categories might be treated differently from ‘instructive prompts’ directing the LLM to draft specific clauses, where concerns about user bias are more pronounced.
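One way to make the proposed tiers concrete is a simple classification schema attached to each logged prompt, sketched below. The three categories mirror the taxonomy above, but the presumptive treatment attached to each tier reflects this article’s proposal rather than settled law, and the code itself is illustrative only.

```python
from enum import Enum

class PromptTier(Enum):
    FACTUAL = "factual"            # identifies subject matter, parties, consideration
    INTERPRETIVE = "interpretive"  # clarifies ambiguous terms in an existing contract
    INSTRUCTIVE = "instructive"    # directs the LLM to draft specific clauses

# Presumptive evidentiary treatment under the proposed tiered system.
PRESUMPTIVE_TREATMENT = {
    PromptTier.FACTUAL:
        "admissible, consistent with existing exceptions to the parol evidence rule",
    PromptTier.INTERPRETIVE:
        "admissible under a proposed exemption, subject to judicial oversight",
    PromptTier.INSTRUCTIVE:
        "admissible only after stricter scrutiny for user bias",
}

for tier in PromptTier:
    print(f"{tier.value}: {PRESUMPTIVE_TREATMENT[tier]}")
```

Tagging each prompt with a tier at the moment it is logged, rather than retrospectively, would spare courts from having to reconstruct the function of a prompt long after the drafting event.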
5.4 Using the LLM to Validate the Interpretation of a Contractual Clause
As the diversity of LLMs increases through bespoke training and newer models, there is burgeoning potential for LLMs to offer a different kind of machine-learning semantic analysis of documents within the legal context. For example, an LLM trained on a vast corpus of legal documents, judicial rulings and academic treatises (such as LexisNexis’ Lexis+ AI) could draw on this digital legal corpus to identify semantic relationships, patterns and subtle nuances of legal language, and even infer meaning from context. As the technology matures, LLMs could be directed or prompted to identify and analyse ambiguous terms within a contract, delving into their historical usage, legal precedent and contextual relevance. Arbel and Hoffman suggest that such use of LLMs in interpretation can provide ‘predictability and restraint, while also offering better linguistic accuracy’ and reduced litigation costs, by increasing consistency of outputs and reducing exposure to gamesmanship and bias-inducing testimonies.[114] As the use of such AI in contract drafting becomes commonplace, the shifting standard of legal practice may require lawyers to run their contracts, correspondence and other negotiation documents through an LLM to identify ambiguous terms and assess whether their clients would be vulnerable to an unfavourable interpretation.
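As a hedged illustration of this kind of review, the sketch below asks a general-purpose model to flag ambiguous terms in a single clause. It assumes the OpenAI Python client library and a configured API key; the model name and the instruction wording are assumptions made for the example, and a production workflow would more likely use a legal-domain system of the kind discussed above.

```python
from openai import OpenAI  # assumes the openai package and an API key are configured

client = OpenAI()

clause = ("The supplier shall deliver the goods within a reasonable time "
          "after receipt of a purchase order, subject to availability.")

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice only
    messages=[
        {"role": "system",
         "content": ("You are assisting a lawyer reviewing a contract. Identify "
                     "terms in the clause that a court might find ambiguous and, "
                     "for each, state the competing interpretations a counterparty "
                     "could advance.")},
        {"role": "user", "content": clause},
    ],
)
print(response.choices[0].message.content)
```

Terms such as ‘reasonable time’ and ‘subject to availability’ are the kind of phrases such a review should surface, prompting the lawyer to decide whether the vagueness is deliberate or a vulnerability.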
5.5 Human Oversight and Validation
Human oversight remains crucial. Lawyers should thoroughly review and validate both LLM-generated contracts and interpretations produced using prompts, ensuring accuracy, alignment with legal principles and client objectives, and that AI does not operate beyond the bounds of fairness and clarity. This article argues that even where AI can identify errors in human drafting, the human-in-the-loop principle should take primacy, augmenting legal expertise and facilitating more informed and nuanced interpretations.
Simultaneously, lawyers will need to hone their craft in engineering prompts with meticulous objectivity, eschewing language that favours one party or tilts the LLM’s analysis towards a predetermined outcome without the explicit consent of the client or their counterparts. As law firms train LLMs, there is a critical need to ensure that they recognise internal biases towards their existing clientele. Law firms must also consider how to embrace neutrality in prompt-crafting to ensure fairness and mitigate the risk of AI-driven manipulation.
6. Conclusion
The integration of AI into contract interpretation paints a fascinating picture of possibilities, but also reveals a gamut of challenges. On the one hand, LLMs allow for deeper, more intricate semantic analysis to address ambiguity in contractual terms at the drafting, review and interpretation stages, uncovering nuances and facilitating more efficient and accurate interpretations. On the other hand, the spectre of algorithmic bias, lack of transparency and potential manipulation of the interpretive process casts a long shadow over these technological benefits. Given these risks, there is a pressing need for caution in balancing the allure of innovation with the legal principles that underpin every contract. As the use of prompts and generative AI in contract drafting increases, so too does the need to consider the legal implications.
This article suggests that the parol evidence rule will need recalibration to meet the emerging needs of AI-generated contracts, shifting its current focus on ‘written agreements’ to encompass prompts as the digital ink behind the ‘written/generated’ contract. Such an update would ensure that prompts neither undermine the primacy of the written word nor circumvent the rule’s purpose of preventing disputes. Further, we must acknowledge the diverse functions of prompts and tailor their admissibility accordingly. A tiered system would permit courts to more readily admit ‘factual prompts’ and ‘interpretive prompts’ that clarify ambiguities within existing contracts, while subjecting ‘instructive prompts’ that guide automated draftsmanship to stricter scrutiny to safeguard against bias and manipulation.
Admitting prompts as evidence may open the door to a flood of litigation surrounding their content and potential manipulation, creating a legal battlefield on top of the existing contractual one. As such, despite the allure of potential efficiencies and cost savings of using LLMs in innovative ways to draft contracts, embracing AI in contract drafting requires a balanced and nuanced approach. By carefully considering potential reforms to the parol evidence rule and adopting best practices for incorporating prompts, we can harness the power of AI to create clearer, more enforceable agreements while safeguarding against the perils of bias and manipulation. This, ultimately, is the course we must chart to ensure that AI complements, rather than supplants, the human hand in drafting contracts.
Acknowledgements
The author would like to thank Sharon Christensen and Laina Chan for their feedback on early drafts, Nicholas Ng (Allens) and Tae Royle (Ashurst) for discussions on current usage of LLMs within law firms, and the two anonymous peer reviewers for their very helpful comments and suggested amendments. All errors are mine.
Legal materials
Cases
Baulkham Hills Private Hospital Pty Ltd v GR Securities Pty Ltd (1986) 40 NSWLR 622.
Codelfa Construction Pty Ltd v State Rail Authority of New South Wales (1982) 149 CLR 337.
Deepak Fertilisers and Petrochemicals Corp v ICI Chemicals and Polymers Ltd [1999] 1 Lloyd’s Rep 387.
Geroff v CAPD Enterprises Pty Ltd [2003] QCA 187.
Gordon v MacGregor [1909] HCA 26; (1909) 8 CLR 316.
Innrepreneur Pub Co (GL) v East Crown Ltd [2000] 2 Lloyd’s Rep 611.
Life Insurance Co of Australia Ltd v Phillips [1925] HCA 18; (1925) 36 CLR 60.
Masters v Cameron [1954] HCA 72; (1954) 91 CLR 353.
McMahon v National Foods Milk Ltd [2009] VSCA 153; (2009) 25 VR 251.
Murray Goulburn Co-operative Co Ltd v Cobram Laundry Service Pty Ltd [2001] VSCA 57.
Nemeth v Bayswater Road Pty Ltd [1988] 2 Qd R 406.
Quoine Pte Ltd v B2C2 Ltd [2020] SGCA (I) 3.
Royal Botanic Gardens and Domain Trust v South Sydney City Council (2002) 240 CLR 45.
Sinclair Horder O’Malley & Co v National Insurance Co of New Zealand Ltd [1991] NZHC 2971; [1992] 2 NZLR 706.
Sinclair, Scott & Co v Naughton [1929] HCA 34; (1929) 43 CLR 310.
State Rail Authority (NSW) v Heath Outdoor Pty Ltd (1986) 7 NSWLR 170.
Thomas National Transport (Melbourne) Pty Ltd v May & Baker (Australia) Pty Ltd [1966] HCA 46; (1966) 115 CLR 353.
Bibliography

Abioye, Sofiat, Lukumon Oyedele, Lukman Akanbi, Anuoluwapo Ajayi, Juan Manuel Davila Delgado, Muhammad Bilal, Olugbenga Akinade and Ashraf Ahmed. “Artificial Intelligence in the Construction Industry: A Review of Present Status, Opportunities and Future Challenges.” Journal of Building Engineering 44 (2021): 1–13. https://doi.org/10.1016/j.jobe.2021.103299.
Aggarwal, Vinay. “ClauseRec: A Clause Recommendation Framework for AI-aided Contract Authoring.” Paper presented at the Conference on Empirical Methods in Natural Language Processing, Punta Cana, Dominican Republic, 7–11 November 2021. https://aclanthology.org/2021.emnlp-main.691.
Allens, “Lawyer or Language Model? Testing AI’s Competence in Answering Australian Legal Questions.” May 28, 2024. https://www.allens.com.au/aibenchmark.
Arbel, Yonathan and Shmuel I Becher. “Contracts in the Age of Smart Readers.” George Washington Law Review 90, no 1 (2022): 83–146.
Arbel, Yonathan and David A Hoffman. “Generative Interpretation.” New York University Law Review 99, no 2 (2024): 451–514. https://scholarship.law.upenn.edu/faculty_articles/417.
Bender, Emily, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Paper presented at the ACM Conference on Fairness, Accountability and Transparency, March 1, 2021.
Benson, Robert W. “The End of Legalese.” New York University Review of Law & Social Change 13, no 3 (1984): 519–74.
Betts, Kathryn D, and Kyle R Jaep. “The Dawn of Fully Automated Contract Drafting: Machine Learning Breathes New Life into a Decades-Old Promise.” Duke Law and Technology Review 15, no 1 (2016): 216–33.
Bommarito, Michael James and Daniel Martin Katz. “GPT Takes the Bar Exam.” Working paper, December 29, 2022. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4314839.
Chen, Dekun. “Word-Level Interpretation of ChatGPT Detector Based on Classification Contribution.” Highlights in Science, Engineering and Technology 70 (2023): 272–78. https://doi.org/10.54097/hset.v70i.12204.
Choi, Jonathan H. “ChatGPT Goes to Law School.” Journal of Legal Education 71, no 3 (2022): 387–400. https://jle.aals.org/home/vol71/iss3/2.
Christensen, SA, and WD Duncan. The Construction and Performance of Commercial Contracts. Sydney: Federation Press, 2023.
Cohney, Shaanan and David A Hoffman. “Transactional Scripts in Contract Stacks.” Minnesota Law Review 105, no 1 (2020): 319–88.
Cole, Tony. “The Parol Evidence Rule: A Comparative Analysis and Proposal.” UNSW Law Journal 26, no 3 (2003): 680–702.
Cowan, Miles and Zachary Smith. “Law as Code: Reality, Possibility and Potential.” Presentation at Columbia Law School, April 2014.
Dunlap, Bridgette. “Anyone Can ‘Think Like a Lawyer’: How the Lawyers’ Monopoly on Legal Understanding Undermines Democracy and the Rule of Law in the United States.” Fordham Law Review 82, no 6 (2014): 2817–42.
Ebers, Martin. “Artificial Intelligence, Contracting and Contract Law: An Introduction.” In Contracting and Contract Law in the Age of Artificial Intelligence, edited by Martin Ebers, Cristina Poncibo and Mimi Zou, 19–40. Oxford: Hart, 2022.
Feder, Amir, Nadav Oved, Uri Shalit and Roi Reichart. “CausaLM: Causal Model Explanation Through Counterfactual Language Models.” Computational Linguistics 47, no 2 (2021): 333–86. https://aclanthology.org/2021.cl-2.13.
Fu, Yongcheng, Chenglong Xu, Lihan Zhang and Yongqiang Chen. “Control, Coordination, and Adaptation Functions in Construction Contracts: A Machine-Coding Model.” Automation in Construction 152 (2023): 1–12. https://doi.org/10.1016/j.autcon.2023.104890.
Galindo, José A. “Large Language Models to Generate Meaningful Future Model Instances.” Paper presented at the ACM International Systems and Software Product Line Conference, Tokyo, August 28, 2023. https://dl.acm.org/doi/10.1145/3579027.3608973.
Garg, Sahaj, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H Chi, and Alex Beutel. “Counterfactual Fairness in Text Classification Through Robustness.” Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (2019): 219–26.
Giancaspro, Mark. “Is a ‘Smart Contract’ Really a Smart Idea? Insights from a Legal Perspective.” Computer Law & Security Review 33, no 6 (2017): 825–35. https://doi.org/10.1016/j.clsr.2017.05.007.
Gilson, Ronald J. “Value Creation by Business Lawyers: Legal Skills and Asset Pricing.” Yale Law Journal 94, no 2 (1984): 239–313. https://scholarship.law.upenn.edu/faculty_scholarship/2561.
GlobalData. Data Collection and Labelling Market Size, Share, Trends and Analysis by Region, Type, Vertical, and Segment Forecast to 2030. May 3, 2023. https://www.globaldata.com/store/report/data-collection-and-labelling-market-analysis.
Guha, Neel et al. “LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models.” Working paper, August 20, 2023. https://arxiv.org/abs/2308.11462.
Izacard, Gautier et al. “Atlas: Few-Shot Learning with Retrieval Augmented Language Models.” Working paper, November 16, 2022. https://arxiv.org/abs/2208.03299.
Jennejohn, Matthew, Julian Nyarko and Eric Talley. “Contractual Evolution.” The University of Chicago Law Review 89, no 5 (2022): 901–78.
Ji, Ziwei, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto and Pascale Fung. “Survey of Hallucination in Natural Language Generation.” ACM Computing Surveys 55, no 12 (2023): 1–38. https://doi.org/10.1145/3571730.
Jozefowicz, Rafal, Oriol Vinyals, Mike Schuster, Noam Shazeer and Yonghui Wu. “Exploring the Limits of Language Modeling.” Working paper, February 11, 2016. https://arxiv.org/abs/1602.02410.
Jurafsky, D and JH Martin. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Englewood Cliffs, NJ: Prentice Hall, 2006.
Katrak, Malcolm. “The Role of Language Prediction Models in Contractual Interpretation: The Challenges and Future Prospects of GPT-3.” In Legal Analytics: The Future of Analytics in Law, edited by Namita Singh Malik et al, 47–61. Boca Raton, FL: CRC Press, 2023.
Klein, Jason M. “No Fool for a Client: The Finance and Incentives Behind Stock-Based Compensation for Corporate Attorneys.” Columbia Business Law Review 2 (1999): 329–64.
Kolt, Noam. “Predicting Consumer Contracts.” Berkeley Technology Law Journal 37, no 1 (2022): 71–138.
Lam, Kwok-Yan, Victor C W Cheng and Zee Kin Yeong. “Applying Large Language Models for Enhancing Contract Drafting.” Paper presented at the Workshop on Artificial Intelligence and Intelligent Assistance for Legal Professionals in the Digital Workplace, Braga, Portugal, June 19, 2023. https://ceur-ws.org/Vol-3423/paper7.pdf.
Lee, Seongmin, Zijie J Wang, Aishwarya Chakravarthy, Alec Helbling, Sheng Yun Peng, Mansi Phute, Duen Horng Chau and Minsuk Kahng. “LLM Attributor: Interactive Visual Attribution for LLM Generation.” https://arxiv.org/pdf/2404.01361.
LexisNexis, Halsbury’s Laws of Australia (online, 31 January 2024) 110 Contract, “1 Definition and Nature of Contract.”
LexisNexis, Halsbury’s Laws of Australia (online, 27 February 2024) 110 Contract, “5 Construction Rules.”
LexisNexis, “LexisNexis Launches Lexis+ AI, a Generative AI Solution with Linked Hallucination-Free Legal Citations.” 25 October 2023. https://www.lexisnexis.com/community/pressroom/b/news/posts/lexisnexis-launches-lexis-ai-a-generative-ai-solution-with-hallucination-free-linked-legal-citations.
Liao, Yi, Xin Jiang and Qun Liu. “Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order.” Paper presented at the Annual Meeting of the Association for Computational Linguistics, July 2020. https://aclanthology.org/2020.acl-main.24.
Mars, Mourad. “From Word Embeddings to Pre-Trained Language Models: A State-of-the-Art Walkthrough.” Applied Sciences 12, no 17 (2022): 1–19. https://doi.org/10.3390/app12178805.
Martinelli, Silvia and Carlo Rossi Chauvenet. “From Document to Data: Revolution of Contract Through Legal Technologies.” In Contracting and Contract Law in the Age of Artificial Intelligence, edited by Martin Ebers, Cristina Poncibo and Mimi Zou, 81–98. Oxford: Hart, 2022.
McGraw, Gary, Richie Bonett, Harold Figueroa and Katie McMahon. “23 Security Risks in Black-Box Large Language Model Foundation Models.” Computer 57, no 4 (2024): 160–64. https://doi.ieeecomputersociety.org/10.1109/MC.2024.3363250.
McNamara, Alan. “Automating the Chaos: Intelligent Construction Contract.” In Smart Cities and Construction Technologies, edited by Sara Shirowzhan and Kefeng Zhang, 119–37. IntechOpen, 2020.
Mitchell, Catherine. “Entire Agreement Clauses: Contracting Out of Contextualism.” Journal of Contract Law 22, no 3 (2006): 222–45.
Motionize. https://motionize.io.
Navigli, Roberto, and Simone Conia. “Biases in Large Language Models: Origins, Inventory, and Discussion.” Journal of Data and Information Quality 15, no 2 (2023): 1–21. https://doi.org/10.1145/3597307.
Oltz, Tammy Pettinato. “ChatGPT, Professor of Law.” Working paper, February 6, 2023. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4347630.
O’Shields, Reggie. “Smart Contracts: Legal Agreements for the Blockchain.” North Carolina Banking Institute 21 (2017): 177–92.
Patil, Rajvardhan and Venkat Gudivada. “A Review of Current Trends, Techniques, and Challenges in Large Language Models (LLMs).” Applied Sciences 14, no 5 (2024): 2074. https://doi.org/10.3390/app14052074.
Perlman, Andrew. “The Implications of ChatGPT for Legal Services and Society.” Research Paper No. 22-14, Suffolk University Law School, 5 December 2022. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4294197.
Portinale, Luigi. “Mapping Artificial Intelligence: Perspectives from Computer Science.” In Contracting and Contract Law in the Age of Artificial Intelligence, edited by Martin Ebers, Cristina Poncibo and Mimi Zou, 3–18. Oxford: Hart, 2022.
Robin AI. https://www.robinai.com/product/draft.
Robin AI. “Product Update: Identify and Analyse Clause Variations in Seconds with Robin AI’s Clause Compare.” https://www.robinai.com/post/analyse-clause-variations-with-clause-compare.
Rowe, Niamh. “‘It’s Destroyed Me Completely’: Kenyan Moderators Decry Toll of Training of AI Models.” The Guardian, August 3, 2023. https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai.
Savelka, Jaromir and Kevin D Ashley. “The Unreasonable Effectiveness of Large Language Models in Zero-Shot Semantic Annotation of Legal Texts.” Frontiers in Artificial Intelligence 6 (2023): 1–14. https://doi.org/10.3389/frai.2023.1279794.
Semmler, Sean and Zeeve Rose. “Artificial Intelligence: Application Today and Implications Tomorrow.” Duke Law & Technology Review 16 (2017–18): 85–99.
Shaghaghian, Shohreh, Luna (Yue) Feng, Borna Jafarpour and Nicolai Pogrebnyakov. “Customizing Contextualized Language Models for Legal Document Reviews.” Working paper, February 10, 2021. https://arxiv.org/abs/2102.05757.
Sharkey, John, Matthew Bell, Wayne Jocic and Rami Marginean. Standard Forms of Contract in the Australian Construction Industry. Melbourne: University of Melbourne, 2014.
Shou, Chaofan, Jing Liu, Doudou Lu and Koushik Sen. “LLM4FUZZ: Guided Fuzzing of Smart Contracts with Large Language Models.” Working paper, 20 January 2024. https://arxiv.org/abs/2401.11108.
Sia, Suzanna, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer and Lambert Mathias. “Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI.” In Brian Williams, Yiling Chen and Jennifer Neville (eds.), Thirty-Seventh AAAI Conference on Artificial Intelligence, AAAI 2023, Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence, IAAI 2023, Thirteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2023, Washington, DC, 2023: 9837–45.
Song, J, H He, Z Lu, C Su, G Xu and W Wang. “An Efficient Vulnerability Detection Model for Ethereum Smart Contracts.” In Network and System Security: NSS 2019, edited by J Liu and X Huang. New York: Springer, 2019. https://doi.org/10.1007/978-3-030-36938-5_26.
Stathis, Georgios, Athanasios Trantas, Giulia Biagioni, Jaap van den Herik, Bart Custers, Laura Daniele and Theofilos Katsigiannis. “Towards a Foundation for Intelligent Contracts.” Paper presented at International Conference on Agents and Artificial Intelligence, Lisbon, 22 February 2023. https://www.scitepress.org/Papers/2023/116282/116282.pdf.
Tay, Yi, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao and Che Zheng. “Synthesizer: Rethinking Self-Attention for Transformer Models.” Paper presented at the International Conference on Machine Learning, July 2021. https://arxiv.org/abs/2005.00743.
Tholander, Jakob and Martin Jonsson. “Design Ideation with AI-Sketching, Thinking and Talking with Generative Machine Learning Models.” Paper presented at the ACM Designing Interactive Systems Conference, Copenhagen, July 10, 2023. https://dl.acm.org/doi/abs/10.1145/3563657.3596014.
Thomson Reuters, 2023 Australia: State of the Legal Market Report, 7 September 2023. https://insight.thomsonreuters.com.au/legal/resources/resource/australian-legal-market-makes-a-remarkable-comeback-report.
Tsimpoukelli, Maria et al. “Multimodal Few-Shot Learning with Frozen Language Models.” Paper presented at the Annual Conference on Neural Information Processing Systems, 6–14 December 2021. https://proceedings.neurips.cc/paper/2021/hash/01b7575c38dac42f3cfb7d500438b875-Abstract.html.
Tu, Sean, Amy Cyphert and Sam Perl. “Limits of Using Artificial Intelligence and GPT-3 in Patent Prosecution.” Texas Tech Law Review 54, no 2 (2022): 255–78.
UN General Assembly, United Nations Commission on International Trade Law Working Group IV (Electronic Commerce), Advancing Work on Automated Contracting, A/CN.9/WG.IV/WP.179, 10–14 April 2023.
UN General Assembly, United Nations Commission on International Trade Law Working Group IV (Electronic Commerce), Developing New Provisions to Address Legal Issues Related to Automated Contracting, A/CN.9/WG.IV/WP.177, 31 October–4 November 2022.
UN General Assembly, United Nations Commission on International Trade Law Working Group IV (Electronic Commerce), Draft Provisions on Automated Contracting, A/CN.9/WG.IV/WP.182, 16–20 October 2023.
UN General Assembly, United Nations Commission on International Trade Law Working Group IV (Electronic Commerce), Provisions of UNCITRAL Texts Applicable to Automated Contracting, A/CN.9/WG.IV/WP.176, 31 October–4 November 2022.
UN General Assembly, United Nations Commission on International Trade Law Working Group IV (Electronic Commerce), The Use of Artificial Intelligence in Automation, A/CN.9/WG.IV/WP.173, 4–8 April 2022.
Upadhyay, Kritagya et al. “Paradigm Shift from Paper Contracts to Smart Contracts.” Paper presented at the IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA), 13–15 December 2021. https://ieeexplore.ieee.org/document/9750226.
Wang, Brydon. “Addressing Financial Fragility in the Construction Industry Through the Blockchain and Smart Construction Contracts.” Australian Construction Law Bulletin 30, nos 1–2 (2018): 116–23.
Wang, Brydon. “Blockchain and the Law.” Internet Law Bulletin 19, no 1 (2016): 250–54.
Wang, Brydon and Mark Burdon. “Augmenting Superintendent Discretion: Trustworthiness and the Automation of Construction Contracts.” Australian National University Journal of Law and Technology 2, no 1 (2021): 119–49.
Wei, Chengwei et al. “An Overview of Language Models: Recent Developments and Outlook.” Working paper, July 3, 2023. https://arxiv.org/abs/2303.05759.
Westlaw, The Laws of Australia (online, February 27, 2024). 7 Contract: General Principles, “2 Classification and Construction of Terms.”
Williams, Spencer. “Predictive Contracting.” Columbia Business Law Review 2 (2019): 621–95.
Yu, Fangyi, Lee Quartey and Frank Schilder. “Legal Prompting: Teaching a Language Model to Think Like a Lawyer.” Paper presented at the Natural Legal Language Processing Workshop, Abu Dhabi, August 8, 2022. https://arxiv.org/abs/2212.01326.
Zacks, Eric A. “Contract Review: Cognitive Bias, Moral Hazard, and Situational Pressure.” Ohio State Entrepreneurial Business Law Journal 9, no 2 (2015): 379–428.
Zeng, Wei et al. “PanGu-α: Large-Scale Autoregressive Pretrained Chinese Language Models with Auto-Parallel Computation.” Working paper, April 26, 2021. https://arxiv.org/abs/2104.12369.
Zhou, Hong, Binwei Gao, Shilong Tang, Bing Li and Shuyu Wang. “Intelligent Detection on Construction Project Contract Missing Clauses Based on Deep Learning and NLP.” Engineering, Construction and Architectural Management (2023): 1–35. https://doi.org/10.1108/ECAM-02-2023-0172.
Zhu, Xiaojin, Andrew Goldberg, Michael Rabbat and Robert Nowak. “Learning Bigrams from Unigrams.” Paper presented at the ACL-08 HLT, Columbus, OH, June 2008. https://aclanthology.org/P08-1075.pdf.
Zmigrod, Ran, Sabrina J Mielke, Hanna Wallach and Ryan Cotterell. “Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology.” Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, July 2019: 1651–61.
[1] The primacy of contract law amidst the law of obligation has been variously described. LexisNexis, Halsbury’s Laws of Australia [110–10], referring to Sinclair Horder O’Malley & Co v National Insurance Co of New Zealand Ltd [1991] NZHC 2971; [1992] 2 NZLR 706, 719.
[2] Christensen, The Construction and Performance of Commercial Contracts, 212.
[3] Lam, “Applying Large Language Models for Enhancing Contract Drafting,” 3.
[4] Tu, “Limits of Using Artificial Intelligence,” 225–26; Tholander, “Design Ideation with AI-Sketching,” 1.
[5] Their smaller counterparts are following suit, with over 40 per cent of mid-tier firms and just under 40 per cent of boutique firms having implemented, or being interested in implementing, generative AI in their practices. See Thomson Reuters, Australia: State of the Legal Market Report.
[6] See Dunlap, “Anyone Can ‘Think Like a Lawyer,’” 2838. See also Semmler, “Artificial Intelligence,” 89.
[7] See generally Dunlap, “Anyone Can ‘Think Like a Lawyer.’” See also Semmler, “Artificial Intelligence.”
[8] See generally Lam, “Applying Large Language Models for Enhancing Contract Drafting.”
[9] Yu, “Legal Prompting: Teaching a Language Model to Think Like a Lawyer,” 5.
[10] Christensen, The Construction and Performance of Commercial Contracts, 212.
[11] Jozefowicz, “Exploring the Limits of Language Modeling,” 2. This was similarly described as a modelling of ‘the generation of languages’: Lam, “Applying Large Language Models for Enhancing Contract Drafting,” 3.
[12] See Galindo, “Large Language Models,” 1–2.
[13] Choi, “ChatGPT Goes to Law School,” 387.
[14] Patil and Gudivada, “A Review of Current Trends.”
[15] Portinale, “Mapping Artificial Intelligence,” 12, citing Jurafsky and Martin, Speech and Language Processing, Ch 6.
[16] Kolt, “Predicting Consumer Contracts,” 79. See also Shou, “LLM4FUZZ,” 3.
[17] Lam, “Applying Large Language Models,” 3.
[18] See Lam, “Applying Large Language Models,” 8.
[19] Lam, “Applying Large Language Models,” 3; Katrak, “The Role of Language Prediction Models,” 49.
[20] Katrak, “The Role of Language Prediction Models,” 49, citing Sarker, “Machine Learning,” 2. See Lam, “Applying Large Language Models,” 3.
[21] Lam, “Applying Large Language Models,” 4.
[22] See Guha, “LegalBench,” 18. See also Shaghaghian, “Customizing Contextualized Language Models,” 2, 5.
[23] Shree, “The Journey of Open AI GPT Models,” cited in Choi, “ChatGPT Goes to Law School,” 387.
[24] Katrak, “The Role of Language Prediction Models,” 50, citing Brown, “Language Models are Few-Shot Learners”; Tsimpoukelli, “Multimodal Few-Shot Learning,” 1; Izacard, “Atlas,” 1.
[25] Katrak, “The Role of Language Prediction Models,” 49–50; Savelka, “The Unreasonable Effectiveness of Large Language Models,” 2.
[26] Katrak, “The Role of Language Prediction Models,” 49–50, citing Brown, “Language Models are Few-Shot Learners.”
[27] See Christiano, “Deep Reinforcement Learning from Human Preferences,” cited in Choi, “ChatGPT Goes to Law School,” 388.
[28] Benson, “The End of Legalese,” 520.
[29] See Kolt, “Predicting Consumer Contracts,” 90, citing Chalkidis, “LexGLUE,” 5–6.
[30] Lam, “Applying Large Language Models for Enhancing Contract Drafting,” 1.
[31] Bommarito, “GPT Takes the Bar Exam.”
[32] Perlman, “The Implications of ChatGPT for Legal Services and Society.”
[33] Choi, “ChatGPT Goes to Law School,” 388.
[34] Oltz, “ChatGPT, Professor of Law.”
[35] Katrak, “The Role of Language Prediction Models,” 53, citing Hendrycks, “Measuring Massive Multitask Language Understanding.”
[36] Katrak, “The Role of Language Prediction Models,” 53, citing Hendrycks, “Measuring Massive Multitask Language Understanding.”
[37] Discussed further in section 3 of this article.
[38] Allens, “Lawyer or Language Model?”
[39] Ghodoosi and Kastner, “Big Data on Contract Interpretation,” 2562.
[40] Arbel and Hoffman, “Generative Interpretation,” 454.
[41] Betts, “The Dawn of Fully Automated Contract Drafting,” 217, 220.
[42] Abioye, “Artificial Intelligence in the Construction Industry,” 8.
[43] Zacks, “Contract Review,” 383. See also Gilson, “Value Creation by Business Lawyers,” 255; Klein, “No Fool for a Client,” 350.
[44] Wang, “Blockchain and the Law.”
[45] Cowan and Smith, “Law as Code.”
[46] Song et al, “An Efficient Vulnerability Detection Model”; see also Zou, “When AI Meets Smart Contracts.”
[47] Ebers, “Artificial Intelligence, Contracting and Contract Law,” 20.
[48] Ebers, “Artificial Intelligence, Contracting and Contract Law,” 31–32.
[49] Zou, “When AI Meets Smart Contracts,” 41–42. For an example of automatic execution of contracts on the blockchain, see the contracts made to trade cryptocurrency Ethereum in exchange for Bitcoin by deterministic algorithms in the Singaporean appellate case Quoine Pte Ltd v B2C2 Ltd [2020] SGCA (I) 3.
[50] Ebers, “Artificial Intelligence, Contracting and Contract Law,” 35.
[51] Aggarwal, “ClauseRec,” 8770–71.
[52] Jennejohn, “Contractual Evolution,” 906–07.
[53] Jennejohn, “Contractual Evolution,” 906.
[54] Lam, “Applying Large Language Models.” They suggest a third technique that combines LLMs with uniform manifold approximation and projection (UMAP). Given the constraints of this article, this technique is not discussed in detail.
[55] Lam, “Applying Large Language Models,” 2, 5–6.
[56] Lam, “Applying Large Language Models,” 2, 3, 5, 6.
[57] See generally Zhou, “Intelligent Detection on Construction Project Contract Missing Clauses.”
[58] Upadhyay, “Paradigm Shift from Paper Contracts to Smart Contracts,” 263–64.
[59] Wang and Burdon, “Augmenting Superintendent Discretion,” 125.
[60] Fu, “Control, Coordination, and Adaptation Functions,” 3–4.
[61] Sharkey, “Standard Forms of Contract,” 5.
[62] Fu, “Control, Coordination, and Adaptation Functions,” 3–4.
[63] McNamara, “Automating the Chaos,” 120–1.
[64] See McNamara, “Automating the Chaos,” 120–1.
[65] See Stathis, “Towards a Foundation for Intelligent Contracts,” 89, 95.
[66] Wang, “Addressing Financial Fragility in the Construction Industry,” 1; Upadhyay, “Paradigm Shift from Paper Contracts to Smart Contracts,” 262.
[67] McNamara, “Automating the Chaos,” 128–130.
[68] Martinelli and Chauvenet, “From Document to Data,” 85–86.
[69] LexisNexis, “LexisNexis Launches Lexis+ AI.”
[70] LexisNexis, “LexisNexis Launches Lexis+ AI.”
[71] Motionize.
[72] Robin AI.
[73] Robin AI, “Product Update.”
[74] Ji, “Survey of Hallucination,” 5–6.
[75] Bender, “On the Dangers of Stochastic Parrots,” 616–17.
[76] See Katrak, “The Role of Language Prediction Models,” 56.
[77] Navigli, “Biases in Large Language Models,” 1.
[78] See Navigli, “Biases in Large Language Models,” 7.
[79] Ebers, “Artificial Intelligence, Contracting and Contract Law,” 25.
[80] Zacks, “Contract Review,” 391.
[81] Zacks, “Contract Review,” 393.
[82] Zacks, “Contract Review,” 393.
[83] LexisNexis, “LexisNexis Launches Lexis+ AI,” emphasis added.
[84] GlobalData, “Data Collection and Labelling Market Size.”
[85] Rowe, “It’s Destroyed Me Completely.”
[86] New York Times, “The Times Sues OpenAI and Microsoft”; The Guardian, “The Intercept, Raw Story and AlterNet Sue OpenAI.”
[87] Singapore, Proposed Model AI Governance Framework for Generative AI.
[88] The UNCITRAL framework in progress has had a number of sessions examining the use of AI within the area of automated contracting, including: UN General Assembly, “Advancing Work on Automated Contracting”; “Developing New Provisions to Address Legal Issues Related to Automated Contracting”; “Draft Provisions on Automated Contracting”; “Provisions of UNCITRAL Texts Applicable to Automated Contracting”; and “The Use of Artificial Intelligence in Automation.”
[89] Gordon v MacGregor [1909] HCA 26; (1909) 8 CLR 316, 322; see Christensen, The Construction and Performance of Commercial Contracts, 212; LexisNexis, Halsbury’s Laws of Australia, [110–2250], citing Codelfa Construction Pty Ltd v State Rail Authority of New South Wales (1982) 149 CLR 337, 347–48 (Mason J).
[90] Christensen, The Construction and Performance of Commercial Contracts, citing State Rail Authority (NSW) v Heath Outdoor Pty Ltd (1986) 7 NSWLR 170, 191 (McHugh JA in dissent).
[91] Nemeth v Bayswater Road Pty Ltd [1988] 2 Qd R 406, 414 (McPherson J).
[92] Codelfa Construction Pty Ltd v State Rail Authority of New South Wales (1982) 149 CLR 337.
[93] Codelfa Construction Pty Ltd v State Rail Authority of New South Wales (1982) 149 CLR 337, 352 (Mason J).
[94] Christensen, The Construction and Performance of Commercial Contracts, citing Codelfa Construction Pty Ltd v State Rail Authority of New South Wales (1982) 149 CLR 337, 352; Royal Botanic Gardens and Domain Trust v South Sydney City Council (2002) 240 CLR 45, [30].
[95] Westlaw, The Laws of Australia, [7.4.630], citing Life Insurance Co of Australia Ltd v Phillips [1925] HCA 18; (1925) 36 CLR 60, 79 (Isaacs J). See generally Giancaspro, “Is a ‘Smart Contract’ Really a Smart Idea?” 832–33.
[96] Cohney and Hoffman, “Transactional Scripts in Contract Stacks,” 378.
[97] Cole, “The Parol Evidence Rule,” 681.
[98] Wigmore, “A Brief History of the Parol Evidence Rule,” 338, cited in Cole, “The Parol Evidence Rule,” 682.
[99] Williams, “Predictive Contracting,” 661–62.
[100] Murray Goulburn Co-operative Co Ltd v Cobram Laundry Service Pty Ltd [2001] VSCA 57 [18]; Geroff v CAPD Enterprises Pty Ltd [2003] QCA 187 [36].
[101] Christensen, The Construction and Performance of Commercial Contracts, referring to the English cases of Innrepreneur Pub Co (GL) v East Crown Ltd [2000] 2 Lloyd’s Rep 611, 614 and Deepak Fertilisers and Petrochemicals Corp v ICI Chemicals and Polymers Ltd [1999] 1 Lloyd’s Rep 387, and to the Australian case of McMahon v National Foods Milk Ltd [2009] VSCA 153; (2009) 25 VR 251 (footnotes omitted); see also Mitchell, “Entire Agreement Clauses,” 225–26.
[102] [1954] HCA 72; (1954) 91 CLR 353.
[103] Masters v Cameron [1954] HCA 72; (1954) 91 CLR 353, 361 (Dixon CJ, McTiernan and Kitto JJ).
[105] Sinclair, Scott & Co v Naughton [1929] HCA 34; (1929) 43 CLR 310, 317, quoted in Baulkham Hills Private Hospital Pty Ltd v GR Securities Pty Ltd (1986) 40 NSWLR 622, 628.
[106] McGraw, “23 Security Risks in Black-Box Large Language Model Foundation Models,” 160–64.
[107] For example, Garg, “Counterfactual Fairness in Text Classification,” 219–26; Feder, “CausaLM,” 333–86; Zmigrod, “Counterfactual Data Augmentation,” 1651–61; Sia, “Logical Satisfiability of Counterfactuals,” 9837–45.
[108] Chen, “Word-Level Interpretation of ChatGPT Detector,” 272–78; see also Shariatmadari et al, “Harnessing the Power of Knowledge Graphs.”
[109] Lee et al, “LLM Attributor”; see also Strom, “Fake ChatGPT Cases.”
[110] Lee et al, “LLM Attributor,” 6.
[111] Christensen, The Construction and Performance of Commercial Contracts [9.265], citing Thomas National Transport (Melbourne) Pty Ltd v May & Baker (Australia) Pty Ltd [1966] HCA 46; (1966) 115 CLR 353 [273] (Windeyer J).
[112] Ghodoosi and Kastner, “Big Data on Contract Interpretation,” 2569.
[113] Ebers, “Artificial Intelligence, Contracting and Contract Law,” 35.
[114] Arbel and Hoffman, “Generative Interpretation,” 510–11.