By Sophia Maranan, Jerry To and Tim Yang
In recent years, we have seen unprecedented technological advancement across a broad range of disciplines, revolutionising the way many tasks are performed. Such technology is remarkably applicable to the law, where artificial intelligence (AI) systems are being used to automate and augment legal work such as legal research, e-discovery, contract drafting, and predicting litigation outcomes. In June 2020, OpenAI released a new AI system called GPT-3, a language model with 175 billion parameters.[1] Due to its size, GPT-3 can carry out specific tasks such as generating code, solving problems, and composing poems without the need for large amounts of task-specific input data.[2] As it is in its beta stage, it is accessible only to a limited number of individuals.[3] Regardless, the introduction of GPT-3 opens new opportunities pertinent to the field of law.
GPT-3 could make legal procedures more efficient, improve access to justice, and reduce the cost of tasks such as contract drafting. This article discusses some of GPT-3’s capabilities in more detail, particularly in the area of contract law, including its ability to draw on vast amounts of information to anticipate potentially frustrating events. It also considers the potential pitfalls of GPT-3, such as the difficulty of attributing liability when a contract drafted with GPT-3 is found to be illegal.
The upside: Anticipating future events and lowering the cost of contract drafting
COVID-19 has spurred numerous recent changes in government policy and the economy, which have significantly disrupted the operation of many commercial contracts and contributed to the expected increase in contract dispute volumes for 2020.[4] These disputes arise in part because many contracting parties are unlikely to have included a clause giving them the right to terminate on the grounds of the disruptions caused by COVID-19. These circumstances have led many to turn to the doctrine of frustration to discharge their onerous contractual obligations, as evidenced by the nearly 500% increase in Google search interest for the term “contract frustration”.[5]
The doctrine of frustration enables courts to terminate a contract that has become incapable of being performed without rendering performance radically different from what the contracting parties originally intended at the time of formation. The doctrine will not be invoked, however, if the parties ought reasonably to have foreseen the frustrating event.[6] This latter limb is problematic for parties, since the events they can actually foresee, given their resource constraints, limited knowledge, and a myriad of other idiosyncrasies, may differ vastly from what a court believes they should reasonably have foreseen. Consequently, contracting parties who face great difficulty, but not impossibility, in performing their contractual obligations will not be entitled to invoke the doctrine of frustration.
However, with GPT-3’s powerful language model, many future frustrating events like COVID-19 that are only just entering the news cycle could be spotted and flagged early, allowing legal practitioners to draft more comprehensive contracts that account for the impacts of these events. Such news sentiment extraction and analysis has already been applied successfully in the related field of economics without GPT-3: researchers created a sentiment index that extracts keywords from news articles and showed it capable of providing statistically significant predictions of near-term economic conditions.[7] GPT-3 would not only speed up this trend-spotting but could also improve the accuracy of the results, given that its 175 billion parameters were trained on vast arrays of textual data, including the news cycle. Hence, GPT-3 could reduce, or at best eliminate, the risk of costly contractual disputes by helping parties draft contracts that allow termination when an event that could impact their transaction occurs.
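To make the idea of news sentiment extraction concrete, the following is a minimal, illustrative sketch of keyword-based sentiment scoring over headlines. The word lists and headlines are assumptions for demonstration only; they are not the methodology of the cited researchers, and a real system (let alone GPT-3) would be far more sophisticated.

```python
# Toy keyword-based sentiment index over news headlines.
# Word lists below are illustrative assumptions, not a real lexicon.

NEGATIVE = {"pandemic", "lockdown", "shortage", "default", "recession"}
POSITIVE = {"recovery", "growth", "surplus", "expansion"}

def sentiment_score(headline: str) -> int:
    """Score a headline: +1 per positive keyword, -1 per negative keyword."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def sentiment_index(headlines: list[str]) -> float:
    """Average sentiment across a batch of headlines."""
    if not headlines:
        return 0.0
    return sum(sentiment_score(h) for h in headlines) / len(headlines)

headlines = [
    "Global pandemic forces nationwide lockdown",
    "Supply shortage hits manufacturers",
    "Signs of recovery as growth returns",
]
print(sentiment_index(headlines))  # a negative index flags emerging disruption
```

A persistently negative index over incoming headlines is the kind of early signal that could prompt drafters to include explicit termination or force majeure provisions.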
More generally, GPT-3 can assist with contract drafting by reducing drafting time and minimising the blind spots often missed while drafting. This improvement in efficiency is expected, since the rapid natural language processing abilities of AI like GPT-3 allow it to “interpret and understand questions presented in plain language… by analysing the words, sentence structure and patterns of human communications” to draft contract-like documents.[8] For example, tools such as LawGeex that are built on natural language processing engines help identify and include frequently missed clauses, and have been claimed to reduce the time practitioners spend reviewing contractual documents by up to 80% and to speed up the drafting and contract signing process by up to 300%.[9] Given the impact of legal software built on older natural language processing engines, the adoption of GPT-3, a substantially more powerful engine, would only bolster these favourable effects on the legal profession’s costs and speed of operations.
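The idea of catching frequently missed clauses can be illustrated with a toy rules-based checker. This is not LawGeex’s actual engine; the clause names and trigger phrases below are assumptions chosen purely for demonstration.

```python
# Toy checker that flags commonly omitted clauses in a draft contract.
# The clause names and trigger phrases are illustrative assumptions.

REQUIRED_CLAUSES = {
    "force majeure": ["force majeure", "act of god"],
    "termination": ["terminate", "termination"],
    "governing law": ["governing law", "jurisdiction"],
}

def missing_clauses(contract_text: str) -> list[str]:
    """Return clause names whose trigger phrases never appear in the draft."""
    text = contract_text.lower()
    return [
        name
        for name, phrases in REQUIRED_CLAUSES.items()
        if not any(p in text for p in phrases)
    ]

draft = "Either party may terminate this agreement with 30 days notice."
print(missing_clauses(draft))  # → ['force majeure', 'governing law']
```

A language model replaces the brittle phrase matching with semantic understanding, but the workflow, scanning a draft and surfacing likely omissions for a human drafter, is the same.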
The downside: Liability in AI-created contracts
Despite the considerable benefits GPT-3 provides, its widespread use in the creation of contracts raises significant concerns about how to apportion liability to the non-human agents that helped create those contracts when they are held to be unenforceable. Although the High Court has long held that professionals owe a duty of care to their clients, and that failure to fulfil that duty constitutes negligence, extending the doctrine of negligence to cover AI liability is another matter entirely. This underscores the fundamental difficulty of reconciling the ancient, and sometimes anachronistic, nature of the common law with the incredible advances of modern technology.
The first and perhaps simplest challenge is that, under the current law, machines are regarded as either services or products and as such have no legal personality. There have recently been calls within the legal profession, such as from the International Bar Association, for courts to recognise a form of “AI personhood” akin to that of corporations so that AI systems can be sued, but it remains very unclear how this doctrine would work in practice. The second and much more difficult issue is how to apply the principles of negligence to a non-human agent. One of the most useful, and indeed ingenious, legal fictions in all of law is that of the ‘reasonable person’, whose conduct serves as the benchmark against which the parties are judged. The reasonable person would certainly weigh whether a potential loss was reasonably foreseeable before acting. However, since the decision-making process of AI differs immensely from that of humans, this standard cannot readily be applied. Advanced AI such as GPT-3 uses machine learning and massive data sets to solve problems without human interference. Can it really be said, then, that the programmer or firm responsible for programming the AI is also responsible for the path the AI took in reaching its conclusion, even though that path is completely devoid of human guidance and thus utterly unforeseeable?
Perhaps this philosophical dilemma will never be satisfactorily resolved, but for the time being the most feasible compromise is to implement regulations that make the use of AI such as GPT-3 the exclusive prerogative of certified legal professionals. This would not only introduce an element of human oversight over the whole process but also make liability easier to trace. The Government could create two agencies to fulfil this goal: one to set the relevant standards and ethical requirements, and one to enforce them. The courts could then simply treat the AI in question as an agent, akin to the vicarious liability doctrine currently applied to businesses for employees acting in the course of their employment,[10] which would shift the liability burden from the AI to the lawyers using it. This approach solves the main issues regarding AI liability: by tying AI liability to human agents, the applicability of the reasonable person standard is maintained without undoing centuries of common law jurisprudence.
In conclusion, GPT-3 could deliver favourable outcomes for the legal profession and create greater opportunities for conflict resolution and a more effective and accurate justice system. However, there are also risks and drawbacks to introducing technology into a typically traditional field; the question of liability, in particular, must be tackled. This should not, however, prevent GPT-3 from realising its full potential in the legal profession. Overall, GPT-3’s capabilities are exceptionally promising and could pave the way for the stable use of technology in law.
[1] Will Douglas Heaven, ‘OpenAI’s new language generator GPT-3 is shockingly good – and completely mindless’, MIT Technology Review (Blog, 2020) 3 <https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/>.
[2] Dale Markowitz, ‘GPT-3 Explained in Under 3 Minutes’, Dale on AI (Blog, 2020) 8 <https://daleonai.com/gpt3-explained-fast>.
[3] Ibid 10.
[4] Norton Rose Fulbright, 2019 Litigation Trends Annual Survey (Survey, 2019) 5 <https://www.nortonrosefulbright.com/-/media/files/nrf/nrfweb/knowledge-pdfs/final---2019-litigation-trends-annual-survey.pdf>.
[5] Google, ‘Google Trends’, Google Trends Explore (Web page, 19 September 2020) <https://trends.google.com/trends/explore?date=today%205-y&q=contract%20frustration>.
[6] Davis Contractors v Fareham Urban District Council [1956] AC 696; Codelfa Construction Pty Ltd v State Rail Authority (NSW) (1982) 149 CLR 337.
[7] Kim Nguyen and Gianni La Cava, ‘News Sentiment and the Economy’ (Bulletin, Reserve Bank of Australia, 19 June 2020).
[8] Sean Semmler and Zeeve Rose, ‘Artificial Intelligence: Application Today and Implications Tomorrow’ (2017) 16 Duke Law & Technology Review 85, 87.
[9] LawGeex, ‘LawGeex’, LawGeex (Web page, 19 September 2020).
[10] Prince Alfred College Inc v ADC (2016) 258 CLR 134.