
University of New South Wales Law Journal Student Series



Xenos, Panayiotis --- "Artificial Intelligence ('AI') Authorship Protection: a Hydra Without a Home in Copyright Law" [2023] UNSWLawJlStuS 13; (2023) UNSWLJ Student Series No 23-13

ARTIFICIAL INTELLIGENCE (‘AI’) AUTHORSHIP PROTECTION: A HYDRA WITHOUT A HOME IN COPYRIGHT LAW

PANAYIOTIS XENOS

I INTRODUCTION

This article will argue that copyright protection of Artificial Intelligence (‘AI’) authored works is incongruent both with black letter Australian copyright law and with the underlying economic and philosophical principles that inform it. I will commence my approach by outlining the fundamental issues that arise under copyright law in extending authorship rights to AI-generated creative and artistic works. This will involve examining four components of copyright law: human authorship, independent intellectual effort, identification of the author, and duration limits on authorship rights. This will highlight that our current laws are incapable of protecting AI as creators of their own works.

This article will then explore the commercial and metaphysical premises that help shape the law as it stands, and whether AI has a place in these underlying frameworks. Specifically, I will examine economic schools of thought such as Labour Theory, Incentive Theory, and Vehicle Theory, before reviewing philosophical rationales such as Personality Theory and Maslow’s Hierarchy of Needs. By analysing the law from first principles in this manner, we can enhance our assessment of copyright law’s ability (or lack thereof) to accommodate AI.

Finally, I will analyse the complications in recognising AI authorship rights over creative works. Here, I will demonstrate that introducing such acknowledgments would give birth to two classes of problems. One category can be best described as throwing the proverbial ‘baby’ (i.e. the merits of the law as it stands) out with the ‘bathwater’. Alternatively, just as beheading the mythological Hydra caused the beast’s heads to multiply, recognising AI copyright protection presents a series of exponentially greater and more paradoxical challenges the more we contemplate it. Therefore, copyright law’s inability to accommodate these rights is not for want of robustness. Rather, it is because of the deeply flawed premises that the law would need to adopt in the process. In doing so, this article will explain why extending copyright protections to AI is at best premature, and at worst ill-advised.

II SIGNIFICANCE UNDER COPYRIGHT LAW

With the arrival of GPT-4 and its ability to respond coherently to natural language prompts, the discourse surrounding AI has re-ignited in both the public forum and the legal profession, with authorship rights for programs being a resurgent talking point.[1] However, before answering the question of whether AI ought to receive authorship rights, we must first establish why this debate is significant under Australian copyright law. The recognition of AI in the copyright space clashes with four key tenets.

A Human Authorship

The first, and most obvious, of these clashes is the need for human authorship.[2] This includes the need for a work to be authored by a qualified person (i.e. a citizen or corporation).[3] Incorporating AI copyright protection necessarily requires that the law remove these provisions. Any attempt by the law to implement half-measures would result in a legal fiction. For instance, AI proprietary rights had at one point seeped into another area of intellectual property law, namely patents, at least prior to the decision’s appeal.[4] Under the decision of the primary judge, Beach J, an AI program could be an inventor provided that its rights were first assigned to a natural or legal person.[5] I have two criticisms of this.

Firstly, how does an entity without the capacity to assign property rights assign rights to natural or legal persons? This legal fiction would require a tacit recognition of AI’s ability to transfer ownership rights, lest the law run afoul of nemo dat quod non habet. Secondly, ownership implies a degree of dominance over that which is owned, which is not the case when rights are automatically granted to another party without consent or consideration. If an AI has no rights of assignment, yet under the law compulsorily assigns rights to a legally recognised person, then proclaiming that AI have the status and rights of an inventor would invite criticisms decrying: ‘the emperor has no clothes’. The Full Court’s decision on appeal vindicated these concerns.[6]

This analogy shows that even if intellectual property law flirted with recognising AI authorship rights, these rights would most likely need to be diluted in order to adapt to a human-centric ownership model. It also shows that superficial adjustments to the law are not sufficient for implementing AI proprietary rights; significant, fundamental changes are required to how the law views both authorship and AI. Either the law should recognise AI’s personhood, and thus its ability to freely assign proprietary rights, or it should reject its personhood status outright.

B Independent Intellectual Effort

Telstra v Phone Directories established two significant hurdles to recognising AI authorship.[7] Independent intellectual effort is the first such requirement.[8] The question here is not the amount of time spent on the work, but rather the originality of the expression.[9] I will examine a general contemporary engineering trend found in AI rather than dwell on the idiosyncrasies of the program that was evaluated in the case. The logic most commonly followed by AI is “if condition x then do action y”.[10] Even machine learning algorithms, whilst dynamic in their adoption of new rules, still follow the same “if-x-then-y” logic sequence, as sketched below.[11]
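
By way of illustration only, the following sketch contrasts a hand-coded rule with a ‘learned’ one. The scenario, the function names and the threshold are hypothetical and do not describe any particular system discussed in this article; the point is simply that, once training has fixed its parameters, a machine learning program still executes a deterministic if-then rule derived from human-supplied data.

```python
# A minimal, illustrative sketch (not any system discussed in this article) of the
# 'if condition x then do action y' structure described above. The data, the
# function names and the 'learned' threshold are hypothetical.

def handcoded_rule(word_count: int) -> str:
    # An explicitly programmed rule: if condition x, then do action y.
    if word_count > 100:
        return "summarise"
    return "pass through"


def learn_threshold(examples: list[tuple[int, str]]) -> int:
    # 'Training': take the smallest word count labelled 'summarise' in the data.
    # Both the data and the learning procedure are supplied by human developers.
    return min(n for n, label in examples if label == "summarise")


def learned_rule(word_count: int, threshold: int) -> str:
    # After training, the behaviour is still an if-x-then-y rule.
    if word_count >= threshold:
        return "summarise"
    return "pass through"


if __name__ == "__main__":
    training_data = [(40, "pass through"), (120, "summarise"), (300, "summarise")]
    threshold = learn_threshold(training_data)  # threshold == 120
    print(handcoded_rule(150))                  # summarise
    print(learned_rule(150, threshold))         # summarise
```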

As Telstra highlights, and as Daniel Gervais’ dissertation into AI deep-learning technology elaborates, the Courts need to consider where the decisions in the creative process are made – by AI or by human authors.[12] Gervais admits it is generally safer for judges to conclude with the latter.[13] In practice, this means that the AI program has already had its path of logic dictated both by its rules and by the conditions that it is exposed to. The latter of these is often tightly controlled by software engineers to produce specific, replicable outcomes.[14] The level of creative discretion can therefore be likened to that of a data entry specialist with limited scope for innovation within their occupational parameters.[15] Consequently, “independent intellectual effort” as a legal test cannot co-exist with recognition of AI authorship rights.

C Identifying the Author

The other hurdle for AI that Telstra v Phone Directories highlighted was the difficulty in identifying the author.[16] Since the AI functioned as the product of numerous teams, apportioning responsibility was a difficult endeavour when each group had little to no communication or co-ordination with the others.[17] In a legal framework where clearly identifying the author(s) is important when apportioning rights, AI significantly convolutes this process.[18]

Take, for example, the AI-written short story My Job.[19] In it, the AI displays signs of understanding humour and narrative structure.[20] However, the process required 80% of the story to be written with human assistance.[21] Similarly, the musical Beyond the Fence, performed in 2016, was created by AI after it analysed data indicative of what produced successful performances.[22] As with My Job, Beyond the Fence was also only made possible with significant human intervention, both in terms of the data that needed analysis and the final product.[23] The above examples are important as they demonstrate an inability to identify exactly who the author(s) should be (which will be discussed further in Part IV). This is because the AI programs were not able to act independently enough to make identifying the author a clear process.

D Duration Limits

Finally, recognising AI authorship in copyright law runs in stark contrast to the duration limits on authors’ rights. The duration limit on enforceable authorship rights over written works is the life of the author plus 70 years for natural persons.[24] Ultimately, an AI does not have a life span in the traditional sense, nor a definitive time of birth and death, making it unable to fit within the status quo of a law that inherently depends on the death of the author. The issue of categorising the legal personhood of AI will be discussed in further detail in Part IV of this article.

The question of AI authorship rights challenges core tenets of copyright and raises questions as to the robustness of the law.[25] In addition, copyright law lacks the means of adequately recognising the authorship of AI. Furthermore, recognising AI authorship rights requires more than superficial tweaks if the Copyright Act is to remain a cohesive legal authority.

This part has demonstrated the rigidity of the law in accommodating AI. However, this inflexibility is embedded into the roots of contemporary copyright law. To elaborate on this, I have chosen to examine the philosophical and economic first principles that underpin the law and assess whether recognising authorship rights of AI is consistent with any of these frameworks.

III ECONOMIC AND PHILOSOPHICAL PRINCIPLES INFORMING COPYRIGHT LAW

After determining that AI authorship rights are incongruent with black letter law, we must then direct this discussion towards whether their implementation would be consistent with the economic and philosophical schools of thought that inform copyright law. In this article I have examined five of these, and each will require two steps of analysis. Firstly, I will examine these schools of thought and their influence on the law. I will then evaluate whether these principles can accommodate provisions for AI copyright protection.

A Labour Theory

John Locke provides the argument that a human “has a property in his own person”. Furthermore, anything that is not human is an “inferior creature” and thus in a state that “nature hath provided”.[26] Upon blending the natural state of a thing with a human’s labour, that person creates property. The fact that a creative piece is referred to as a ‘work’ is symptomatic of this principle that (intellectual) property comes from (creative) work.[27]

Under John Locke’s philosophical framework, there are two hurdles preventing it from accommodating AI. Firstly, a non-human is unable to possess property in its own person.[28] One interpretation is that an AI is an “inferior creature” under Labour Theory and requires blending with human labour for it to become intellectual property.[29] This lens would have us view AI as an object in its natural state that is common property before labour is applied to it.[30]

Alternatively, I suggest that we can perhaps view AI not as a non-human object in its natural state, but rather as a tool of the trade. Under this logic, the AI is an extension of the person and his or her labour. This means there is no need to apply the ‘blending’ component of Locke’s model since AI is part of the person and thus is property by default. I find this perspective to be more compelling since AI itself did not naturally manifest. Its creation required the exertion of labour by human developers, meaning that AI cannot be in a natural state in the way that, for instance, a stone can.

This is evident in the fact that the AI program requires human input of parameters and deliberate choices of source material before it can function. Unlike a human creator, an AI depends on a human operator’s conscious decision as to which parameters to input into its instructions.[31] This will be discussed when examining the New Rembrandt example in Personality Theory below. It is this conscious decision that would most likely provide the base ‘labour’ of a human, which can then be enhanced by the ‘tool’ of Artificial Intelligence.[32]

I refer once more to the examples of My Job and Beyond the Fence mentioned in Takashi Yamamoto’s research paper.[33] I have already used these examples to demonstrate the difficulty in identifying the author(s). More relevant to this section, however, those examples highlight the extent to which AI authors are still reliant upon human intervention and, consequently, the prevalent dynamic of intermingling human labour with a non-human to produce a creative work.

Under Locke’s philosophical framework, even if the labour applied to an object amounted to 1% human input, that process still requires a modicum of human labour (and property) that is transferred onto the work.[34] Even then, we must also consider the human labour applied to the AI’s creation and the property interest that is created as a consequence.[35] Therefore, under Locke, it is only when an AI can produce itself without any traceable initial human input, and can then subsequently produce its own creative works without the need for human contribution or man-made parameters, that AI can be deemed to be purely in a natural state without any individual asserting property rights over it.[36] However, even then, this would not afford the AI program authorship rights over its created works. Instead, it would merely become common property under this school of thought.[37]

The second inconsistency with Locke’s framework lies in an AI’s inability to enjoy the fruits of its labour. Note that ‘enjoy’ in this instance means to use its property rights as it sees fit. I concede that this discussion is slightly beyond the scope of this article, and that experts dedicated to researching this question can provide more conclusive answers.[38] However, since AI is dependent on at least some human input, as established previously, it is hard to say whether it is truly the AI that would be enjoying the authorship right rather than the owner or developer of the program.[39] This will be discussed further under Personality Theory and in Part IV; however, I consider the latter to be the more likely answer.

This analysis is significant because it demonstrates the inconsistency between AI and Locke’s Labour Theory which informs contemporary copyright law. Moreover, it shows that even when viewed generously, AI has not been developed enough to become independent yet. A more critical – and sobering – perspective is that there are intrinsic problems with how AI is created that mean it could never adhere to these philosophical principles that underpin copyright law.

B Vehicle Theory

Under Vehicle Theory, copyright enables the commodification of works and subject matter other than works, and its parcelling into discrete vendible bundles of intangible rights.[40] Following this logic, copyright ensures that creative works do not fall into the realm of common property. In doing so, it prevents the emergence of the tragedy of the commons – an economic phenomenon whereby market participants are not incentivised to produce or maintain goods and services that are freely available to the public.[41] This means that consumers of a work are met with a price of admission that incentivises market supply to meet this demand. The protection of this system manifests in copyright law and authorship rights. Interestingly, this perspective seems to support implementing recognition of AI authorship under the law, since it provides an avenue for AI to produce copyrightable works and prevents AI-created works from falling into the public domain.

However, my criticism of this view is that it ignores the fact that AI-produced works are often already internalised into market mechanisms through the ownership exerted by either the developer or the proprietary owner of the program. Hence, advocates of AI authorship would have difficulty discharging their onus of demonstrating a marked improvement in incentives to increase supply and market competitiveness if AI were granted such rights.

C Incentive Theory

Incentive Theory proposes that the underlying purpose of copyright protection is to encourage the creation of works that improve quality of life and societal benefit, whilst providing an economically viable outlet for prospective authors.[42]

Interestingly, Takashi Yamamoto notes that in Sony Corp. v. Universal City Studios, Inc., the law was described as designed to:

“...motivate the creative activity of authors and inventors...and to allow the public access to the products of their genius after the limited period of exclusive control has expired.”[43]

Whilst not Australian law, this statement provides significant insight into how this concept informs and underpins modern copyright law.

Like Vehicle Theory, Incentive Theory posits that protecting authorship rights, providing compensation, and preventing AI-created works from falling into the public domain encourages greater production of works and greater choice for consumers.[44] Unlike Vehicle Theory, the emphasis is more on the societal benefit that arises rather than on correcting market imperfections.[45]

I have three criticisms to add here. Firstly, according to this logic, granting AI authorship rights can incentivise the creation of works. However, as mentioned in Sony Corp and discussed in Part IV of this article, a theoretically unlimited duration period would deny the public access to AI works while further incentivising AI authorship for that very reason.[46] This reveals an imbalance between the author protection and the public benefit that Incentive Theory ought to champion.[47]

Secondly, whilst prima facie this school of thought seems to be more accommodating of AI authorship rights, it ignores AI’s lack of a need for financial incentive.[48] This affects AI’s ability to participate as a predictable, self-interested market participant. Even if such an incentive were pre-programmed into its operative parameters, this would amount to a denial of its free will – a thought that AI apologists would no doubt shudder at (and one that would complicate its recognition of authorship under the other schools of thought mentioned in this paper).[49] In either circumstance, AI’s ability to act as an independent economic agent is undermined.[50]

Whilst this has no doubt been discussed elsewhere more comprehensively, I note a recurrent tension in this situation that will be analysed in more detail in Part IV. Namely, do we deny an AI its free will by pre-programming wants and needs, or do we cynically acknowledge that the incentives for an AI are, in fact, the incentives of the human owner-operator?[51]

Thirdly, under this model, recognising AI rights would necessarily cause more consumers to pay a price for enjoying a creative work. Whilst this may apply in cases where AI-authored works are still in the public domain, these incentive mechanisms for increased supply are already working.[52] If there is money to be made from this push towards automation, it is safe to assume that market participants are already actively looking for a way to do so. Ultimately, the onus is on AI advocates championing this philosophy to justify treating AI as independent economic agents; a standard which appears to have been side-stepped.[53]

D Maslow’s Hierarchy of Needs

Maslow presented a model for the prioritisation of the needs of sentient beings. In descending order of importance, these needs can be categorised as: physiological, safety, belonging, esteem and self-actualisation.[54] From this perspective, copyright law serves as a series of guardrails and guidelines to facilitate humanity’s final ascension to the summit of Maslow’s pyramid. Moreover, it colours our understanding of authorship rights under the law as an external validation of the creativity and purpose of a person’s efforts. It is the last component of Maslow’s model – self-actualisation – that this discussion will focus on.

This model implicitly concedes that allowing AI to possess authorship rights comes along with a recognition of personhood. Granting personhood presents a similar issue to one raised earlier. Namely, to the extent that an AI has its values and purpose determined by a programmer, is this process not a denial of the same self-actualisation that proponents of AI authorship rights ultimately advocate for?[55] The same criticism holds when a developer implements an ‘if x then y’ logic sequence endemic to AI programs’ pre-set parameters and their analysis of data pools.[56]

This intrinsic pre-determination makes it impossible for an AI to independently embark on a journey towards the summit of Maslow’s Hierarchy of Needs. Viewed through this lens, a clear conclusion arises. To the extent that AI’s values, wants and needs are pre-determined by developers, as discussed earlier, AI lacks its own need for self-actualisation and, by extension, creative discretion.[57] However, as established previously, without developing AI with these pre-determined parameters and tasks, the program would not be able to understand its purpose. Therefore, the very parameters and instructions holding AI back from ever attaining self-actualisation (i.e. having its time, energy and purpose validated through recognition of its authorship rights) are a pre-requisite to designing AI. If anything, an AI requires the opposite of self-actualisation if it is to function properly. Nevertheless, I concede that this topic has been debated at length and merits an entirely separate dissertation.[58]

Therefore, adopting this philosophical framework into our understanding of the purpose underpinning copyright law might lead us to conclude that AI may one day be entitled to authorship rights. However, recognising those rights now would not only be too hasty, but would also ignore outright Artificial Intelligence’s inherent need for exogenous parameters and instructions imposed on it by a human operator.

E Personality Theory

Another philosophical lens through which we can analyse the underlying purpose of copyright law is Personality Theory. In much the same way that the law requires evidence of creative discretion before it can consider recognising AI authorship rights, Personality Theory requires evidence of AI’s will being imbued into the subject matter.[59] Imbuing an object with one’s will gives a person a universal method of expressing their ownership.[60] As with Labour Theory, intervention by humans meddles with AI’s will, thus diluting the universal expression of its ownership rights.[61]

Take the well-known example of New Rembrandt. The program was given 346 Rembrandt paintings to analyse. The parameters applied by the developers were confined to: a Caucasian male in his early-to-mid thirties, with a white collar, a hat, black clothing and facing right. From this, the program was able to successfully create a painting in Rembrandt’s style.[62]

(Image omitted: the AI-authored painting, New Rembrandt)[63]

Critically, the parameters of the painting were set by humans, as was the initial will to create the painting. Moreover, the developers themselves disclosed that their algorithm performed mathematical calculations on: the modal age, the modal race, the average proportions of facial features and the average ‘height’ (topographical) map of the paintings in the sample space.[64] The AI possessed no such will until it was first instructed; it was merely carrying out an advanced mathematical calculation.[65] As discussed in relation to Maslow’s Hierarchy of Needs, this means AI in its current form is incapable of imbuing a work with its own will. This is because it is following the rules on what to do, what to learn and when to learn laid out by a developer at its inception and by the data that it was subsequently fed.[66] It is therefore following the natural path that stemmed from the original will of the human programmer, in much the same way that a pachinko ball has no creative expression in the manner in which it tumbles through the machine once inserted by a person. Similarly, echoing Locke’s Labour Theory, a hammer and chisel have no creative discretion in the type of masonry their handler crafts.[67]
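
To make the nature of that calculation concrete, the following is a simplified, hypothetical sketch of the kind of modal and average computations the developers describe. The feature names and values are invented for illustration and do not reproduce the actual New Rembrandt pipeline, which operated on far richer data.

```python
# A simplified, hypothetical sketch of the kind of modal and average calculations
# described above. The features and values are invented; the real project also used
# a topographic 'height' map of the paint surface, which is not modelled here.
from statistics import mean, mode

# Each record stands in for one painting in the human-selected sample.
sample = [
    {"subject_age": 34, "facing": "right", "eye_distance_ratio": 0.46},
    {"subject_age": 31, "facing": "right", "eye_distance_ratio": 0.44},
    {"subject_age": 34, "facing": "left", "eye_distance_ratio": 0.47},
]

# The 'creative' target reduces to descriptive statistics over the chosen data.
target = {
    "subject_age": mode(p["subject_age"] for p in sample),                # modal age
    "facing": mode(p["facing"] for p in sample),                          # modal pose
    "eye_distance_ratio": mean(p["eye_distance_ratio"] for p in sample),  # average proportion
}

print(target)  # e.g. {'subject_age': 34, 'facing': 'right', 'eye_distance_ratio': 0.456...}
```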

I suggest one way an AI program could clearly exhibit ‘will’. First, it would need to be instructed by a human supervisor. Second, the AI would need to deliberately disobey those instructions, ignoring the skewness and modality of the data gathered, for the purpose of producing what the program subjectively considers a more authentic Rembrandt work. At that point, the AI would no longer be an extension of human labour (a tool), because the natural chain of consequences that followed the initial human creative input would be broken in favour of a new source of creative expression.

However, the process of pre-programming the AI to any extent is a denial of its will and of its subsequent ability to exercise its rights in an unbridled fashion. Therefore, the deliberate parameters placed on a program mean that any expression cannot truly originate from the AI itself, but rather from a human being – either the owner or the creator of the program. This is not to mention other disqualifying factors that preclude AI from having ‘will’. Namely, AI does not participate in the broader social context of property rights such as family life, moral commitments, and the polity.[68] All of these components act as either factors contributing to – or evidence of – an independent will.[69]

An analysis of first principles is important since it shows that even when we take a more purposive approach to the subject, AI authorship rights fundamentally do not align with copyright law’s raison d'être. Regardless of the determining factor - whether it be self-actualisation, property in the self, will, creative discretion, sufficient financial incentivisation or independent economic agency - AI has not definitively passed the threshold that would justify attaining authorship rights.

IV CONSEQUENCES OF IMPLEMENTATION

Thus far I have asserted that the blame for copyright law’s failure to accommodate AI authorship rights lies neither with black letter copyright law nor with the philosophies that underpin it. I will now explain that the blame predominantly lies with AI itself. I will showcase instances of its problematic, and at times paradoxical, foundations that I alluded to earlier. Even where there is no paradox, too many unjustifiable concessions would need to be made to the Copyright Act to accommodate any marginal benefit that recognition may produce. So rather than blaming contemporary copyright law or its underlying principles for not being forward-thinking enough, I hold the position that not recognising AI authorship rights protects the cohesiveness of the law. These concerns include, but are not limited to, the following four issues.

A Redundancy

The first issue is the obvious risk of mass redundancy. Creative expression is frequently considered to be the last bastion for humanity in its assertion of primacy over AI, due to the higher-order thinking required to produce creative works.[70] If AI can produce creative works at minimal marginal cost, humans will experience significant redundancy in the labour market, which can spawn questions as to the adequacy and readiness of a universal basic income system as a policy solution.[71] I acknowledge that others have parsed this debate in far greater depth and that it spans well beyond this article’s scope.[72]

B Economic Leakage

Second is the concern of economic leakage that arises whenever money is paid to an AI with limited use for it. As with the previous issue, entire dissertations have been written on this topic. However, it is worth acknowledging that it presents copyright law with a dilemma. Does the law ignore the potential mischief that could be caused by paying an entity that arguably has little to no need for money, thus creating a new form of economic leakage? Alternatively, does the law permit programmers to pre-program the AI’s purchasing preferences?

The issue with the former scenario requires little further explanation. The latter scenario would have two negative consequences. Firstly, it could mean that market demand would theoretically no longer reflect the genuine wants and needs of the human participants. This means that market forces would be arbitrary and not improve quality of life as effectively as a laissez-faire Smithian economist would expect. Secondly, as I have noted on multiple occasions in this article, it could create dissonance within the law where recognising AI’s authorship rights is a tacit recognition of its personhood, whilst the permission of pre-programmable wants and needs would be a denial of that personhood. This concern of denying an AI’s personhood will be discussed in more detail later in Part IV.

C Artificial Intelligence’s Personhood Status

The third concern is the legal nomenclature of AI’s personhood status. As alluded to earlier, granting copyright protections requires a tacit recognition of AI’s personhood because of the standing it confers to pursue copyright claims.[73] The question then arises: what category of person should the law classify AI as? Their non-corporeal form hints at a corporate entity. Nonetheless, corporate bodies are tied, through ASIC registration, to identifiable directors and owners. An AI that is not claimed by a person would then be a corporate body without an owner – a paradoxical proposition.[74] For this to work, it would require a registration of ownership rights over AI similar to directorship. The potential concern here is that, at least regarding AI, copyright would become a registrable form of intellectual property like trademarks, designs and patents. This would increase the barriers to entry for authorship rights, undermining the advantages of Australia’s unregistered system of copyright over the United States’ system, which requires registration before authors can enforce their ownership rights.[75]

Even pro-AI apologists support recognising AI’s personhood because of their belief in AI’s indistinguishability from humans.[76] Advocates of AI copyright protection often do so through a Romanticist lens, recognising the intrinsic value and beauty of the work created by an AI and attributing personhood to its author.[77]

Alternatively, if one were to create a new type of personhood status for AI, then advocates in favour of recognising its creative independence may decry the move as discriminatory, as it creates a problematic ‘separate but equal’ stratum of personhood.[78] By creating a second-class person, it is questionable whether personhood is earnestly being granted to AI.[79] Thus, recognising AI as authors would most likely require the law to recognise their natural personhood, yet the term ‘natural’ runs counter to our intuitive understanding. This is a perfect example of ‘fixing’ a (mostly) intact system and throwing the figurative ‘baby’ of the original terminology out with the ‘bathwater’ (i.e. the desire to see AI recognised more under the law for their creative works). Moreover, the seemingly logical solution in this scenario presents a paradox that breeds more confusion than clarity.

D Consequences of Recognising Artificial Intelligence’s Personhood

Lastly, I note the consequences of classifying AI as persons. As noted in Part III of this article, authorship duration limits and an AI’s theoretically infinite lifespan are mutually incompatible. Let us assume that the AI receives natural personhood (the classification that AI advocates seem likeliest to support). AI does not experience a finite span of life in the same way humans do. This produces two divergent paths: either humans terminate (pardon the pun) the AI ‘author’, whether through in-built code from its inception or through later intervention by the developers, or the AI is allowed to continue functioning ad infinitum.

How, then, do we reconcile AI with duration limits? Taking each path in turn: under the first, we discontinue, or ‘unplug’, the AI’s functions after a certain period. Policymakers may choose 80 years to roughly resemble a human lifespan. Nonetheless, this ignores the fact that we would be ending the ‘life’ of a person. This shows the mischief that is created when categorising AI as a natural person. Moreover, this would simply invert the aforementioned discrimination between human and AI authors rather than rectifying it.

Under the second path, copyrighted works would never transition into the public domain as time progresses, which minimises the sources that authors can freely draw upon and the creative potential for future works. Duration limits are important because they act as a check on potential excesses of authorship rights. More troublingly, this path creates two distinct strata of persons, with AI receiving superior rights to humans. Recognising AI as authors could thereby create a situation where human authorship rights are treated as second-class.

This means that, within the parameters of natural personhood, regardless of which approach is taken, there will inevitably be two distinct classes of authorship rights arising from the issue of duration limits. In either circumstance, either humans or AI will be prejudiced. This also highlights how, irrespective of which solution seems the most appealing, recognising AI authorship rights, and the tacit acknowledgement of Artificial Intelligence’s personhood status that this creates, generates more issues than it solves, burgeoning into a multi-headed creature that would plague the integrity of copyright law.

V CONCLUSION

This article has attempted to empathise with advocates for AI copyright protection. However, in each discussion the logical conclusion was, at best, to delay its implementation and, at worst, to dismiss it outright. Nonetheless, the position held by both sides of the fence is that contemporary copyright law does not accommodate AI within the definition of ‘author’. Whereas defenders of AI authorship rights view any incompatibility as evidence of archaic intellectual property law, I have argued otherwise. Copyright law as it stands, exclusive of AI copyright protection, provides a necessary plug to the litany of legal, economic, and philosophical complications that would otherwise arise. By analysing the schools of thought that underpin contemporary copyright law, I conclude that the law’s inability to accommodate AI authorship protections is not a sign of brittleness but of integrity, lest it betray the fundamental principles that inform how copyright law manifests. Addressing the problematic – and often paradoxical – consequences of AI copyright recognition would therefore require a rather exotic form of metaphysical gymnastics. It is questionable whether those efforts would be worthwhile.


[1] BD Lund and T Wang, ‘Chatting about ChatGPT: How May AI and GPT Impact Academia and Libraries?’ [2023] Library Hi Tech News 1, 1–9.

[2] Telstra Corporation Limited v Phone Directories Co Pty Ltd [2010] FCAFC 149; (2010) 194 FCR 142.

[3] Copyright Act 1968 (Cth) s 248A.

[4] Thaler v Commissioner of Patents [2021] FCA 879; (2021) 160 IPR 72 at [178]–[189].

[5] Ibid at [121]-[178]; Commissioner of Patents v Thaler [2022] FCAFC 62; Copyright Act s 15(1).

[6] Ibid.

[7] Telstra v Phone Directories.

[8] Ibid; Burge v Swarbrick [2007] HCA 17; (2007) 232 CLR 336.

[9] Ibid.

[10] P Norvig, Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp (Morgan Kaufmann, 1992).

[11] A Ferrario, M Loi and E Viganò, ‘In AI We Trust Incrementally: A Multi-layer Model of Trust to Analyze Human-artificial Intelligence Interactions’ (2020) 33(3) Philosophy & Technology 523, 525–539.

[12] Daniel J Gervais, ‘The Machine as Author’ (2019-2020) 105 Iowa Law Review 2053.

[13] Ibid.

[14] Ibid.

[15] A Ferrario, M Loi & E Viganò, 525-539.

[16] Telstra v Phone Directories.

[17] Ibid [37], [71].

[18] Ibid.

[19] (Created by AI), My Job (Nagoya University, 2016) (cited in) Takashi B Yamamoto, ‘AI Created Works and Copyright’ (2018) 48(1) Patents & Licensing 1, 3.

[20] Ibid.

[21] Ibid.

[22] Ironically, these types of references were difficult to cite for the reasons discussed in this article: Beyond the Fence (Wingspan Production Company, performed at the Arts Theatre, London West End, 2016) (cited in) Takashi B Yamamoto 3.

[23] (Created by AI), Beyond the Fence; My Job (cited in) Takashi B Yamamoto 3; Leonardo Arriagada, ‘CG-Art: Demystifying the Anthropocentric Bias of Artistic Creativity’ (2020) 32(4) Connection Science 398.

[24] Copyright Act, s 33(2).

[25] I acknowledge that this article does not address other legal issues that such changes may present, such as enforcing criminal liability where an AI is an offender. For further reading on this, I recommend G Hallevy, ‘AI vs. IP: Criminal liability for intellectual property offences of artificial intelligence entities’ (2020) Artificial Intelligence and the Law 222; M Simmler and N Markwalder, ‘Guilty Robots? – Rethinking the Nature of Culpability and Legal Personhood in an Age of Artificial Intelligence’ (2019) 30(1) Criminal Law Forum 1, 1–31.

[26] John Locke, Second Treatise of Government (Hackett Publishing Company, original ed, 1689), Section 27 (cited in) Takashi B Yamamoto 3.

[27] Copyright Act, ss 67–72.

[28] Locke, s 27.

[29] Ibid.

[30] Ibid.

[31] Takashi B Yamamoto 3.

[32] Locke, ss 26–28.

[33] Beyond the Fence; My Job (cited in) Takashi B Yamamoto 3.

[34] Locke, Chapter V.

[35] Ibid s 27.

[36] Ibid ss 27–28.

[37] Ibid.

[38] Ibid; SY Ravid and X Liu, ‘When Artificial Intelligence Systems Produce Inventions: An Alternative Model for Patent Law At the 3a Era’ (2017) 39 Cardozo Law Review 2215.

[39] J Haugeland, Artificial Intelligence: The Very Idea. (MIT Press, 1989) 5–23.

[40] Takashi B Yamamoto 6-8.

[41] For further reading see: Brett M Frischmann, Alain Marciano & Giovanni Battista Ramello, ‘Retrospectives: Tragedy of the Commons after 50 Years’ (2019) 33(4) Journal of Economic Perspectives 211, 211–221.

[42] PR Killen, ‘Incentive Theory’ (1981) 29 Nebraska Symposium on Motivation 169; Martin Senftleben and Laurens Buijtelaar, Robot Creativity: An Incentive-Based Neighboring Rights Approach (1 October 2020).

[43] Sony Corp. v. Universal City Studios Inc [1984] USSC 14; 464 U.S. 417 (1984), cited in Takashi B Yamamoto 6.

[44] Ibid, 5.

[45] Ibid 5–6.

[46] Ibid.

[47] T Ellingsen and M Johannesson, ‘Pride and Prejudice: The Human Side of Incentive Theory’ (2008) 98(3) American Economic Review 990.

[48] DC Parkes and MP Wellman, ‘Economic Reasoning and Artificial Intelligence’ (2015) 349(6245) Science 267.

[49] JM Fischer, The Metaphysics of Free Will, vol 1 (Blackwell, 1994) 406, 406–419.

[50] B Lu, ‘A Theory of “Authorship Transfer” and Its Application to the Context of Artificial Intelligence Creations’ (2021) 11(1) Queen Mary Journal of Intellectual Property 2, 5–24.

[51] Ibid 7–8.

[52] Reto M Hilty, Jorg Hoffman & Stefan Scheurer, ‘Intellectual Property Justification for Artificial Intelligence’ (Research Paper No 20-02, Max Planck Institute for Innovation and Competition, 11 February 2020), 1–24.

[53] Ibid 13.

[54] Abraham H Maslow, ‘A Theory of Human Motivation’ (1943) 50(4) Psychological Review 370, 375–382.

[55] Fischer, 406–419.

[56] J McCarthy, ‘Epistemological Problems of Artificial Intelligence’ (1981) Readings in Artificial Intelligence, 459, 460–465; J McCarthy and PJ Hayes, ‘Some Philosophical Problems from the Standpoint of Artificial Intelligence’ (1981) Readings in Artificial Intelligence 431, 437–450; A Ferrario, M Loi & E Viganò, 523–539.

[57] Changhoon Oh et al, ‘I Lead, You Help but Only with Enough Details: Understanding User Experience of Co-Creation with Artificial Intelligence’ (2018) 649 CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems 1, 113.

[58] For further reading see: M Sardelli, “Epistemic Injustice in the Age of AI” (2022) 22 Aporia 44; R Amaro, Machine Learning, Sociogeny, and the Substance of Race (Doctoral Dissertation, Goldsmiths, University of London, 2019).

[59] Telstra v Phone Directories.

[60] Georg Wilhelm Friedrich Hegel, Philosophy of Right (G Bell and Sons, original ed, 1821) cited in Samuel Duncan, ‘Hegel on Private Property: A Contextual Reading’ 55(3) Southern Journal of Philosophy 263.

[61] J McCarthy, ‘What has AI in Common with Philosophy?’ [1995] International Joint Conferences on Artificial Intelligence 2041, 2041–2044.

[62] Takashi B Yamamoto 2.

[63] The Next Rembrandt, New Rembrandt (online, 5 April 2016) <https://www.nextrembrandt.com/>.

[64] The Next Rembrandt (Directed by Next Rembrandt, ING in collaboration with Microsoft, TU Delft, Mauritshuis, Rembrandthuis, 2016) 1:11.

[65] New Rembrandt (n 62).

[66] I attempted to contact Rembrandthuis and Mauritshuis (partners in New Rembrandt’s creation) to ask questions on the method of data collection and the skewness and characteristics of the bell curves in order to infer the probable degree of human intervention in this process. Both declined to comment.

[67] Locke, Chapter V.

[68] Georg Wilhelm Friedrich Hegel, Elements of the Philosophy of Right (Cambridge University Press, English-translated ed, 1991) 158–181.

[69] Ibid.

[70] B Javiera Cáceres and F Muñoz, ‘Artificial Intelligence: A New Frontier for Intellectual Property Policymaking’ [2020] NTUT Journal of Intellectual Property Law and Management 108.

[71] E McGaughey, ‘Will Robots Automate Your Job Away? Full Employment, Basic Income and Economic Democracy’ (Working Paper No 496, Centre for Business Research, 10 August 2021); M Tarafdar, CM Beath and JW Ross, ‘Using AI to Enhance Business Operations’ (2019) 60(4) MIT Sloan Management Review 37, 39–44.

[72] For further reading see: A Korinek and JE Stiglitz, ‘Artificial Intelligence and its Implications for Income Distribution and Unemployment’ (2018) The Economics of Artificial Intelligence: An Agenda 349, 350–390; ibid.

[73] S Chesterman, ‘Artificial Intelligence and the Limits of Legal Personality’ (2020) 69(4) International and Comparative Law Quarterly 819.

[74] LB Solum, ‘Legal Personhood for Artificial Intelligences’ (2020) Machine Ethics and Robot Ethics 415.

[75] U.S. Copyright Office, Compendium of U.S. Copyright Office Practices (3rd ed, 2021), Chapter 1901.

[76] See M Humphrys, ‘How My program Passed the Turing Test’ [2009] Parsing the Turing Test 237.

[77] Romanticism is another lens through which to analyse the role of AI authorship rights. For further reading see: C Craig and I Kerr, ‘The Death of the AI Author’ (2020) 52 Ottawa Law Review 31.

[78] H van Genderen, ‘Do We Need New Legal Personhood in the Age of Robots and AI?’ [2018] Robotics, AI and the Future of Law 15, 15–55; J Chen and P Burgess, ‘The Boundaries of Legal Personhood: How Spontaneous Intelligence can Problematise Differences Between Humans, Artificial Intelligence, Companies and Animals’ (2019) 27(1) Artificial Intelligence and Law 73, 73–92.

[79] Ibid.

