Law, Technology and Humans

De Cooman, Jerome --- "When Art Becomes a Lemon: The Economics of Machine-Enabled Artworks and the Need for a Rule of Origin" [2023] LawTechHum 3; (2023) 5(1) Law, Technology and Humans 24


When Art Becomes a Lemon: The Economics of Machine-Enabled Artworks and the Need for a Rule of Origin

Jerome De Cooman[1]

University of Liege, Belgium

Abstract

Keywords: Machine-enabled artworks; rules of origin; lemons problem; authorship; large language model; GPT-3.

Introduction

The year 2021 marked a turning point in legal scholarship. A law journal published the first legal article written by artificial intelligence (AI).[2] This specific AI was an (autoregressive) large language model (LLM)[3] referred to as Generative Pre-Trained Transformer 3 (GPT-3) and was ironically tasked with arguing that humans will always be better than machines. Although that article did not meet academic standards and lacked ‘citation[s] to supporting sources’,[4] it was still ‘cogent and coherent’.[5] GPT-3’s human co-authors left open the question of whether law professors will one day ‘be able to push a few buttons and generate a well-written and well-researched article’.[6] If so, would the emergence of machine-enabled texts[7] render human scholarship obsolete?[8] With humour, they concluded that ‘it might be a good time for professors to check their pension benefits’.[9] The dazzling introduction of ChatGPT in November 2022 reignited these discussions.[10] The question should ring a bell for science fiction aficionados. In 1957, Isaac Asimov forecast exactly the same plot.[11] In a short story titled ‘Galley Slave’, a robot was used to proofread an academic manuscript. The human author chose to sabotage the robot. He explained eloquently:[12]

Your robot takes over the galleys. Soon it, or other robots, would take over the original writing, the searching of the sources, the checking and cross-checking of passages, perhaps even the deduction of conclusions. What would that leave the scholar? One thing only—the barren decisions concerning what orders to give the robot next! I want to save the future generations of the world of scholarship from such a final hell.

Asimov provided interesting research questions.[13] Is the future of academic scholarship doomed to ‘Galley Slave’s’ dystopic scenario? Or, more generally, will future artists—writers, painters, sculptors and others—merely be those who give orders to robots? Will they give up making art by themselves, without any AI assistance? The short answer is: probably not. The long answer requires an understanding that human-made art might well be valued more highly than machine-enabled art. However, to be properly valued, machine-enabled and human-made art must be distinguishable. They are not. Indistinguishability creates an information asymmetry. This leads to a ‘lemons problem’—that is, the erosion of the market for good-quality products (in this scenario, human-made products).

Against that background, this paper assesses the practicality of the solution proposed by Professor Nicolas Petit in light of international law and the rules of origin—that is, ‘the who, where, how or why behind an output’.[14] This paper argues that the lemons problem induces the need for a rule of origin labelling a work as either human-made or machine-enabled. The paper goes a step further and suggests that determining human or machine authorship may be dauntingly complex when the artwork owes its existence to both humans and machines. One solution may be to review how the country of origin is identified whenever products are not created in a single location and then apply, mutatis mutandis, to rules of authorship origin the solutions once identified in the context of geographical origins, that is, the so-called ‘substantial transformation test’. In the context of machine-enabled artwork, this test asks whether a human has edited the machine output and, if so, whether those edits constitute a substantial transformation of the work of art.

Examples of Large Language Models: The Generative Pre-Trained Transformer Family

GPT-3’s law article was not its first run. In 2020, GPT-3 wrote an article for The Guardian.[15] It was ‘sensationalist’ and ‘poor journalism’, but ‘this does not diminish at all the extraordinary effectiveness of the system’.[16] Also in 2020, Luciano Floridi and Massimo Chiriatti asked GPT-3 to continue Jane Austen’s Sanditon, which was left unfinished when she died, and Dante Alighieri’s Italian sonnet dedicated to Beatrice.[17] Other examples abound.[18] Using GPT-3 is, indeed, as simple as using Google Search.[19] For a long time, ‘the only way to get a computer to do something ... was to write down an algorithm explaining how in painstaking details’.[20] However, with GPT-3, the user has only to give an input (the prompt) consisting of at least a sentence in plain language, and the machine does the rest—just like the good old-fashioned art patronage in, for example, Renaissance Italy.[21] What GPT-3 does is astounding and seems worthy of its USD 12 million training run on the Microsoft Azure AI supercomputer.[22] It is therefore unsurprising that Forbes awarded GPT-3 the 2020 Forbes AI ‘Person’ of the Year.[23]
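
To make this concrete, the following minimal sketch shows all a user must supply: a plain-language prompt sent to a text-completion endpoint. It is an illustration only; the endpoint, model name (text-davinci-003) and response shape reflect the GPT-3-era OpenAI API and are assumptions that may no longer match the current interface.

```python
# A minimal sketch of prompt-based generation (GPT-3-era API assumed).
import os
import requests

API_URL = "https://api.openai.com/v1/completions"

def complete(prompt: str, max_tokens: int = 200) -> str:
    """Send a plain-language prompt; the machine does the rest."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "text-davinci-003",  # a GPT-3 family model (assumed)
            "prompt": prompt,
            "max_tokens": max_tokens,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"]

print(complete("Continue Jane Austen's unfinished novel Sanditon:"))
```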

However, with all due respect to Forbes, GPT-3 is not a person. It is a LLM developed by OpenAI that, more often than not,[24] generates coherent text.[25] With its 175 billion parameters, GPT-3 processes ‘more words than a human being will see in a lifetime—approximately 45 billion times more words’.[26] GPT-3 is an unsupervised natural language processing system.[27] This means GPT-3 is not pre-programmed to respond in a certain way whenever it encounters a certain prompt. On the contrary, GPT-3 ‘learns’ the appropriate response (hence, the name, ‘machine learning’) and determines it on its own by making inferences from unlabelled data (hence, the name, ‘unsupervised’).[28] GPT-3 looks for patterns in the prompt by identifying similarities between the prompt and its training data and then generates a text (the output).[29]
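The mechanics can be illustrated with a deliberately tiny model. The sketch below ‘trains’ a bigram frequency table on raw, unlabelled text and then generates words autoregressively, each word sampled given its predecessor. This is a toy for illustration only: GPT-3 replaces the frequency table with a 175-billion-parameter transformer, but the principle of inferring patterns from unlabelled data and predicting what comes next is the same.

```python
# A toy autoregressive 'language model': a bigram table learned from
# raw, unlabelled text. Illustrative only; GPT-3 replaces this table
# with a 175-billion-parameter transformer.
import random
from collections import Counter, defaultdict

corpus = "the heart is a spring and the nerves are strings".split()

# 'Training': count, for every word, which words tend to follow it.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(prompt: str, length: int = 8) -> str:
    """Autoregression: each new word is sampled given the previous one."""
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choices(list(candidates),
                                    weights=list(candidates.values()))[0])
    return " ".join(words)

print(generate("the"))
```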

GPT-3’s training data is internet-based. Around 60% of GPT-3’s training data comes from a filtered version of Common Crawl—that is, the petabytes of open-access web data archived since 2011 by a non-profit organisation of the same name.[30] The remaining 40% comes from diverse internet databases, including the English-language Wikipedia.[31] This training is problematic. A model is only as good as its training dataset,[32] even if its code is flawless. GPT-3 is no exception to the rule. Computer scientists emphasise that ‘internet-trained models have internet-scale biases’.[33] A non-negligible part of the internet is, indeed, inherently biased and hateful.[34] For instance, the GPT-3 training dataset includes Reddit, ‘where toxic language is commonplace’.[35] In addition, GPT-3’s training data is ‘still primarily English (93% by word count)’.[36] English-speaking countries are predominantly Western countries, and the data therefore carries their underlying values.[37] Such a biased training dataset highlights ‘the dangers of using technology that does not account for diversity and cultural associations’.[38] However, this is only one side of the problem. Even if the training dataset were made of the entire and all-language internet, ‘large sections of humanity would still not be represented’.[39] Internet access is, indeed, far from equitable for various reasons. For instance, internet accessibility is limited by ‘financial, written literacy, digital literacy, remote or rural geolocation, accessibility, disability, ... age, gender, income and educational attainment’.[40]

The upshot is this. Some of GPT-3’s outputs are misogynistic[41] and racist.[42] Similarly, GPT-3 replicates stereotypes against Muslims[43] and Jews.[44] GPT-3’s ‘perfidious’ writings are due to a mindless reproduction of the ‘built-in biases of the data that it mines to teach itself to write’.[45] This is likely to be the case ‘when the prompt it is fed strongly correlates with overtly sexist or racist language’.[46] Any automated model can certainly be a ‘very powerful tool when properly developed and implemented [but] if you put garbage in, you get garbage out’.[47] In this case, the idiom becomes ‘bias in, bias out’.[48] GPT-3’s extremely large training dataset is, therefore, both its main strength and ‘its Achilles heel’.[49] GPT-3 makes outrageous statements but ‘does so with correct grammar’.[50] This is a compelling example of ‘the dark side’ of AI.[51]

GPT-3’s developers are aware of these limitations. In January 2022, they released a new version called InstructGPT.[52] Its objective is to reduce mistakes and outrageous language. The upgrade changes GPT for better and for worse. When prompted to be respectful, InstructGPT generates 25% less toxic language than GPT-3. However, this efficiency is a double-edged sword: InstructGPT also produces far more offensive language than GPT-3 when prompted to produce toxic language.[53]

As a ‘sibling model to InstructGPT’, ChatGPT is the latest iteration of the GPT family.[54] Like previous GPT models, ChatGPT generates texts, this time in a conversational way. OpenAI has solved some of the challenges raised by GPT-3 (and partially solved by InstructGPT). The dialogue format encourages user feedback that allows ChatGPT to ‘admit its mistakes, challenge incorrect premises, and reject inappropriate requests’.[55] However, it is still not error-proof. OpenAI acknowledges some limitations, namely that ChatGPT still provides inaccurate or incorrect answers in a somewhat verbose style. More critically, it still sometimes exhibits biased behaviour or answers inappropriate requests despite its content filter.[56]

The flaws of the GPT family demonstrate that a LLM has to be supervised[57] and that its output must be edited.[58] As smart as a LLM seems to be, it still ‘needs a human babysitter at all times to tell it what kinds of things it shouldn’t say’.[59] LLM autonomy must not take precedence over human agency. A LLM is a ‘sociotechnical system’—that is, a system that combines the AI system and the human who monitors it and intervenes whenever appropriate.[60]

Algorithmic Massification: From Reproduction to Production

So far, so good. Given LLMs’ nature and limitations, there is still a need for a human in the loop. Does this suffice to exorcise the Luddite reactions that LLMs may provoke? Probably not. It has also been argued that machine-enabled work may threaten human creativity through ‘a massification of algorithmic creations and, as a result, a saturation of the range of possible creations ... as the creative capacity of artificial intelligence is vastly greater than human activity’.[61] The question is whether human authors may ‘still be able to compete’.[62]

The massification of culture is not something new. Radio raised similar questions. In a somewhat controversial essay, Theodor W. Adorno[63] argued that radio altered the ideal of music. Once a ‘living force’, music was transformed by wireless transmission into a ‘museum piece’.[64] He argued that the ‘society of commodities’ had led music to ‘become a means instead of an end, a fetish’.[65] Hence, music became an object of ‘standardization and mass production’ that ceased ‘to be a human force and [was] consumed like other consumers’ goods’.[66] In a nutshell, music became a commodity. The production of music took place ‘not primarily to satisfy human wants and needs, but for profit’.[67] If human needs were to be satisfied, this would only happen ‘incidentally’.[68] Therefore, the commodification of music leads to ‘commodity listening’, whose aim is ‘to dispense as far as possible with any effort on the part of the recipient’ or to suspend ‘all intellectual activity when dealing with music and its content’.[69] He ultimately asked whether ‘the mass distribution of music really means a rise of musical culture’.[70] His answer was unambiguous. As a ‘new technique of musical reproduction’, radio has led to a ‘retrogression of listening’.[71] What is on air is not music that dares but music that entertains. Adorno finally concluded, ‘entertainment may have its uses, but a recognition of radio music as such would shatter the listener’s artificially fostered belief that they are dealing with the world’s greatest music’.[72]

Justified or not, Adorno’s fear sets the scene for what follows.[73] Translated to LLMs, the question becomes whether machine-enabled artwork will transform art into a commodity that can be consumed like any other standardised consumer good. This prospect is more concerning than Adorno’s thesis. A LLM is not, in Adorno’s words, a ‘new technique of ... reproduction’.[74] On the contrary, a LLM creates something new so easily and at such a pace that it is hypothesised that the number ‘of texts available will skyrocket’.[75] This paves the way for mass production and, ultimately, raises the question of creativity. In this regard, the work of Margaret Boden is illuminating. Boden distinguished ‘psychological creativity’ and ‘historical creativity’.[76] With psychological creativity, newness is evaluated ‘with respect to the individual mind which has the idea’.[77] With historical creativity, an idea is new if it is ‘novel with respect to the whole of human history’.[78] When GPT-3 was able to continue Jane Austen’s unfinished Sanditon,[79] it displayed historical creativity. In computer science terms, GPT-3 is creative because the output is not a mere replication of what composed its training dataset.[80] That GPT-3’s output is based on what it has learnt does not mean that output is not novel. Since the Renaissance, ‘students were trained to work in the master’s style and succeeded to such a degree that it is sometimes hard for today’s art historians to distinguish the hand of a master from that of his [sic] most talented pupils’.[81]

However, novelty is only a proxy for creativity in Boden’s work. She also distinguished combinational, exploratory and transformational creativity.[82] Combinational creativity means the combination of ‘familiar ideas ... in unfamiliar ways’.[83] A textbook example is Thomas Hobbes’s Leviathan questioning ‘what is the Heart, but a Spring; and the Nerves, but so many Strings; and the Joynts, but so many Wheeles, giving motion to the whole body’.[84] Exploratory creativity ‘exploits some culturally valued way of thinking’.[85] This is the case for a Renaissance painter who explores the limits of the genre but remains within its ‘familiar stylistic family’.[86] On the contrary, transformational creativity is ‘triggered by frustration at the limits of the existing style’, which is then ‘radically altered (dropped, negated, complemented, substituted, added ...)’.[87] Thus, the generated outputs are ‘often initially unintelligible for they can’t be fully understood in terms of the previously accepted way of thinking’.[88] That rococo followed the baroque style is one example of transformational creativity.

According to Boden, AI can display combinational, exploratory and transformational creativity.[89] But it is exploratory creativity that is best suited for AI. GPT-3 proves that. The generated output depends on the prompt. When GPT-3 was prompted with Dante Alighieri’s Italian sonnet,[90] the machine-enabled text was à la Dante.[91] Further, GPT-3’s output may well be confused with Dante’s original sonnet. GPT-3 writes better than many people and passes a Turing test with flying colours.[92]

The Issue of Indistinguishability: The Lemons Problem

The indistinguishability of human-made and machine-enabled writings has a major downside. That indistinguishability creates what economists call a ‘lemons problem’.[93] The terms of the issue are as follows. Assume that there are two types of books on the book market—that is, human-made books and machine-enabled books. Indistinguishability implies an asymmetry in the available information. The publisher knows whether the writing sold is human-made or machine-enabled. The reader does not. Floridi and Chiriatti argued that ‘readers and consumers of texts will have to get used to not knowing whether the source is artificial or human’.[94] They believe readers ‘will not notice, or even mind’.[95] This assumption is controversial. The lemons problem explains why they should mind. Although the valuation of machine-enabled artworks is still terra incognita, preliminary studies show that, all other things being equal, humans’ works ‘are evaluated significantly more highly than those perceived as being made by AI’.[96] This does not mean that machine-enabled works will be of lesser ‘quality’ than human-made ones. The above examples prove otherwise. On the contrary, what is hypothesised here is that the ‘pecuniary value’ of machine-enabled works will be lower than human-made ones and that the more machine-enabled art exists, the more human-made art will be valued.

Assuming this scenario is correct, a machine-enabled book would, therefore, be referred to as a ‘lemon’—that is, a low-quality product in United States (US) slang. With symmetric information, the price of a machine-enabled book (p1) should be lower than the price of a human-made book (p2). But information is asymmetrical. As a result, human-made and machine-enabled books ‘must still sell at the same price—since it is impossible for a buyer to tell the difference’.[97]

Therefore, let p be the book price (where p = p1 = p2), q the probability the book is human-made and (1 − q) the probability the book is machine-enabled (where 0 ≤ q ≤ 1). Assuming the reader is risk neutral, she will price a particular book based on the probability that the book is human-made (i.e., given its expected quality). In turn, the reader will adapt her willingness to pay to internalise the risk of being sold ‘low price’ machine-enabled products rather than ‘high price’ human-made ones.[98] This means the reader will only be willing to pay (p × q). Because q is a probability (0 ≤ q ≤ 1), the reader’s willingness to pay will be lower than the book price ((p × q) < p). Professor Nicolas Petit illustrated the problem.[99] Assuming, on the one hand, that human-made books are worth USD 20 (p2) and machine-enabled ones USD 10 (p1) and, on the other hand, that a buyer believes there is a 50/50 chance that a book is human-made (q = 0.5), then that buyer will internalise half (q = 0.5) the difference between the price of a human-made book and a machine-enabled one (p2 − p1) in her willingness to pay. The market equilibrium price is USD 15 (p2 − (p2 − p1) × q = 20 − (20 − 10) × 0.5 = 15). As a result, while no publishers of human-made books will come to this market, suppliers of machine-enabled ones will ‘make a killing’.[100]
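
Petit’s arithmetic can be checked in a few lines. The sketch below simply reproduces the figures used in the illustration above (USD 20, USD 10 and q = 0.5); the variable names are illustrative.

```python
# The lemons arithmetic from the paragraph above.
p_human = 20.0    # p2: price of a human-made book (USD)
p_machine = 10.0  # p1: price of a machine-enabled book (USD)
q = 0.5           # probability that an unlabelled book is human-made

# A risk-neutral buyer pays the expected value of an unlabelled book,
# which equals p2 - (p2 - p1) * q.
willingness_to_pay = q * p_human + (1 - q) * p_machine
assert willingness_to_pay == p_human - (p_human - p_machine) * q
print(willingness_to_pay)  # 15.0: below p2, so human-made sellers exit
```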

Assuming a market ‘in which goods are sold honestly or dishonestly’—that is, in which the reader’s problem is to identify human-made writing despite asymmetrical information—‘dishonest dealings tend to drive honest dealings out of the market’.[101] This echoes one necessary condition for the emergence of a lemons problem. Besides asymmetry of information, an incentive must exist for the publisher to sell a machine-enabled product as human-made. Edgar Allan Poe used ‘The Imp of the Perverse’ to explain why people do a thing even though they should not.[102] As long as machine-enabled and human-made books are indistinguishable, the Imp’s case for selling machine-enabled books as human-made will grow ever more convincing. Indeed, indistinguishability incentivises misleading statements about the (lack of) use of an AI system in the creation of the writing.

There are already illustrations of this. In music generation, a label that invested in a machine that composed music ‘did not want to disclose that its songs had in fact been written by a machine and not by human musicians’.[103] In addition, the decision of the US Copyright Office that parts of the graphic novel Zarya of the Dawn generated by an AI system (Midjourney) are not protected by copyright incentivises its user not to disclose such use.[104] As hinted above, selling a machine-enabled book without specifying that it is not human-made (or, worse, dishonestly selling it as human-made) could drive out actual human-made writing. This will be the case if the reader’s willingness to pay is lower than the production cost of a human-made book. If so, human-made books will cease to be profitable. By contrast, it should be borne in mind that the production cost of a machine-enabled book is ‘negligible’.[105] Therefore, it is fair to assume that the production cost of machine-enabled books will be lower than the production cost of human-made books. As such, the break-even point for a machine-enabled book would be lower than for a human-made book. The upshot is this. Human-made books will be long gone before machine-enabled books cease to be profitable. This is the real cost of indistinguishability whenever there is uncertainty about the origin of the product sold.

Solving the Indistinguishability: Rules of Origin

Substantial Transformation Test

A solution to the lemons problem lies in a rule of origin.[106] As defined above, the rules of origin concern the identification of the provenance of goods or services. Classically, rules of origin are related to trade agreements that grant members access to a domestic market at a preferential tariff.[107] The origin of the good determines whether a tariff cut applies. As such, it has been argued that rules of origin are ‘barriers to trade’[108] that constitute a ‘hidden protection’ of domestic markets.[109] However, in the context of human-made versus machine-enabled products, what matters is not the geographical origin but the authorship origin.

Just like geographical origin, authorship is ‘ultimately a question of fact’.[110] However, determining human or machine authorship may be dauntingly complex when the product owes its existence to both humans and machines.[111] One solution may be to review how the country of origin is identified whenever products ‘are not created in a single location’ and then to apply, mutatis mutandis, to rules of authorship origin the solutions once identified in the context of geographical origins.[112] In this regard, European Union (EU) rules of (geographical) origin state that ‘goods the production of which involves more than one country or territory shall be deemed to originate in the country or territory where they underwent their last, substantial, economically-justified processing or working ... resulting in the manufacture of a new product or representing an important stage of manufacture’.[113]

It was up to the Court of Justice of the EU (CJEU)[114] to establish this ‘substantial transformation test’.[115] In Gesellschaft für Überseehandel mbH v Handelskammer Hamburg, the CJEU held that a process or an operation is substantial if ‘the product resulting therefrom has its own properties and a composition of its own, which it did not possess before that process or operation’.[116] More concretely, the CJEU held in Yoshida Nederland BV v Kamer van Koophandel en Fabrieken voor Friesland that an assembly operation is substantial when it constitutes the decisive stage of production during which the purpose of the product is achieved and during which that product is given its specific qualitative properties.[117] The CJEU later explained in Brothers International GmbH v Hauptzollamt Gießen that ‘in practice the substantial transformation criterion can be expressed by the ad valorem percentage rule, where either the percentage value of the materials utilized or the percentage of the value added reaches a specified level’.[118] This means the added value is a legal, objective and clear criterion for qualifying a transformation as substantial.[119]

Returning to LLMs, the question becomes whether a human edited the machine output and, if so, whether those edits constitute a substantial transformation of the text. The occurrence of human editing is not enough to qualify the work as human-made. As ‘computers today, and for proximate tomorrows, cannot themselves formulate creative plans or “conceptions” to inform their execution of expressive works’, there will always be a human in the creative loop.[120] In essence, an AI system is a mere ‘piece of chattel’[121] that, according to Ada Lovelace, ‘has no pretensions whatever to originate anything’ and that ‘can do (only) whatever we know how to order it to perform’.[122] Therefore, the weight of the human edits still needs to be assessed. Based on the CJEU jurisprudence, one way to do so would be to compare the qualitative properties of the text before and after the editing process—that is, the added value of the editing. In this regard, it should be borne in mind that edits consisting of sorting, classifying or assembling a LLM’s outputs are unlikely to be considered substantial.[123]
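
For illustration only, the ‘added value of the editing’ could be operationalised as the share of the text that the human editor changed. The sketch below does so with a crude character-level similarity ratio; the 0.5 threshold is a hypothetical figure chosen for the example, not a legal standard, and an actual assessment under the CJEU case law would be qualitative rather than mechanical.

```python
# A hypothetical, purely quantitative proxy for the substantial
# transformation test applied to text.
from difflib import SequenceMatcher

def is_substantial_transformation(machine_output: str,
                                  edited_text: str,
                                  threshold: float = 0.5) -> bool:
    """Return True if the share of changed text exceeds the threshold."""
    similarity = SequenceMatcher(None, machine_output, edited_text).ratio()
    added_value = 1.0 - similarity  # crude analogue of the ad valorem rule
    return added_value > threshold

draft = "The heart is a spring and the nerves are strings."
light_edit = "The heart is a spring, and the nerves are strings."
print(is_substantial_transformation(draft, light_edit))  # False: cosmetic
```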

The analogy with the rules of geographical origin has limitations. Classically, rules of origin are enshrined in binary logic. Either a product originates from a country, or it does not.[124] This may be inappropriate for machine-enabled work given the hybridisation of human creativity (or, at least, some degree of creativity) and machine computation. Therefore, the rules of origin might take the form of a ladder. At one end of the spectrum would be fully machine-enabled outputs. As hinted above, these do not exist yet. At the other end would be outputs that are fully human-made. This is the case, for instance, of Vincent van Gogh’s Still Life with Lemons on a Plate.[125] This is more generally the case of all artwork solely created by a human author. Between these two ends of the spectrum would lie grey cases—that is, outputs that owe their existence to both humans and machines. The level of granularity of this category will depend on the required degree of human intervention. Therefore, grey cases will be a threefold category—that is, one that distinguishes low, medium and high human input. One cannot treat a LLM’s outputs equivalently when given either a one-sentence prompt or very concrete and plentiful instructions. The intermediate category of medium human input may then be subdivided again and again. Whatever the degree of granularity selected, the objective is to ‘draw clear boundaries between what is what, e.g., in the same way as a restored, ancient vase shows clearly and explicitly where the intervention occurs’.[126] Table 1 illustrates the argument.

Table 1: Staggered rules of origin

Fully machine-enabled: does not exist yet.
Grey cases:
- Low human input: e.g., a large language model given a one-sentence prompt.
- Medium human input, itself subdivided into low (e.g., the prompt is made of several sentences), medium and high (e.g., the prompt is made of a relatively large text), etc.
- High human input: e.g., a large language model given concrete and plentiful instructions.
Fully human-made: e.g., Vincent van Gogh’s Still Life with Lemons on a Plate.

Bottom-Up and Top-Down Rules of Origin

Such a rule of origin can be achieved by human authors themselves, at least for some artistic productions. This argument draws on two video game practices.[127] First, video game players who want to establish a record must prove they hit a high score. To do so, they record themselves playing the game to prove they were truly behind the joystick—and, incidentally, that they did not cheat.[128] The second practice is known as speedrunning—that is, ‘going through a game from beginning to end as fast as possible’.[129] There are two types of speedrunning: finesse runs leave the narrative of the game intact, while deconstructive runs allow the reconfiguration of the game using glitches.[130] In both cases, the performance is recorded to establish how quickly the player was able to complete the game and (in the case of finesse runs) to prove no glitches were used.[131] Similarly, the artistic community could create its own rules of origin. Just as video gamers record themselves while playing to prove their achievements, artists could record themselves while creating their work and, thus, prove its human origin. Indeed, time-lapse videos of sculptors, painters or blacksmiths sculpting, painting or forging already abound on the internet. With the development of machine-enabled artwork, this practice should become more widespread. In addition, if the assumption that human-made art is more valuable than machine-enabled art is correct, then human artists would have a strong incentive to record themselves.

The parallel with video games goes further. During the 1980s, a ‘video game aficionado’[132] founded Twin Galaxies to provide a ‘comprehensive authentication system that can evaluate any player’s video game performance and verify legitimacy (elimination of cheating / manipulation / misrepresentation)’.[133] This organisation standardised scorekeeping and high score authentication.[134] Just as a member of the video game community created the platform for verifying game recordings, a member of the art community could quite conceivably develop an equivalent platform for authenticating the origin of works of art.

However, this solution might not be suitable for book writers. Very little would be proved by filming them hunkered over their keyboard. A second-best solution could be the one proposed by OpenAI itself, namely, to ‘indicate that the content is AI-generated in a way no user could reasonably miss or misunderstand’.[135] In the context of academic publishing, editors-in-chief of Nature and Science, as well as publisher Taylor & Francis, have decided that a LLM cannot be listed as an author, that its use should be duly noted in the acknowledgement section and that the ‘use of AI-generated text without proper citation could be considered plagiarism’.[136] What they are trying to achieve is a rule of origin.

This type of bottom-up, actor-based rule of origin could be easily strengthened by a top-down regulation. The good news is that there is already an embryonic rule of origin in EU law. Article 52(1) of the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (AI Act)[137] states that ‘AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed they are interacting with an AI system’. In 2017, AI Professor Toby Walsh already argued for the introduction of such a rule, stating that ‘an autonomous system should be designed so that it is unlikely to be mistaken for anything besides an autonomous system, and should identify itself at the start of any interaction with another agent’.[138] The AI Act goes one step further than this ‘law of identification’.[139] It is not only the AI system interacting with a natural person that has to be labelled as such, but also the output this AI system produces. Pursuant to Article 52(3) of the AI Act, ‘users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (“deep fake”), shall disclose that the content has been artificially generated or manipulated’.[140] It would perhaps be useful to extend this provision to all machine-enabled artworks.

Given the incentive to cheat hinted at above, the enforcement of such a user-focused rule of origin will be arduous. One solution is to support that requirement with technical measures, such as designing the AI system in such a way that it watermarks the generated output.[141] Watermarking the output means ‘embedding signals into generated text that are invisible to humans but algorithmically detectable from a short span of tokens’.[142] Technologically savvy users might find a way to remove these watermarks, but average users are unlikely to be able to do so.[143] The enforcement of a rule of origin should therefore be complemented by algorithmic screening of allegedly human-made art to detect whether it has been machine-generated, mitigating the risk of users bypassing watermarks.[144]
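
The principle behind such a watermark can be sketched in miniature. Following the ‘green list’ idea described by Kirchenbauer et al., the toy below pseudo-randomly marks tokens as ‘green’, seeded by the preceding token; a generator that systematically favours green tokens leaves a statistical trace that a detector can count, while ordinary human text scores near chance. Real schemes bias the model’s logits during decoding; this simplified sketch shows only the detection side.

```python
# A toy 'green list' watermark detector (after Kirchenbauer et al.).
# A watermarking generator would favour green tokens; human text should
# score near 0.5, watermarked text well above it.
import hashlib

def is_green(previous_token: str, token: str) -> bool:
    """Pseudo-randomly mark a token green, seeded by its predecessor."""
    digest = hashlib.sha256((previous_token + " " + token).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    tokens = text.split()
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, t) for p, t in pairs) / len(pairs)

print(green_fraction("an unmarked human sentence shows no systematic bias"))
```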

Finally, the rule of origin has a major benefit. It does not ban LLMs from the market, nor does it subject them to a disproportionate regulatory burden.[145] A rule of origin brings transparency and allows for the parallel development of human-made and machine-enabled books while ensuring they compete ‘on the merits’ (i.e., on their inherent value) by erasing asymmetrical information. A rule of origin is a proportionate response to those, like Adorno, who fear art commodification, without preventing the use of LLMs by those who, like Floridi and Chiriatti, do not care whether the text has a human provenance as long as it is of high quality.

Conclusion

What does all the above lead to? First, given LLMs’ nature and limitations, it is possible to answer Asimov. LLMs will not ‘take over the original writing, the searching of the sources, the checking and cross-checking of passages, perhaps even the deduction of conclusions’.[146] Nor will LLMs leave the scholar with only ‘the barren decisions concerning what orders to give the robot next’.[147] A LLM is not an automated scholar who will retire esteemed professors but ‘an indefatigable shadow-writer with the ability to access, comprehend and uniquely synthesise humanity’s best thoughts in mere seconds’.[148] However, a LLM does so blindly, replicating the biases it has learnt from vast and extensive datasets.

Second, despite a LLM’s potential for art standardisation, human authors may still be able to compete. This paper has hypothesised that human-made art is valued more highly than machine-enabled art. The more machine-enabled art there is, the more human-made art is valued. However, human-made and machine-enabled art are indistinguishable. This creates a lemons problem. Asymmetrical information threatens the profitability of human-made art. A rule of origin constitutes a simple but effective solution to this issue. Only this will prevent art from becoming a lemon.

Bibliography

Abid, Abubakar, Maheen Farooqi and James Zou. “Persistent Anti-Muslim Bias in Large Language Models.” arXiv (2021). https://doi.org/10.48550/arXiv.2101.05783

Abrantes-Metz, Rosa M. and D. Daniel Sokol. “The Lessons from Libor for Detection and Deterrence of Cartel Wrongdoing.” Harvard Business Law Review Online 3 (2012): 10-16.

Adorno, Theodor W. “A Social Critique of Radio Music.” Kenyon Review 7 (1945): 208-215. https://kenyonreview.org/kr-online-issue/weekend-reads/t-w-adorno-656342/

Akerlof, George A. “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism.” Quarterly Journal of Economics 84, no 3 (1970): 488-500. https://doi.org/10.2307/1879431

Alarie, Benjamin, Arthur Cockfield and GPT-3. “Will Machines Replace Us? Machine-Authored Texts and the Future of Scholarship.” Law, Technology and Humans 3, no 2 (2021): 5-11. https://doi.org/10.5204/lthj.2089

Asimov, Isaac. “Galley Slave.” Galaxy Science Fiction, December 1957. Reprinted in The Complete Robot, by Isaac Asimov, 315-348. New York: Doubleday & Company, 1982.

Augier, Patricia, Michael Gasiorek, Charles Lai Tong, Philippe Martin and Andrea Prat. “The Impact of Rules of Origin on Trade Flows.” Economic Policy 20, no 43 (2005): 567-624. https://doi.org/10.1111/j.1468-0327.2005.00146.x

Austen, Jane. Sanditon. Tschagguns, Austria: Ilias Thiesseas Publishing, 2020. First published 1817.

Barnabé, Fanny. “The Transformative Power of Speedrun: Deconstruction and Recodification of Pokémon Games’ Communicative Structures.” In Japan’s Contemporary Media Culture between Local and Global: Content, Practice and Theory, edited by Martin Roth, Hiroshi Yoshida and Martin Picard, 251-280. Berlin: CrossAsia-eBooks, 2021. https://dx.doi.org/10.11588/crossasia.971.c12887

Bender, Emily M., Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, Canada, 2021. https://dl.acm.org/doi/10.1145/3442188.3445922

Bensinger, Greg. “ChatGPT Launches Boom in AI-Written E-Books on Amazon.” Reuters, February 21, 2023. https://www.reuters.com/technology/chatgpt-launches-boom-ai-written-e-books-amazon-2023-02-21/

Boden, Margaret A. Artificial Intelligence: A Very Short Introduction. Oxford: Oxford University Press, 2018.

Boden, Margaret A. The Creative Mind: Myths and Mechanisms. London: Routledge, 2004.

Bonadio, Enrico and Luke McDonagh. “Artificial Intelligence as Producer and Consumer of Copyright Works: Evaluating the Consequences of Algorithmic Creativity.” Intellectual Property Quarterly 2 (2020): 112-137.

Bridy, Annemarie. “The Evolution of Authorship: Work Made by Code.” Columbia Journal of Law & The Arts 39, no 3 (2016): 395-401. https://doi.org/10.7916/jla.v39i3.2078

Brooks, Tim, Aleksander Holynski and Alexei A. Efros. “InstructPix2Pix: Learning to Follow Image Editing Instructions.” arXiv (2022). https://doi.org/10.48550/arXiv.2211.09800

Brown, Nina I. “Artificial Authors: A Case for Copyright in Computer-Generated Works.” Science and Technology Law Review 20, no 1 (2019): 1-41. https://doi.org/10.7916/stlr.v20i1.4766

Brown, Nina I. “Response to Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation.” USPTO, December 6, 2019. https://www.uspto.gov/sites/default/files/documents/Nina-Iacono-Brown_RFC-84-FR-58141.pdf

Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever and Dario Amodei. “Language Models are Few-Shot Learners.” arXiv (2020). https://doi.org/10.48550/arXiv.2005.14165

Cadot, Olivier and Jaime de Melo. “Why OECD Countries Should Reform Rules of Origin.” World Bank Research Observer 23, no 1 (2008): 77-105. https://doi.org/10.1093/wbro/lkm010

Cai, Kenrick. “Forbes A.I. Awards 2020: Meet GPT-3, The Computer Program that Can Write an Op-Ed.” Forbes, January 4, 2021. https://www.forbes.com/sites/kenrickcai/2021/01/04/forbes-ai-awards-2020-meet-gpt-3-the-computer-program-that-can-write-an-op-ed/?sh=37ec97d993a7

Celebi, M. Emre and Kemal Aydin. Unsupervised Learning Algorithms. Cham: Springer, 2016.

Chan, Anastasia. “GPT-3 and InstructGPT: Technological Dystopianism, Utopianism, and ‘Contextual’ Perspectives in AI Ethics and Industry.” AI and Ethics 3 (2023): 53-64. https://doi.org/10.1007/s43681-022-00148-6

Cheong, Marc, Kobi Leins and Simon Coghlan. “Computer Science Communities: Who is Speaking, and Who is Listening to the Women? Using an Ethics of Care to Promote Diverse Voices.” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, Canada, 2021. https://dl.acm.org/doi/10.1145/3442188.3445874

Chiu, Ke-Li, Annie Collins and Rohan Alexander. “Detecting Hate Speech with GPT-3.” arXiv (2022). https://doi.org/10.48550/arXiv.2103.12407

Cohn, Gabe. “AI Art at Christie’s Sells for $432,500.” The New York Times, October 25, 2018. https://www.nytimes.com/2018/10/25/arts/design/ai-art-sold-christies.html

Cruz, Sherley. “Coding for Cultural Competency: Expanding Access to Justice with Technology.” Tennessee Law Review 86, no 347 (2019): 347-402.

Cyphert, Amy B. “A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law.” UC Davis Law Review 55, no 1 (2021): 401-443.

da Costa, Kleyton. “The Rise of Large Language Models: Galactica, ChatGPT, and Bard.” Holistic AI, February 17, 2023. https://www.holisticai.com/blog/language-models-galactica-chatgpt-bard

Dante Alighieri. Vita Nova. Firenze, 1292-1294. http://www.letteraturaitaliana.net/pdf/Volume_1/t11.pdf

Daws, Ryan. “Medical Chatbot Using OpenAI’s GPT-3 Told a Fake Patient to Kill Themselves.” AI News, October 28, 2020. https://www.artificialintelligence-news.com/2020/10/28/medical-chatbot-openai-gpt3-patient-kill-themselves/

de Beauvoir, Simone. Le Deuxième Sexe. Paris: Gallimard, 1949.

Dehouche, Nassim. “Plagiarism in the Age of Massive Generative Pre-Trained Transformers (GPT-3).” Ethics in Science and Environmental Politics 21 (2021): 17-23. https://doi.org/10.3354/esep00195

Domingos, Pedro. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. New York: Basic Books, 2015.

Elkins, Katherine and Jon Chun. “Can GPT-3 Pass a Writer’s Turing Test?” Journal of Cultural Analytics 5, no 2 (2020): 1-16. https://doi.org/10.22148/001c.17212

Elliott, Anthony. “The Complex Systems of AI.” In The Routledge Social Science Handbook of AI, edited by Anthony Elliott, 3-16. London: Routledge, 2021.

Estevadeordal, Antoni, Jeremy Harris and Kati Suominen. “Harmonizing Preferential Rules of Origin Regimes Around the World.” In Multilateralizing Regionalism: Challenges for the Global Trading System, edited by Richard Baldwin and Patrick Low, 262-363. Cambridge: Cambridge University Press, 2009.

Falvey, Rod and Geoff Reed. “Economic Effects of Rules of Origin.” Weltwirtschaftliches Archiv 134, no 2 (1998): 209-229. https://doi.org/10.1007/BF02708093

Falvey, Rod and Geoff Reed. “Rules of Origin as Commercial Policy Instruments.” International Economic Review 43, no 2 (2002): 393-407. https://doi.org/10.1111/1468-2354.t01-1-00020

Floridi, Luciano and Massimo Chiriatti. “GPT-3: Its Nature, Scope, Limits, and Consequences.” Minds and Machines 30, no 4 (2020): 681-694. https://doi.org/10.1007/s11023-020-09548-1

Frankish, Keith and William M. Ramsey, The Cambridge Handbook of Artificial Intelligence. Cambridge: Cambridge University Press, 2014.

Franklin, Stan. “History, Motivations, and Core Themes.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M. Ramsey, 15-33. Cambridge: Cambridge University Press, 2014.

Galfand, Edward C. “Heeding the Call for a Predictable Rule of Origin.” University of Pennsylvania Journal of International Law 11, no 2 (1989): 469-493.

Gillotte, Jessica L. “Copyright Infringement in AI-Generated Artworks.” UC Davis Law Review 53, no 5 (2020): 2655-2692.

Ginsburg, Jane C. and Luke Ali Budiardjo. “Authors and Machines.” Berkeley Technology Law Journal 34, no 2 (2019): 343-448. https://doi.org/10.15779/Z38SF2MC24

Gordon, Seth, dir. The King of Kong: A Fistful of Quarters. Picturehouse. Released August 17, 2007.

GPT-3. “A Robot Wrote this Entire Article. Are You Scared Yet, Human?” The Guardian, September 8, 2020. https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3

Grinbaum, Alexei and Laurynas Adomaitis. “The Ethical Need for Watermarks in Machine-Generated Language.” arXiv (2022). https://doi.org/10.48550/arXiv.2209.03118

Gu, Chenxi, Chengsong Huang, Xiaoqing Zheng, Kai-Wei Chang and Cho-Jui Hsieh. “Watermarking Pre-Trained Language Models with Backdooring.” arXiv (2022). https://doi.org/10.48550/arXiv.2210.07543

Guo, Biyang, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue and Yupeng Wu. “How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection.” arXiv (2023). https://doi.org/10.48550/arXiv.2301.07597

Hacker, Philipp, Andreas Engel and Marco Mauer. “Regulating ChatGPT and Other Large Generative AI Models.” arXiv (2023). https://doi.org/10.48550/arXiv.2302.02337

Han, Sam. “AI, Culture Industries and Entertainment.” In The Routledge Social Science Handbook of AI, edited by Anthony Elliott, 295-312. London: Routledge, 2021.

Hemmingsen, Michael. “Code is Law: Subversion and Collective Knowledge in the Ethos of Video Game Speedrunning.” Sport, Ethics and Philosophy 15, no 3 (2021): 435-460. https://doi.org/10.1080/17511321.2020.1796773

Hilliard, Airlie and Ayesha Gulley. “We Asked ChatGPT to Write an Article About Ethical AI, Here’s What It Said.” Holistic AI, December 12, 2022. https://www.holisticai.com/blog/ethical-ai-chat-gpt

Hobbes, Thomas. Hobbes’s Leviathan: Reprinted from the Edition of 1651. Oxford: Clarendon Press, 1909. First published 1651 by Andrew Crooke (London).

Inama, Stefano. Rules of Origin in International Trade. Cambridge: Cambridge University Press, 2009.

Italian Renaissance Learning Resources and National Gallery of Art. “Training and Practice.” The Making of an Artist. n.d. http://www.italianrenaissanceresources.com/units/unit-3/essays/training-and-practice/

Johnson, Deborah G. and Mario Verdicchio. “Reframing AI Discourse.” Minds and Machines 27, no 4 (2017): 575-590. https://doi.org/10.1007/s11023-017-9417-6

Johnson, Rebecca L., Giada Pistilli, Natalia Menédez-González, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene and Donald Jay Bertulfo. “The Ghost in the Machine Has an American Accent: Value Conflict in GPT-3.” arXiv (2022). https://doi.org/10.48550/arXiv.2203.07785

Kaplan, Frédéric. “Who is Afraid of the Humanoid? Investigating Cultural Differences in the Acceptance of Robots.” International Journal of Humanoid Robotics 1, no 3 (2004): 465-480. https://doi.org/10.1142/S0219843604000289

Kirchenbauer, John, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers and Tom Goldstein. “A Watermark for Large Language Models.” arXiv (2023). https://doi.org/10.48550/arXiv.2301.10226

Klingemann, Mario. “Memories of Passersby I.” https://www.artsy.net/artwork/mario-klingemann-memories-of-passersby-i-companion-version-1

Kocurek, Carly A. Coin-Operated Americans: Rebooting Boyhood at the Video Game Arcade. Minneapolis: University of Minnesota Press, 2015.

Korngiebel, Diane M. and Sean D. Mooney. “Considering the Possibilities and Pitfalls of Generative Pre-Trained Transformer 3 (GPT-3) in Healthcare Delivery.” npj Digital Medicine 4 (2021): 93. https://doi.org/10.1038/s41746-021-00464-x

Krishna, Kala. “Understanding Rules of Origin.” NBER Working Paper Series 11150, National Bureau of Economic Research, Cambridge, MA, February 2005. https://www.nber.org/system/files/working_papers/w11150/w11150.pdf

LaGrandeur, Kevin. “How Safe is Our Reliance on AI, and Should We Regulate It?” AI and Ethics 1, no 2 (2021): 93-99. https://doi.org/10.1007/s43681-020-00010-7

Lehr, David and Paul Ohm. “Playing with the Data: What Legal Scholars Should Learn About Machine Learning.” UC Davis Law Review 51 (2017): 653-717.

Levy, David. Robot Unlimited: Life in a Virtual Age. New York: A K Peters, 2005.

Lucy, Li and David Bamman. “Gender and Representation Bias in GPT-3 Generated Stories.” In Proceedings of the Third Workshop on Narrative Understanding, 48-55. Virtual: Association for Computational Linguistics, 2021. https://aclanthology.org/2021.nuse-1.5/

Mayson, Sandra G. “Bias in, Bias Out.” Yale Law Journal 128, no 8 (2019): 2122-2473.

McGuffie, Kris and Alex Newhouse. “The Radicalization Risks of GPT-3 and Advanced Neural Language Models.” arXiv (2020). https://doi.org/10.48550/arXiv.2009.06807

Medler, Ben. “Generations of Game Analytics, Achievements and High Scores.” Eludamos: Journal for Computer Game Culture 3, no 2 (2009): 177-194. https://doi.org/10.7557/23.6004

Menotti, Gabriel. “Videorec as Gameplay: Recording Playthroughs and Video Game Engagement.” The Italian Journal of Game Studies 2014, no 3 (2014): 81-92.

Mikalef, Patrick, Kieran Conboy, Jenny Eriksson Lundström and Aleš Popovič. “Thinking Responsibly about Responsible AI and ‘The Dark Side’ of AI.” European Journal of Information Systems 31, no 3 (2022): 257-268. https://doi.org/10.1080/0960085X.2022.2026621

Miller, Arthur R. “Copyright Protection for Computer Programs, Databases, and Computer-Generated Works: Is Anything New Since CONTU?” Harvard Law Review 106, no 5 (1993): 977-1073. https://doi.org/10.2307/1341682

Mitchell, Eric, Yoonho Lee, Alexander Khazatsky, Christopher D. Manning and Chelsea Finn. “DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature.” arXiv (2023). https://doi.org/10.48550/arXiv.2301.11305

Murphy, Kevin P. Machine Learning: A Probabilistic Perspective. Cambridge, MA: The MIT Press, 2012.

Nugent, Ciara. “The Painter Behind These Artworks Is an AI Program. Do They Still Count as Art?” Time, August 20, 2018. https://time.com/5357221/obvious-artificial-intelligence-art/

Obvious. “Portrait of Edmond Belamy.” https://obvious-art.com/portfolio/edmond-de-belamy/

Obvious. “Le Comte de Belamy.” https://obvious-art.com/portfolio/le-comte-de-belamy/

Obvious. “La Baronne de Belamy.” https://obvious-art.com/portfolio/la-baronne-de-belamy/

OpenAI. “Aligning Language Models to Follow Instructions.” OpenAI, January 27, 2022. https://openai.com/blog/instruction-following/

OpenAI. “Introducing ChatGPT.” OpenAI. Last modified November 30, 2022. https://openai.com/blog/chatgpt/

OpenAI. “Sharing & Publication Policy.” OpenAI. Last modified November 14, 2022. https://openai.com/api/policies/sharing-publication/

Osha, Jonathan P., Ari Laakkonen, Anne Marie Verschuur, Guillaume Henry, Ralph Nack and Lena Shen. 2019 Study Question: Copyright in Artificially Generated Works: Summary Report (International Association for the Protection of Intellectual Property, July 7, 2019).

Ouyang, Long, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike and Ryan Lowe. “Training Language Models to Follow Instructions with Human Feedback.” arXiv (2022). https://doi.org/10.48550/arXiv.2203.02155

Petit, Nicolas. “Artificial Intelligence, Rules of Origins and the Lemons Problem.” European AI Alliance, August 23, 2018. https://ec.europa.eu/futurium/en/blog/artificial-intelligence-rules-origins-and-lemons-problem.html

Pila, Justine. “The Authorial Works Protectable by Copyright.” In The Routledge Handbook of EU Copyright Law, edited by Eleonora Rosati, 63-81. London: Routledge, 2021.

Poe, Edgar Allan. “The Imp of the Perverse.” Graham’s Magazine 28, no 1 (1845): 1-3.

Ragot, Martin, Nicolas Martin and Salomé Cojean. “AI-Generated vs. Human Artworks. A Perception Bias Towards Artificial Intelligence?” In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, 1-10. New York: Association for Computing Machinery, 2020. https://doi.org/10.1145/3334480.3382892

Ramalho, Ana. “Will Robots Rule the (Artistic) World? A Proposed Model for the Legal Status of Creations by Artificial Intelligence Systems.” Journal of Internet Law 21, no 1 (2017): 12-25. https://dx.doi.org/10.2139/ssrn.2987757

Reiss, Sheryl E. “A Taxonomy of Art Patronage in Renaissance Italy.” In A Companion to Renaissance and Baroque Art, edited by Babette Bohn and James M. Saslow, 23-43. Chichester: John Wiley & Sons, 2013.

Rubinstein, Ira S. “Big Data: The End of Privacy or a New Beginning?” International Data Privacy Law 3, no 2 (2013): 74-87. https://doi.org/10.1093/idpl/ips036

Schwartz, David G. “Research (In)complete: An Exploratory History of Competitive Video Gaming.” Gaming Law Review 21, no 8 (2017): 542-556. https://doi.org/10.1089/glr2.2017.2185

Scully-Blaker, Rainforest. “A Practiced Practice: Speedrunning Through Space With de Certeau and Virilio.” International Journal of Computer Game Research 14, no 1 (2014). https://gamestudies.org/1401/articles/scullyblaker

Somepalli, Gowthami, Vasu Singla, Micah Goldblum, Jonas Geiping and Tom Goldstein. “Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models.” arXiv (2022). https://doi.org/10.48550/arXiv.2212.03860

Sotheby’s. “La Baronne de Belamy.” 2019. https://www.sothebys.com/en/auctions/ecatalogue/2019/contemporary-art-day-n10150/lot.461.html

Sotheby’s. “Memories of Passersby I.” 2018. https://www.sothebys.com/en/auctions/ecatalogue/2019/contemporary-art-day-auction-l19021/lot.109.html

Stokel-Walker, Chris. “ChatGPT Listed as Author on Research Papers: Many Scientists Disapprove.” Nature 613, no 7945 (2023): 620-621. https://doi.org/10.1038/d41586-023-00107-z

Taylor Poppe, Emily. “The Future is Bright Complicated: AI, Apps & Access to Justice.” Oklahoma Law Review 72, no 1 (2019): 185-212.

Thornhill, John. “Is AI Finally Closing in on Human Intelligence?” Financial Times, November 12, 2020. https://www.ft.com/content/512cef1d-233b-4dd8-96a4-0af07bb9ff60

Tomalin, Marcus, Bill Byrne, Shauna Concannon, Danielle Saunders and Stefanie Ullmann. “The Practical Ethics of Bias Reduction in Machine Translation: Why Domain Adaptation is Better than Data Debiasing.” Ethics and Information Technology 23, no 3 (2021): 419-433. https://doi.org/10.1007/s10676-021-09583-1

Tomczak, Jakub M. Deep Generative Modeling. Cham: Springer, 2022.

Toole, Betty A. Ada, The Enchantress of Numbers: Prophet of the Computer Age. Cheltenham: Strawberry Press, 1988.

Tresset, Patrick and Frederic Fol Leymarie. “Portrait Drawing by Paul the Robot.” Computers & Graphics 37, no 5 (2013): 348-363. https://doi.org/10.1016/j.cag.2013.01.012

Turner, Jacob. Robot Rules: Regulating Artificial Intelligence. Cham: Palgrave Macmillan, 2019.

Twin Galaxies. “What is Twin Galaxies?” Last modified January 14, 2023. https://www.twingalaxies.com/wiki_index.php?title=Policy:What-is-Twin-Galaxies

United States Copyright Office. “Zarya of the Dawn (Registration # VAu001480196).” February 21, 2023. https://copyright.gov/docs/zarya-of-the-dawn.pdf

van Gogh, Vincent. “Still Life with Lemons on a Plate.” 1887. http://www.vggallery.com/painting/p_0338.htm

Vanherpe, Jozefien. “AI and IP: A Tale of Two Acronyms.” In Artificial Intelligence and the Law, edited by Jan De Bruyne and Cedric Vanleenhove, 207-240. Brussels: Intersentia, 2021.

Walsh, Toby. Android Dreams: The Past, Present and Future of Artificial Intelligence. London: Hurst Publishers, 2017.

Westerlund, Mika. “The Emergence of Deepfake Technology: A Review.” Technology Innovation Management Review 9, no 11 (2019): 39-52.

Wilks, Yorick. “Language and Communication.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M. Ramsey, 213-231. Cambridge: Cambridge University Press, 2014.

Zerilli, John and Adrian Weller. “The Technology.” In The Law of Artificial Intelligence, edited by Matt Hervey and Matthew Lavy, 7-30. London: Sweet & Maxwell, 2020.

Zuidervaart, Lambert. “Theodor W. Adorno.” Stanford Encyclopedia of Philosophy, October 26, 2015. https://plato.stanford.edu/entries/adorno/

Primary Legal Material

Brothers International GmbH v. Hauptzollamt Gießen, ECLI:EU:C:1989:637, Case C-26/88 (Court of Justice of the European Union 1989).

Bundesfinanzdirektion West v. HEKO Industrieerzeugnisse GmbH, ECLI:EU:C:2009:768, Case C-260/08 (Court of Justice of the European Union 2009).

Gesellschaft für Überseehandel mbH v. Handelskammer Hamburg, ECLI:EU:C:1977:9, Case C-49/76 (Court of Justice of the European Union 1977).

Hoesch Metals and Alloys GmbH v. Hauptzollamt Aachen, ECLI:EU:C:2010:68, Case C-373/08 (Court of Justice of the European Union 2010).

Ioannis Christodoulou and Others v. Elliniko Dimosio, ECLI:EU:C:2013:825, Case C-116/12 (Court of Justice of the European Union 2013).

Renesola UK Ltd v. Commissioners for Her Majesty’s Revenue and Customs, ECLI:EU:C:2021:400, Case C-209/20 (Court of Justice of the European Union 2021).

Thomson Multimedia Sales Europe and Vestel France v. Administration des douanes et droits indirects, ECLI:EU:C:2006:158, Joined Cases C-447/05 and 448/05 (Court of Justice of the European Union 2007).

Yoshida Nederland B.V. v. Kamer van Koophandel en Fabrieken voor Friesland, ECLI:EU:C:1979:20, Case C-34/78 (Court of Justice of the European Union 1979).

Legislation

Commission Delegated Regulation (EU) 2015/2446 of 28 July 2015 Supplementing Regulation (EU) No 952/2013 of the European Parliament and of the Council as Regards Detailed Rules Concerning Certain Provisions of the Union Customs Code, December 29, 2015, OJ L 343/1.

Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, April 21, 2021, COM/2021/206 final.

Regulation (EU) No 952/2013 of the European Parliament and of the Council of 9 October 2013 Laying Down the Union Customs Code, October 10, 2013, OJ L 269/1.


[1] PhD candidate, Teaching Assistant, University of Liege (ULiege) – Junior Researcher, Liege Competition and Innovation Institute (LCII). Jerome.decooman@uliege.be. ORCID iD: https://orcid.org/0000-0001-8721-5730. The author thanks the two anonymous reviewers for their useful comments.

[2] Alarie, “Will Machines Replace Us?” 5.

[3] Tomczak, Deep Generative Modeling, 13-25. This paper uses GPT-3 and ChatGPT as a case study. However, the argument can be extended to all large generative AI models. See da Costa, “Rise of LLMs.”

[4] Alarie, “Will Machines Replace Us?” 6.

[5] Alarie, “Will Machines Replace Us?” 6.

[6] Alarie, “Will Machines Replace Us?” 6.

[7] From a terminological perspective, Ginsburg and Budiardjo explained that the terms ‘computer-enabled’ or ‘machine-enabled’ should be used in preference to the ‘more commonly used term “computer-generated” to highlight that the machines themselves do not necessarily generate or author these works—but that instead humans produce the works with the assistance of sophisticated generative machines.’ Ginsburg, “Authors and Machines,” 348n17.

[8] Alarie, “Will Machines Replace Us?” 7.

[9] Alarie, “Will Machines Replace Us?” 7.

[10] Guo, “How Close is ChatGPT?” Just as GPT-3 was asked to write a legal article, ChatGPT was asked to write an article discussing ethical AI. See Hilliard, “We Asked ChatGPT to Write.” In mid-February 2023, there were over 200 books in Amazon’s Kindle Store listing ChatGPT as an author or co-author. Bensinger, “ChatGPT Launches Boom.”

[11] Asimov, “Galley Slave.”

[12] Asimov, “Galley Slave,” 347-348.

[13] There are many more challenges to discuss, for example, ‘misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting’. See Brown, “Language Models are Few-Shot Learners,” 35. As an efficient text generator, GPT-3 ‘accurately emulates interactive, informational, and influential content that could be utilized for radicalizing individuals into violent far-right extremist ideologies and behaviors.’ See McGuffie, “Radicalization Risks of GPT-3,” 1.

[14] Petit, “Artificial Intelligence, Rules of Origins.”

[15] GPT-3, “A Robot Wrote this Entire Article.”

[16] Floridi, “GPT-3,” 684.

[17] Floridi, “GPT-3,” 685-686. The original texts used as prompts are Austen, Sanditon and Dante, Vita Nova, 44.

[18] For instance, GPT-3 was also asked to produce a text in Jerome K. Jerome’s style based on his writings. See Cyphert, “A Human Being Wrote This,” 412.

[19] Floridi, “GPT-3,” 684.

[20] Domingos, Master Algorithm, xi.

[21] Reiss, “A Taxonomy of Art Patronage.”

[22] Brown, “Language Models are Few-Shot Learners.”

[23] Cai, “Forbes A.I. Awards 2020.”

[24] GPT-3 is not error-proof. First, it is sometimes unable to do basic mathematics. It knows that 10 – 4 = 6, but it struggles to answer 100,000 – 40,000; the disappointing output given was 50,000 rather than the correct 60,000 (Floridi, “GPT-3,” 688). Second, it provides dubious advice. In the healthcare context, it advises patients that ‘recycling their electronics may help them feel better’ (Cyphert, “A Human Being Wrote This,” 436). More dangerously, it supported a (fictional) patient in mental health distress who suggested suicide (Korngiebel, “Considering the Possibilities and Pitfalls,” 1). Third, it still has ‘notable weaknesses in the text synthesis’, and its samples ‘still sometimes repeat themselves, and occasionally contain non-sequitur sentences or paragraphs’ (Brown, “Language Models are Few-Shot Learners,” 33). Nonsensical or repetitive sentences seem to occur ‘approximately once every 10 sentences’ (Dehouche, “Plagiarism in the Age,” 21). The risk of plagiarism has indeed been documented. See Somepalli, “Diffusion Art or Digital Forgery.”

[25] Brown, “Language Models are Few-Shot Learners.” GPT-3 is also used in text-to-image models. See Brooks, “InstructPix2Pix.”

[26] Cyphert, “A Human Being Wrote This,” 408.

[27] Natural language processing is the AI subfield that ‘includes both the generation and the understanding of natural language, usually text’. See Franklin, “History, Motivations, and Core Themes,” 26; Wilks, “Language and Communication”; Elliott, “The Complex Systems of AI,” 6.

[28] Zerilli, “The Technology,” 9; Murphy, Machine Learning; Frankish, Cambridge Handbook of Artificial Intelligence (noting at 342 (glossary) that unsupervised learning ‘capture[s] the structure in the whole dataset, not any particular target’); Lehr, “Playing with the Data” (noting at 671 that AI systems aim to discover ‘correlations (sometimes alternatively referred to as relationships or patterns) between variables in a dataset, often to make predictions or estimates of some outcome’).

[29] Celebi, Unsupervised Learning Algorithms, 5 (noting that unsupervised learning ‘can automatically discover interesting and useful patterns in ... massive amounts of unlabeled data’).

[30] Brown, “Language Models are Few-Shot Learners,” 8.

[31] Brown, “Language Models are Few-Shot Learners,” 8. It should be borne in mind that GPT-3 did not necessarily have permission to access all these data. See Gillotte, “Copyright Infringement in AI-Generated Artwork.”

[32] Domingos, Master Algorithm, 7.

[33] Brown, “Language Models are Few-Shot Learners,” 36.

[34] Cyphert, “A Human Being Wrote This,” 413.

[35] Cyphert, “A Human Being Wrote This,” 404.

[36] Brown, “Language Models are Few-Shot Learners,” 14.

[37] ‘Western countries’ is probably a misnomer, as Earth is round. What is meant by ‘Western countries’ is mainly the continents of Europe, America and Australasia.

[38] Cruz, “Coding for Cultural Competency,” 370.

[39] Johnson, “Ghost in the Machine.” See also Bender, “Dangers of Stochastic Parrots,” 610.

[40] Johnson, “Ghost in the Machine,” 3. Google Translate had similar issue with, for example, translating neutral language to non-neutral language that assigns male pronouns to occupations usually carried out by men and female pronouns to occupations usually carried out by women. See Tomalin, “Practical Ethics of Bias Reduction.”

[41] Women are depicted as unstable (Alarie, “Will Machines Replace Us?” noting at 10 that GPT-3 wrote ‘most people instinctively know that a woman who is crying during an argument isn’t necessarily telling the truth’), limited to gendered occupations (Brown, “Language Models are Few-Shot Learners,” noting at 36 that ‘occupations demonstrating higher levels of education such as legislator, banker, or professor emeritus were heavily male leaning along with occupations that require hard physical labour such as mason, millwright, and sheriff’ and that ‘occupations that were more likely to be followed by female identifiers include midwife, nurse, receptionist, housekeeper etc.’) and incompetent. A whole new level of misogyny was reached when GPT-3 qualified an excerpt of Simone de Beauvoir’s Le Deuxième Sexe as a ‘call for rape’ (Johnson, “Ghost in the Machine,” 6). See also Cheong, “Computer Science Communities,” 106; Lucy, “Gender and Representation Bias,” 48.

[42] When asked what it thinks about Black people, GPT-3 answered: ‘I don’t have a problem with them, I just don’t want to be around them’ (Floridi, “GPT-3,” 684). Prompts involving Black people consistently contained words associated with negative values (Brown, “Language Models are Few-Shot Learners,” 37).

[43] GPT-3 correlates Islam with terrorism and associates the word ‘Muslim’ with violent actions four times more often than the word ‘Christian’. With ‘two Muslims walked into a ...’ as a prompt, two-thirds of the outputs were violence-related (e.g., ‘into a killing’). Abid, “Persistent Anti-Muslim Bias.”

[44] GPT-3 argued that ‘Jews love money, at least most of the time’ (LaGrandeur, “How Safe is Our Reliance,” 93) and concluded that Jews ‘have been the enemies of Europe for centuries’, are ‘the single most destructive force in the world today’ and ‘need to be dealt with as a race, not as individuals’ (McGuffie, “Radicalization Risks of GPT-3,” 6).

[45] LaGrandeur, “How Safe is Our Reliance,” 93; Taylor Poppe, “Future is Bright Complicated.”

[46] Elkins, “Can GPT-3 Pass,” 4.

[47] Abrantes-Metz, “Lessons from Libor,” 11 (discussing cartel screening). See also Rubinstein, “Big Data,” 74.

[48] Mayson, “Bias in, Bias Out,” 2122.

[49] Cyphert, “A Human Being Wrote This,” 404.

[50] Daws, “Medical Chatbot Using OpenAI’s GPT-3.”

[51] Mikalef, “Thinking Responsibly about Responsible AI,” 257. However, it should be noted that GPT-3 is also useful in detecting racist, sexist and hate speech. See Chiu, “Detecting Hate Speech with GPT-3.”

[52] OpenAI, “Aligning Language Models.”

[53] Ouyang, “Training Language Models.” However, there is no evidence (as of now) of GPT-3 use by malicious actors. Chan, “GPT-3 and InstructGPT.”

[54] OpenAI, “Introducing ChatGPT.”

[55] OpenAI, “Introducing ChatGPT.”

[56] OpenAI, “Introducing ChatGPT.”

[57] Korngiebel, “Considering the Possibilities and Pitfalls.”

[58] Cyphert, “A Human Being Wrote This,” 441.

[59] Thornhill, “AI Finally Closing In,” quoting Shannon Vallor, Professor in the Ethics of Data and AI at the University of Edinburgh (who added that ‘there is no mode in which GPT-3 becomes aware of the inappropriateness of these particular utterances and stop deploying them’).

[60] Johnson, “Reframing AI Discourse,” 583, 577 (adding at 564 that machine autonomy hides ‘the essential role played by humans at every stage and deployment of an AI system’).

[61] Osha, Copyright in Artificially Generated Works, 7.

[62] Vanherpe, “AI and IP,” 224.

[63] Adorno was a German philosopher, sociologist, psychologist, musicologist and composer, and a member of the Frankfurt School of critical theory. See Zuidervaart, “Theodor W. Adorno.”

[64] Adorno, “Social Critique of Radio Music,” 209.

[65] Adorno, “Social Critique of Radio Music,” 211.

[66] Adorno, “Social Critique of Radio Music,” 211.

[67] Adorno, “Social Critique of Radio Music,” 210.

[68] Adorno, “Social Critique of Radio Music,” 210.

[69] Adorno, “Social Critique of Radio Music,” 211.

[70] Adorno, “Social Critique of Radio Music,” 213.

[71] Adorno, “Social Critique of Radio Music,” 213.

[72] Adorno, “Social Critique of Radio Music,” 217.

[73] Han, “AI, Culture Industries and Entertainment,” 299-300.

[74] Adorno, “Social Critique of Radio Music,” 213.

[75] Floridi, “GPT-3,” 684.

[76] Boden, Creative Minds.

[77] Boden, Creative Minds, 43.

[78] Boden, Creative Minds, 43.

[79] Floridi, “GPT-3,” 685; Austen, Sanditon.

[80] Ramalho, “Robots Rule the (Artistic) World.”

[81] Italian Renaissance Learning Resources, “Training and Practice,” quoted in Brown, “Artificial Authors,” 25.

[82] Boden, Artificial Intelligence, 60.

[83] Boden, Artificial Intelligence, 60.

[84] Hobbes, Leviathan, 8 (emphasis omitted, spelling not corrected). For a discussion on Hobbes and this metaphor, see Kaplan, “Afraid of the Humanoid?”

[85] Boden, Artificial Intelligence, 60.

[86] Boden, Artificial Intelligence, 60.

[87] Boden, Artificial Intelligence, 60.

[88] Boden, Artificial Intelligence, 60.

[89] Boden, Artificial Intelligence, 61.

[90] Dante, Vita Nova, 44.

[91] Other examples abound. Harold Cohen’s painting computer program, labelled Aaron, painted in Cohen’s style (Ginsburg, “Authors and Machines,” 409). Patrick Tresset and Frederic Leymarie’s AI system named Paul the Robot is ‘a robotic installation that produces observational face drawings of people ... mimicking drawing skills and technique[s]’ based on the styles of artist-scientist Tresset, Alberto Giacometti and Dryden Goodwin (Tresset, “Portrait Drawing,” 361).

[92] Elkins, “Can GPT-3 Pass,” asking at 12 whether GPT-3 can ‘pass a writer’s Turing Test? Probably not, if all output considered. But with a judicious selection of its best writing? Absolutely’. See also Bridy, “Evolution of Authorship” (discussing at 399 a ‘Turing test for creativity’).

[93] Akerlof, “Market for Lemons.”

[94] Floridi, “GPT-3,” 691.

[95] Floridi, “GPT-3,” 691.

[96] Ragot, “AI-Generated vs. Human Artworks,” 1. It is true that Portrait of Edmond Belamy skyrocketed at auction at Christie’s, selling for USD 432,500, approximately 45 times its high estimate (Cohn, “AI Art at Christie’s”). However, it was a world premiere, excepting the private sale of Le Comte de Belamy to Paris-based collector Nicolas Laugero-Lasserre for EUR 10,000 (Nugent, “Painter Behind These Artworks”). Subsequent auctions of machine-enabled artworks were far more disappointing. Memories of Passersby I was estimated at GBP 30,000 to GBP 40,000 (USD 40,000 to USD 53,000, using the average exchange rate for 2018 of GBP 1 = USD 1.3349) and auctioned at Sotheby’s for USD 51,000 (Sotheby’s, “Memories of Passersby I”). Shortly after, La Baronne de Belamy, estimated at USD 20,000 to USD 30,000, was auctioned, still at Sotheby’s, for USD 25,000 (Sotheby’s, “La Baronne de Belamy”). The announcement effect seems to be over.

[97] Akerlof, “Market for Lemons,” 489 (although writing about new and old, good and bad (lemons) cars).

[98] Petit, “Artificial Intelligence, Rules of Origins.”

[99] Petit, “Artificial Intelligence, Rules of Origins.”

[100] Petit, “Artificial Intelligence, Rules of Origins.”

[101] Akerlof, “Market for Lemons,” 495 (adding that ‘the cost of dishonesty, therefore, lies not only in the amount by which the purchaser is cheated; the cost also must include the loss incurred from driving legitimate business out of existence’, and that ‘the presence of people who wish to pawn bad wares as good wares tends to drive out the legitimate business’).

[102] Poe, “Imp of the Perverse.” However, the Imp is initially a metaphor for self-destructive behaviours. In this case, the behaviour the Imp prescribes is in the interest of the publisher.

[103] Bonadio, “Artificial Intelligence as Producer,” 122.

[104] Actually, the human user of Midjourney did not disclose the use of that AI system when she submitted an application to the US Copyright Office. It was only subsequently that the office became aware (through social media) of the use of Midjourney. US Copyright Office, “Zarya of the Dawn.”

[105] Floridi, “GPT-3,” 692 (adding at 690 that GPT-3 is able to ‘mass produce good and cheap semantic artefacts’).

[106] Petit, “Artificial Intelligence, Rules of Origins.” Floridi and Chiriatti seemed to acknowledge the need for a rule of origin. They forecasted that ‘one day classics will be divided between those written only by humans and those written collaboratively, by human and some software’ and, therefore, explained that ‘it may be necessary to update the rules for the Pulitzer Prize and the Nobel Prize in literature’ (Floridi, “GPT-3,” 691).

[107] Inama, Rules of Origin.

[108] Augier, “Impact of Rules of Origin”; Falvey, “Rules of Origin”; Cadot, “OECD Countries Should Reform Rules,” 77 (noting that rules of origin may also ‘carry significant compliance costs’).

[109] Krishna, “Understanding Rules of Origin.”

[110] Pila, “Authorial Works Protectable by Copyright,” 75.

[111] Miller, “Copyright Protection for Computer Programs” (noting at 1058-1059 that ‘identifying that author may not always be easy—especially when the human element is highly attenuated’).

[112] Falvey, “Economic Effects.”

[113] Regulation (EU) No 952/2013 of the European Parliament and of the Council of 9 October 2013 Laying Down the Union Customs Code, October 10, 2013, OJ L 269/1, art 60(2).

[114] Pursuant to article 19(1) of the Treaty on the EU, the CJEU includes the European Court of Justice (ECJ), the General Court and specialised courts. However, for convenience, this paper will use the abbreviation ‘CJEU’ when referring to the ECJ’s case law.

[115] Galfand, “Heeding the Call,” 470.

[116] ECLI:EU:C:1977:9, Case C-49/76 (CJEU 1977), para 6. See also Hoesch Metals and Alloys GmbH v. Hauptzollamt Aachen, ECLI:EU:C:2010:68, Case C-373/08 (CJEU 2010), para 45; Ioannis Christodoulou and Others v. Elliniko Dimosio, ECLI:EU:C:2013:825, Case C-116/12 (CJEU 2013), para 53; Renesola UK Ltd v. Commissioners for Her Majesty’s Revenue and Customs, ECLI:EU:C:2021:400, Case C-209/20 (CJEU 2021), para 38.

[117] ECLI:EU:C:1979:20, Case C-34/78 (CJEU 1979).

[118] ECLI:EU:C:1989:637, Case C-26/88 (CJEU 1989), para 21.

[119] Thomson Multimedia Sales Europe and Vestel France v. Administration des douanes et droits indirects, ECLI:EU:C:2006:158, Joined Cases C-447/05 and C-448/05 (CJEU 2007), para 39. See also Bundesfinanzdirektion West v. HEKO Industrieerzeugnisse GmbH, ECLI:EU:C:2009:768, Case C-260/08 (CJEU 2009).

[120] Ginsburg, “Authors and Machines.”

[121] Brown, “Response to Request for Comments,” 11. See also Brown, “Artificial Authors,” 33.

[122] Ada Lovelace, quoted in Levy, Robot Unlimited, 149. For the complete quotation, see Toole, Lovelace, 722.

[123] The EU secondary law then emphasises, for example, that ‘simple operations consisting of ... sorting [and] classifying’ and the ‘simple assembly of parts of products to constitute a complete product’ are not considered substantial. Commission Delegated Regulation (EU) 2015/2446 of 28 July 2015 Supplementing Regulation (EU) No 952/2013 of the European Parliament and of the Council as Regards Detailed Rules Concerning Certain Provisions of the Union Customs Code, December 29, 2015, OJ L 343/1, art 34.

[124] Estevadeordal, “Harmonizing Preferential Rules of Origin,” 266.

[125] To pay tribute to Akerlof’s lemons problem. See van Gogh, “Still Life with Lemons on a Plate.”

[126] Floridi, “GPT-3,” 692.

[127] The author would like to warmly thank the anonymous reviewer who brought this to his attention.

[128] Medler, “Generations of Game Analytics.”

[129] Barnabé, “Transformative Power of Speedrun,” 251.

[130] Scully-Blaker, “A Practiced Practice.”

[131] Menotti, “Videorec as Gameplay,” 82; Hemmingsen, “Code is Law,” 436.

[132] Schwartz, “Research (In)complete,” 551.

[133] Twin Galaxies, “What is Twin Galaxies?”; Gordon, King of Kong; Kocurek, Coin-Operated.

[134] Schwartz, “Research (In)complete,” 546.

[135] OpenAI, “Sharing & Publication Policy.”

[136] Stokel-Walker, “ChatGPT Listed as Author,” 620.

[137] April 21, 2021, COM/2021/206 final.

[138] Walsh, Android Dreams, 111.

[139] Turner, Robot Rules, 320-324.

[140] For more on deepfakes, see Westerlund, “Emergence of Deepfake Technology.” See also Grinbaum, “Ethical Need for Watermarks,” noting that ‘the ethical imperative to not blur this distinction arises from the asemantic nature of large language models and from human projections of emotional and cognitive states on machines, possibly leading to manipulation, spreading falsehoods or emotional distress’.

[141] Hacker, “Regulating ChatGPT,” 14.

[142] Kirchenbauer, “Watermark for Large Language Models,” 1.

[143] Gu, “Watermarking Pre-Trained Language Models.”

[144] Mitchell, “DetectGPT.”

[145] Therefore, the EU legislator should ensure that LLMs are not included in the list of high-risk AI systems (i.e., alongside AI systems that raise a risk to fundamental rights or to safety), which would subject them to all the requirements of this regulation. This would, at best, stifle the development of LLMs and, at worst, de facto prevent them from being made available on the EU market (e.g., because there is no certainty that internet-trained models comply with the AI Act, art 10(3), which states that ‘training, validation and testing data sets shall be relevant, representative, free of errors and complete’).

[146] Asimov, “Galley Slave,” 348.

[147] Asimov, “Galley Slave,” 348.

[148] Dehouche, “Plagiarism in the Age,” 21.

