
Law, Technology and Humans



Li, Phoebe; Williams, Robin; Gilbert, Stephen; Anderson, Stuart --- "Regulating Artificial Intelligence and Machine Learning-Enabled Medical Devices in Europe and the United Kingdom" [2023] LawTechHum 23; (2023) 5(2) Law, Technology and Humans 94


Regulating Artificial Intelligence and Machine Learning-Enabled Medical Devices in Europe and the United Kingdom

Phoebe Li

University of Sussex*, United Kingdom

Robin Williams

University of Edinburgh, United Kingdom

Stephen Gilbert

Technische Universität Dresden, Germany

Stuart Anderson

University of Edinburgh, United Kingdom

Abstract

Keywords: Artificial Intelligence (AI); Artificial Intelligence-enabled Medical Device (AIeMD); regulation; autonomous systems; Software as a Medical Device (SaMD).

I. Introduction

Recent achievements in Artificial Intelligence (AI) open up opportunities for new algorithmic tools to assist medical diagnosis and care delivery, such as the diagnosis of breast cancer tumours, diabetic retinopathy, diabetes mellitus and stroke.[1] However, the optimal process for developing AI, one of repeated cycles of learning and implementation, poses challenges to our existing system of regulating medical devices.[2] Product developers face a tension between the benefits of this process, in terms of continuous improvement and deployment of algorithms, and the benefits of keeping products unchanged in order to collect evidence for safety assurance processes. Existing precautionary regulation and clinical assurance requirements, based on the submission of evidence about the performance of specific artefacts, are reasons why current diagnostic AI is not yet based on real-time machine learning that is continually updated on changing data streams. We argue below that governance requirements that accommodate this approach are needed and would be of great importance in healthcare, among other fields.[3]

Let us start by noting that there are various definitions of AI and that the definition has been a topic of much debate given the current European Union (EU) AI Act proposal.[4] Throughout this paper, however, we use the OECD definition, which defines AI systems as:

a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.[5]

Significantly, it is this issue of the level of autonomy of AI systems that gets to the nub of some of the most pressing challenges presented by AI-enabled technologies, in particular when it comes to regulation and governance. Machine Learning (ML), for instance, where computer software learns from experience with live updates fed by real-time data,[6] ‘significantly’ changes the operation of the system,[7] presenting huge challenges in safety-critical contexts and systems. What we will see in this paper is that, given the particular way in which ML AI operates, the current regulatory system is inadequate for the task of governing systems that incorporate it. As such, regulatory reforms are required to govern AI, corresponding to the level of autonomy in the learning of the decision-making system. Such reform is imperative, especially because future systems may have an even higher level of autonomy owing to the adoption of deep learning (an even more complex branch of AI).[8] Market changes may also affect the intended purpose of systems. However, leaving aside these future challenges, we will show that existing AI systems already present issues for law. For example, many closed-loop devices, such as pacemakers and internal defibrillators, use real-time analysis of signals to deliver therapy. These have been around for decades and use software that may or may not be seen to be AI, depending on the definition employed.[9] Further, there are deep learning artificial neural network approaches used to train prediction models. These models are then used in fixed and non-adaptable mode (without using ML neural techniques) in approved medical devices in the EU, UK and US.[10] Arguably, there are already many AIeMD on the market that do not involve human intervention and oversight, depending again on the definition one takes for AI. For example, autonomous image analysis AI in radiology has received market approval.[11]

We have entered a phase of regulatory experimentation, with various novel approaches emerging around the world.[12] For example, the US Food and Drug Administration (FDA) recently published draft guidance on marketing submissions for predetermined change control plans for AI/ML-enabled devices.[13] Notably, Japan’s ‘Digital Transformation Action Strategies in Healthcare for SaMD’ approach, which allows predetermined change control plans for AI-SaMD, started in 2020.[14] The process is not only about the application of AI but also about the institutional arrangements for its safe and dependable deployment. This includes experimentation around regulatory processes and is generally conducted within novel market pathways, surveillance schemes and piloting sandbox schemes. The pressing challenge is how to balance the potential benefits of innovation in healthcare with the need to assure its safety.

In order to manage these conflicting tensions, novel approaches to regulating AI in health and care have been proposed that significantly reconfigure the balance between the upstream (pre-market) and downstream (post-market) phases of traditional two-phase regulation processes.[15] As we will discuss further in section IV, a ‘total product life cycle’ (TPLC) approach has been used by certain regulators instead of the two-phase regulatory regime comprised of pre-market and post-market stages.[16] Finding the balance between innovation and safety is also framed by the requirements of clinical governance, which set high evidential standards for clinical use.[17] Navigating the in-between experimental space is crucial for agile and smart regulation.

Novel approaches to regulation are also being discussed in terms of international cooperation by a number of bodies. These include the International Medical Device Regulators Forum (IMDRF), the joint World Health Organization (WHO)/International Telecommunication Union (ITU) initiative,[18] the FDA, the Japan Pharmaceuticals and Medical Devices Agency (PMDA),[19] the EU’s European Medicines Agency (EMA),[20] and the UK Medicines and Healthcare products Regulatory Agency (MHRA). In the UK, uncertainties around existing legal instruments have been compounded by the UK’s departure from the EU in 2020.[21] Proposals to diverge from the EU regime on medical device regulation have been raised in the UK, with the focus on the regulatory path involving notified bodies (known as approved bodies in the UK) and national competent authorities.[22] We should also note that, throughout this paper, we generally refer to the ‘UK’ instead of ‘Great Britain’ (England, Wales and Scotland) to indicate the overall challenges related to AIeMD regulation across all UK jurisdictions. However, this is not to ignore the fact, as we will see shortly, that there is a dual system of medical device regulation currently in operation between Great Britain and Northern Ireland, with the latter continuing to be governed by EU law in this area.[23]

With this context in mind, this paper analyses the relevant regulatory responses and obligations in respect of AI in the EU Medical Device Regulation (MDR)[24] and the proposed EU AI Act.[25] In particular, we draw on insights gained and reflect on key themes captured at recent stakeholder workshops on AI/ML Software-enabled Medical Devices (AIeMDs). These workshops involved a wide range of stakeholders, including industry, regulatory and academic specialists. As part of this, we map the evolving regulatory trajectory of AIeMDs in the UK, comparing this to the regimes in the EU and the US.[26] We start in section II with a brief review of the regulatory landscape for AIeMD, focusing on the EU MDR and the corresponding obligations set out in the EU’s proposed AI Act. We note that an experimental sphere has emerged in the circular TPLC approach but argue that stakeholders’ obligations will need to be clarified and redefined for clinical practice. In section III, we highlight the gaps and uncertainties brought by AIeMD, reflecting the emerging UK regulatory trajectory after leaving the EU. Following this, in section IV, we consider how the UK is currently seeking to optimise its approach, lying—as we will demonstrate—between the EU and the US regimes. What we will demonstrate is that many concepts of the TPLC approaches already exist in the EU framework and that the experimental spaces lie in cross-institution or organisational coordination. We conclude, in section V, with some observations regarding how we might best navigate the future regulatory trajectory for the UK after leaving the EU.

II. Regulating AI/ML-Enabled Software as Medical Device (AIeMD) in the EU

We start the discussion on medical device regulation with the EU. This framework is relevant for the UK context as well because, despite leaving the EU, the regulatory model for medical devices to which the UK is most aligned is the EU regulatory model. This is for two reasons. First, as already noted above, the UK now operates a dual system of regulation between Great Britain and Northern Ireland. The consequence of this is that Northern Ireland is still subject to the current EU medical device regulations.[27] Second, and relatedly, although Great Britain is in the process of diverging from the EU regulatory approaches, the current medical devices regulatory regime is still essentially EU law-based. This is because the UK’s Medical Devices Regulations 2002 (SI 2002/618, as amended), as they apply in Great Britain, still give effect to the provisions of three older EU Directives.[28] Given all of this, arguably, the EU still has an outsized influence on UK law in relation to medical devices.

The legal infrastructure in which Software as a Medical Device (SaMD) and thus AIeMDs sit was built for a physical and tangible world. Yet, in recent decades, this has been disrupted by a variety of digital transformations, with governments playing catch-up to meet the challenges brought by digital products and services, including SaMDs. While it is relatively straightforward to establish mechanical testing frameworks for discrete physical objects, similar rules are difficult to apply to informational products like algorithms and software whose properties cannot be established through visual inspection. While many testing frameworks have been built for software,[29] there may be additional challenges to testing AI/ML-enabled software. This is because of the possibly unpredictable behaviour of such software, given its potential to adapt performance in real time.[30] Legal liability could arise at any element or interface of the AI lifecycle: datasets, algorithms, software and hardware.[31] In addition to risks arising from separate elements of the AI lifecycle, complexities in regulation also include governing risks arising from connectivity and cybersecurity issues.[32]

The EU legal framework is based on fundamental rights as legal guidelines for medical AI, and human oversight is a key criterion for safeguarding human dignity. Key regulatory issues include privacy and data protection, informed consent, product safety and liability rules.[33] Hence, the focus for regulation revolves around human oversight and the explainability of AI systems.[34] As we demonstrate below, these new variables all pose challenges to the existing regulatory system.

SaMD in the EU Medical Device Regulation (MDR)

In the EU, software used in health as a medical device is regulated under the EU MDR, which replaced the former Medical Device Directive (MDD) and Active Implantable Medical Device Directive (AIMD).

The EU MDR sets out the legal framework governing medical devices intended to be placed on the Union market, including articulating manufacturers’ obligations following the classification of the medical device corresponding to its risk profile. It comprises legal requirements across several areas. These include affixing a CE mark; device classification, where devices will be class I, IIa, IIb or III; device approvals by notified bodies; risk management and quality management systems; post-market surveillance; and clinical evaluation and technical documentation.[35] Although the MDR sets out detailed guidelines in relation to the classification of SaMD, the interpretation of the medical purpose of a given piece of software would need to be determined on a case-by-case basis. Due to the ambiguities of the new rules and the complexities in decision-making regarding whether software updates constitute a significant change, developers or manufacturers may need frequent communication with the notified bodies to navigate the market approval pathway.

While the EU MDR sets out the principles for SaMD, it is nevertheless insufficient for regulating the unique characteristics of AIeMD. This is because it takes a risk-based approach to the classification of devices and focuses on physical harms based on their potential impact on the human body. Crucially, this regulation has much less focus on other classes of risk, such as those presented by AI that may have an effect at the population (public health) level, something that is potentially only evident decades after the deployment of the ML model. Population effects are particularly important for AIeMD, where bias caused by unrepresentative training data can result in harms that are only characterisable at a population level. Another gap within the MDR relates to the complex chain of custody and issues around data governance. For example, the MDR only identifies a ‘manufacturer’ as the main subject responsible for compliance;[36] however, in an AIeMD case, various actors would be responsible for each step of operation (see section III below).

Additionally, the MDR does not provide guidelines for addressing the ‘black-box’ nature of AI, which is autonomous, unpredictable and constantly changing.[37] It does not require devices to explain their outputs or to be transparent to the user; it only requires devices to be safe and to perform well enough to achieve their intended use.[38] Though the MDR imposes general reporting obligations for post-market surveillance, the technical documents required for assessing performance and safety remain unclear.

As we are about to see, the new AI Act proposal adopts a similar risk-based approach and provides parallel regulatory mechanisms in relation to AIeMD, which may create overlapping or even blurred duties and responsibilities for stakeholders. We expand on this below.

Regulating AIeMD: The Proposed EU AI Act

AIeMD brings additional variables into regulating SaMD, such as evolving parameters in new data and datasets reflecting race, gender, age and clinical practice.[39] As discussed above, SaMD has been considered in existing medical device regulations. However, if on-market adaptive AIeMD are to be accommodated, a new regulatory regime for governing the risk associated with approved devices with adaptive performance and safety profiles will be necessary. This includes real-time performance monitoring and pre-specifications of the range of acceptable device performance.[40] The introduction of ‘safe harbours’ (the safe margin that can accommodate change or protocols for a predetermined change) that reflect a list of allowable changes and modifications for real-time AI/ML devices will need to be explored.[41] Looking forward, the regulatory approach must recognise the agile nature of software and temper conflicting tensions between innovation, market competitiveness and patient interests.[42]

In response to the challenges brought by AI systems, the EU proposed the AI Regulation (also referred to as the AI Act)[43] to set a harmonised legal framework for regulating AI systems.[44] Let us, therefore, examine the applicable rules within the proposed Act. The EU’s proposed AI Act defines an AI system as:

a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments.[45]

This definition is aligned with the OECD definition (given earlier) to ensure international harmonisation and wide acceptance. Similar to the MDR’s risk regulation approach, the AI Act proposes classifying different types of AI systems according to their risk profiles: unacceptable risk, high risk, limited risk and minimal risk. Before reaching the market, high-risk AI systems, such as medical devices, will be subject to strict obligations requiring a high level of robustness, security and accuracy. These obligations focus on risk assessment and mitigation systems, high-quality data governance of datasets, traceability, detailed documentation to authorities and information to users, and human oversight to minimise risk.[46]

A high-risk AI system would need to involve a notified body to carry out a conformity assessment and comply with AI requirements. In certain cases where multiple notified bodies are involved, coordination and cooperation between notified bodies will be required.[47] AIeMD are considered high-risk systems, identified as being a ‘safety component of a product or system’, and are subject to third-party ex-ante conformity assessment.[48] As discussed in the previous section, this specifically applies to SaMD, which requires the involvement of a notified body in the conformity assessment process under the MDR.

The AI Act would also require that if ‘substantial’ changes occur over the AI system’s lifecycle, the system would need to undergo another conformity assessment.[49] These conformity obligations in relation to regulating AI/ML in SaMD would run in parallel to those required under the MDR discussed above. Rules in the AI Act and the MDR would be complementary and mutually supportive. However, having said this, to date it is still unclear whether there will be a related update to the MDR or a coordinated response to the AI Act. It is also unclear whether the overlapping obligations could be waived to ease the administrative burden or whether they would simply be additive. In these respects, the uncertain and complex role of notified bodies requires clarification; for example, it is unclear whether the notified bodies under the MDR and the AI Act would be the same bodies or different departments within them.

Assessing the Effectiveness of the Legal Instruments Governing AIeMD: Measuring Interpretability/Explainability

The transparency of AI should be a legal principle and a way of thinking that underpins access to information and accountability.[50] Due to the opaque and black-box nature of AI systems, a level of transparency and explainability has been expected to demarcate liability and to redress bias, discrimination and other undesirable consequences. Transparency in relation to the relevant data of AIeMD would improve the usefulness, safety and quality of research by enabling relevant stakeholders to learn from the successes and failures of products.[51] Hence, the interpretability and explainability features for image recognition of high-risk AI systems are major regulatory requirements in the AI Act.[52] Explainability refers to an ex-ante explanation of an artificial neural network’s functionality and explanations of the decision taken, such as the rationale, the weighting and the rules.[53] Interpretability is the broader concept that includes explainability. To put it differently, interpretability is the end goal for transparency, while explainability is the tool for interpretability.[54] The transparency obligation,[55] in particular, would require high-risk AI systems to be designed and developed in a way that ensures the operation is sufficiently transparent to enable users to interpret the system’s output.[56] In addition, in order to conduct ex-post surveillance after product deployment, providers of high-risk AI systems would be obliged to report and share information on serious incidents.[57]

The logic and factors used for predictions, recommendations or decisions in an AI system should be transparent and disclosed as meaningful information. However, while important progress has been made in relation to explainability/interpretability, meeting this requirement is particularly challenging with ML systems based on deep learning. Technical standards for measuring explainability are still at an early stage of development.[58] Here, we prefer to consider interpretability/explainability rather than just explainability, noting that this is context-sensitive and that moving an interpretable AI model from one context to another may mean it becomes non-interpretable. Transparency requirements for AI/ML in healthcare would depend on the individual scenario: whether the system is fully autonomous or whether decisions are ultimately made by clinicians or patients.[59] Scholars have proposed various explainability models with a view to integrating and mapping distributed data in high-dimensional spaces.[60] But the lack of standards, guidelines and clarity in interpreting and applying existing rules in each given case is a key issue for AIeMD. In particular, there are no clear and objective standards or benchmarks for measuring transparency obligations regarding explainability.[61]
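To make concrete what an ‘explanation of the weighting of factors’ might look like in practice, the following sketch is illustrative only: it computes a simple permutation-based importance score for a generic prediction function. The model, feature values and threshold are hypothetical, and such a score is only one candidate among many possible explainability measures; it is not drawn from any regulatory standard.

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Illustrative permutation importance: how much does accuracy drop
    when one input feature is shuffled? Larger drops suggest the feature
    carries more weight in the model's decisions."""
    rng = random.Random(seed)
    n_features = len(X[0])

    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(n_features):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            shuffled = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - accuracy(shuffled))
        importances.append(sum(drops) / n_repeats)
    return baseline, importances

# Hypothetical screening model: flags risk when two markers are both elevated.
predict = lambda r: int(r[0] > 0.5 and r[1] > 0.5)
X = [[0.8, 0.7, 0.1], [0.2, 0.9, 0.4], [0.6, 0.6, 0.9], [0.3, 0.2, 0.8]]
y = [1, 0, 1, 0]
baseline, scores = permutation_importance(predict, X, y)
print(f"baseline accuracy={baseline:.2f}", [f"{s:+.2f}" for s in scores])
```

The regulatory difficulty described above is precisely that there is no agreed benchmark specifying which such metric, computed on which data, would satisfy a transparency obligation.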

In theory, the decentralised notified-body governance model in the EU MDR should provide more agility for the shortened AIeMD lifecycle. One of us has written elsewhere that the existing legal framework in the EU is based on a static product safety regime, which is difficult to adapt to continuous learning in ML and to modern agile development approaches.[62] The regulatory requirements are considered too burdensome, making frequent updates and rapid deployment infeasible.[63] Compounding this issue, due to the relative immaturity of the field, there is no clear pathway for identifying who is qualified to navigate these issues (for instance, there is no such thing as a qualified ‘software engineer’ in the US and very few people have chartered engineering status in the UK). As we will see below, notified bodies are playing catch-up with the MDD/MDR shift and are finding it hard to hire people with appropriate ML expertise. As raised at the stakeholder workshop, product developers without a contract with a notified body, or those not in an active approval process, may be put on a long waiting list for communications and clarifications on regulatory issues.[64] In addition, the system is open to inconsistencies between different notified bodies in interpreting the rules, which could undermine the level playing field.

Having reviewed some pertinent aspects of the Regulations applicable to AIeMD in the EU, we move on in the next section to discuss some of the issues and insights of significance to different stakeholders in this area.

III. Gaps and Uncertainties in Regulating AI/ML-Enabled SaMD (AIeMD)

In this section, we identify and discuss selected key themes for accommodating AIeMD within the regulatory ecosystem. Specifically, we examine problems with data, uncertainties in regulation, software adaptivity and regulatory burden, and complexities in deployment and liability. These themes emerged from the stakeholder workshops introduced above, which explored the key issues and possible solutions for regulation, as well as the need for international alignment. The stakeholders attending were drawn from a wide range of backgrounds and sectors and included regulators, industry representatives and academics. As we will see in the discussion, the concerns raised reflect, and are co-extensive with, those currently being discussed in the literature. Given their prominence, therefore, lawmakers and regulators would be well advised to take them into consideration as part of the current set of reforms. In order to make our case, we turn now to examine each of the themes mentioned.

Problems with Data

Securing access to data for research is a key problem. Stakeholders found navigating the landscape of data ownership, intellectual property, informed consent and data protection to be challenging. Often, regulatory and legal restrictions intended to protect the interests of data subjects and other interested individuals were seen to conflict with optimal access to data and datasets. However, legal interventions may be necessary to strike a workable balance between an individual’s interests in data protection and access to that data for research and development purposes.

Once access to data has been secured, the next pertinent issue is the quality of the input data and whether training datasets are adequate for the target population. Key issues include the representativeness, diversity and size of training datasets and how we can characterise them.[65] Currently, there is a great disparity between the socioeconomic and ethnic composition of training datasets and that of target populations, which leads to impaired diagnostic accuracy amongst under-represented groups.[66] As such, AI bias and poor generalisability are the two major issues.[67] Additional complications surround attempts to identify how the ethnic mix of training and target populations affects performance at deployment. Patient demographics indicated by age, gender and ethnicity have been poorly reported, with much information missing.[68] One widely discussed example is a melanoma detection tool that did not work on black skin, as the tool was trained on a dataset of white patients.[69]
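As a minimal sketch of what characterising representativeness could involve, the following compares the composition of a training dataset against the intended target population and flags under-represented groups. All group labels, counts, population shares and the tolerance threshold are invented for illustration; they do not reflect any real dataset or any regulatory benchmark.

```python
def representation_gaps(training_counts, target_shares, tolerance=0.5):
    """Flag groups whose share of the training data falls below a set
    fraction (tolerance) of their share in the target population."""
    total = sum(training_counts.values())
    gaps = {}
    for group, target_share in target_shares.items():
        train_share = training_counts.get(group, 0) / total
        if train_share < tolerance * target_share:
            gaps[group] = (round(train_share, 3), target_share)
    return gaps

# Hypothetical figures for illustration only.
training_counts = {"white": 9200, "black": 310, "asian": 380, "other": 200}
target_shares = {"white": 0.80, "black": 0.08, "asian": 0.09, "other": 0.03}
print(representation_gaps(training_counts, target_shares))
# e.g. {'black': (0.031, 0.08), 'asian': (0.038, 0.09)} -> under-represented groups
```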

To address this, the AI Act sets out principles of data governance and management requirements for high-risk AI systems in relation to design choices, collection, preparation, a prior assessment of quantity and suitability, and possible bias and data gaps.[70] Added to this, the European Health Data Space will be able to mitigate bias issues by breaking down health silos into a single virtual electronic health record and allowing secondary use of data to detect bias in the data used by AIeMD. Member States could require healthcare providers to store both the use data and the output data in the patient’s electronic health record, which would support clinicians’ reporting responsibilities and help minimise bias.[71]

Further, some medical data are highly diverse and change rapidly.[72] In the future, data stability will be a problem if systems are to be updated with live data in operation. How do we deal with variability, validity, generalisability, translatability and deployment when ‘clean’ datasets are deployed in downstream applications? One solution we propose here might be to segment the market into sectors with different regulatory requirements, corresponding to the rate of change in data, rate of change in technological fields, rate of change in practice, and the severity of the consequences of the use of AI. For instance, one possible stratification may relate to how acceptable explainability is for a particular application.[73] Through the de facto decisions of regulators, we are seeing the emergence of a more nuanced process of regulation, with more precautionary approaches where risk is higher and more flexibility in cases where, for example, the explainability of models is high. In the longer term, this may lead to segmentation of the market based on regulatory risk stratification.
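As a rough illustration of what monitoring ‘data stability’ in operation could look like, the sketch below compares the distribution of one input variable in the original training data against a live data stream using a population stability index. The bin edges, readings and the commonly cited alert threshold of roughly 0.2 are illustrative assumptions, not requirements drawn from any regulation discussed in this paper.

```python
import math

def population_stability_index(reference, live, edges):
    """Compare how a variable is distributed in the training (reference)
    data versus live operational data, using fixed bin edges."""
    def shares(values):
        counts = [0] * (len(edges) + 1)
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)

    ref, cur = shares(reference), shares(live)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref, cur))

# Hypothetical blood-marker readings: training data vs. a drifted live stream.
reference = [0.8, 1.1, 0.9, 1.0, 1.2, 0.7, 1.05, 0.95, 1.15, 0.85]
live = [1.4, 1.6, 1.3, 1.5, 1.7, 1.2, 1.45, 1.55, 1.35, 1.65]
psi = population_stability_index(reference, live, edges=[0.9, 1.1, 1.3])
print(f"PSI={psi:.2f}",
      "-> investigate before further model updates" if psi > 0.2 else "-> stable")
```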

Uncertainties in Regulations: Lack of Standards and Types of Required Evidence for Algorithms

In the practice of medicine, safety is not fixed but is often a fluid concept balancing different interests and medical interventions. Safety (and its implementation) is often justified by a risk–benefit analysis, as is seen with medical device regulation and other EU product safety legislation. In turn, the existing approach to regulating medical devices has been based on multi-layered control mechanisms comprising the totality of medical device requirements. These requirements include provisions relating to the mapping of devices’ intended purposes, quality management systems, and post-market surveillance for risk management (as seen in the EU MDR above). In order to manage the risks arising over the full life cycle, products are expected to meet the safety standards or benchmarks set by national and international bodies.[74] Thus, the law recognises that it is impossible and impractical to expect an airtight case for safety and that safety depends on contextual interpretations. A margin of error would be accommodated if the software in question offers significant reductions in mortality, as seen in the example of surgical robots, which have been subject to relatively relaxed regulatory criteria (on the basis that the systems are operated by remote control and remain entirely under the surgeon’s control).[75]

As part of determining risks to safety, AI actors should disclose meaningful information in plain language on the underlying factors and logic used for the prediction, recommendation or decision of the AIeMD. However, one of the principal issues we face for AIeMD is the lack of standardisation and certainty, particularly in relation to developers’ regulatory compliance and transparency obligations regarding the interpretability and explainability of AI systems.[76] Nevertheless, the European Commission’s Assessment List for Trustworthy Artificial Intelligence does at least suggest that traceability, auditability and transparent communication on system capabilities may be required when transparency and explainability are hindered by black-box algorithms.[77] Further, due to a lack of legal capacity, many small and medium enterprises (SMEs) are unsure how to apply the clinical evidence requirements in the MDR and how to evaluate the performance and effectiveness of algorithmic models.

Lastly on this theme, one of the key issues with the EU regulatory landscape is the lack of publicly accessible databases that can be used to evaluate AIeMD. The European Commission database on medical devices (Eudamed) is a repository for information on market surveillance exchanged between national competent authorities and the Commission. However, its use is restricted to national competent authorities, and it is not publicly accessible.[78] Further, in Europe, the public does not have access to review the summaries produced by notified bodies or national authorities.[79] This is relevant to an important issue addressed by stakeholders, who noted that, due to notified bodies’ limited capacity, there is currently a serious delay to market approvals of AIeMD. Participants of the workshops expressed concerns at notified bodies’ lack of capacity and transparency in communication with developers. Communication challenges depend heavily on the stage of the process and the contractual relationship with the notified body. In order to address this problem, it was suggested by some workshop participants that making publicly available more aspects of the technological documentation reviewed by notified bodies (for example, disclosure of clinical evaluation reports, in the way that regulatory submissions to the FDA are in the public domain) would be a way to increase the transparency of approval decisions. It is also critical to take full advantage of the experimental spaces in the regulatory systems, for example, the sandbox piloting scheme set out in the AI Act,[80] to avoid undue delay to approvals. Specifically, new start-ups and SMEs should have priority in gaining access to regulatory sandboxes.[81]

Software Adaptivity and Regulatory Burden

One of the prominent features of AI/ML models is their adaptivity: the capacity to adapt and improve performance as they are exposed to more data. Yet this poses new challenges to the regulatory system and was a constant concern discussed at the stakeholder workshop. The performance of ML algorithms is shaped by the training data, and performance will vary depending on how well the training data characterises the deployment context. Hence, predicting changes in performance can be difficult. ML software requires constant and frequent updates, making regulatory compliance burdensome. The existing regulatory regimes are based on the concept that, once software is approved, it will remain relatively unchanged or, at least, that change cycles will be relatively long. Frequent ML updates therefore violate the assumption, built into the regulatory system as it currently stands, that software does not need to change. It is also difficult to accommodate frequent and incremental change given the rigidity of legal frameworks. To address these challenges, the AI Act makes provision for plans of predetermined changes.[82] A predetermined change refers to an ‘adaptation, refinement or calibration of a device within set boundaries to the characteristics of a specific patient or healthcare setting and within the intended purpose of the device’.[83] Manufacturers may be required to regularly inform regulators and users of technical information as to how the AI learns and changes over time.[84] However, as mentioned above, the time and cost of regulatory compliance is a major problem for companies, especially SMEs.

During the workshops, stakeholders were in broad agreement that regulating the adaptability of AI/ML algorithms should be subject to an optimal balance between safety, efficacy and the rate of (and race for) innovation. As part of this, we suggest there is a need for clarity in identifying a ‘safe harbour’ that incorporates a safety margin and in defining ‘significant’ and/or ‘anticipated’ changes.[85] It is useful, then, to develop a software change protocol that distinguishes between an ‘anticipated change’, which could later be managed by lightweight anticipatory mechanisms, and unanticipated changes, which would involve a more sophisticated and rigorous framework for scrutiny.
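By way of illustration only, the sketch below encodes the kind of ‘safe harbour’ check described above: a pre-specified envelope of acceptable performance bounds and permitted modification types, against which a proposed model update is classified as an anticipated change (handled by a lightweight pathway) or an unanticipated change (triggering fuller reassessment). The field names, metrics, change types and thresholds are hypothetical assumptions, not drawn from any regulator’s template.

```python
from dataclasses import dataclass

@dataclass
class ChangeSpec:
    """A hypothetical predetermined change control envelope ('safe harbour')."""
    min_sensitivity: float
    min_specificity: float
    allowed_changes: set  # e.g. {"retraining_on_new_site_data", "threshold_tuning"}

def classify_update(spec, change_type, sensitivity, specificity):
    """Return 'anticipated' if the update stays inside the pre-specified
    envelope, otherwise 'unanticipated' (requiring fuller reassessment)."""
    within_bounds = (sensitivity >= spec.min_sensitivity
                     and specificity >= spec.min_specificity)
    permitted = change_type in spec.allowed_changes
    return "anticipated" if (within_bounds and permitted) else "unanticipated"

spec = ChangeSpec(min_sensitivity=0.92, min_specificity=0.85,
                  allowed_changes={"retraining_on_new_site_data", "threshold_tuning"})

print(classify_update(spec, "retraining_on_new_site_data", 0.94, 0.88))  # anticipated
print(classify_update(spec, "new_input_modality", 0.95, 0.90))           # unanticipated
print(classify_update(spec, "threshold_tuning", 0.90, 0.88))             # unanticipated: below bound
```

The regulatory question discussed above is, in effect, who sets such an envelope, how wide it may be and what evidence must accompany updates that fall outside it.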

Complexities Following Product Deployment and Liability

Our final theme, which was significant for the stakeholders during the workshops, relates to issues inherent in governing the sheer complexity of AI/ML development. Governing AIeMD typically involves dealing with several moving parts and faces the ‘many hands’ problem, where many people work together to develop AI/ML and many factors are involved in its deployment. The consequence of this is that responsibility for harm caused by its use may be unclear.

The future performance and dependability of AI systems hinge upon issues of human–machine integration—the integration of machine-based processes and solutions generated by AI within the existing ecosystem of technological, social and organisational processes mediated by human actions—and the selection and configuration of various components of hardware and software. Complexity arises from the deployment of AI systems within organisations where multiple elements and actors are drawn into operation, including data, algorithms, software, hardware, designers, operators and users. Data quality, representation and accuracy can be major issues, particularly when pre-trained models developed at a central site are subsequently used in localised deployments.

As indicated, due to the complex chain of custody, stakeholders highlighted that the demarcation of responsibilities and liabilities has been challenging. The AI Act proposal distinguishes the roles and responsibilities of AI developers (providers) and users (deployers),[86] while the MDR sets the manufacturer as the entity responsible for the device and imposes extra obligations concerning instructing users and avoiding user errors.[87] Several means by which this demarcation of responsibilities, and the complexity of AI in general, could be navigated were raised in the discussion. In getting to grips with the governance of AI/AIeMD, the clear demarcation of liability between developers and deployers would first need addressing. In addition to tracing the liability of an individual actor, participants raised the possibility that group responsibility (such as the liability of the software development team) could be explored.[88] It would also be useful to develop AI systems with tracing mechanisms through which the chain of custody and liability can be tracked. Third-party assurance of the safety and effectiveness of AI products—particularly at the juncture of transition from development to deployment—may be useful for phased quality control. In addition to these ex-ante measures aimed at preventing harm, ongoing monitoring and surveillance remain instrumental in ensuring safety over the total lifecycle. Additional safety measures, such as insurance schemes and special compensation funds, should also play an integral part ex post in remedying damage.
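As a purely illustrative sketch of the ‘tracing mechanisms’ mentioned above, the following records who did what at each step of an AIeMD’s chain of custody (data curator, developer, deployer) so that responsibility can later be traced. All organisation names, roles and version identifiers are invented; the record structure is an assumption, not a prescribed format.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class CustodyEvent:
    """One step in an AIeMD's chain of custody, recorded for later audit."""
    actor: str      # organisation or team responsible for this step
    role: str       # e.g. "data_curator", "developer", "deployer"
    action: str     # what was done
    artefact: str   # model or dataset version affected
    timestamp: str

def record(log, actor, role, action, artefact):
    log.append(CustodyEvent(actor, role, action, artefact,
                            datetime.now(timezone.utc).isoformat()))

audit_log = []
record(audit_log, "HospitalTrust-A", "data_curator", "released training extract", "dataset-v3")
record(audit_log, "MedAI Ltd", "developer", "trained and froze model", "model-v1.2")
record(audit_log, "ClinicGroup-B", "deployer", "deployed model to radiology workflow", "model-v1.2")

print(json.dumps([asdict(e) for e in audit_log], indent=2))
```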

Having mapped the key concerns in terms of regulatory challenges and uncertainties, the following section will explore the emerging experimental spaces for regulating AIeMD in the UK.

IV. Experimental Spaces for Regulating AIeMD in the UK

Having reviewed the key difficulties for governing AI as highlighted in the stakeholder workshops in the preceding section, this section will map the pending challenges for the UK in navigating its domestic digital policy in relation to regulating AIeMD, particularly after leaving the EU. We use the term ‘experimental spaces’ to indicate the innovative nature of the regulatory sphere, noting the piloting market pathways, monitoring and surveillance schemes, and ‘sandboxes’ as examples of experimental spaces in the regulatory regime.

Smart and Proportionate Regulation: Examining the Approach in the MHRA Consultation on the Future of Medical Devices and the AI White Paper

Since leaving the EU, the UK Government has signalled a ‘light-touch and pro-innovation’ regulatory regime for the digital economy while ensuring the highest levels of privacy, patient safety and responsible use of data in the health system.[89] Yet these goals are, to some degree, mutually exclusive or at least pull in opposite directions. As noted in section II, the UK currently operates a dual system of medical device regulation, with Northern Ireland governed by the EU MDR. Ostensibly, this may enable the UK to customise a regulatory regime for medicines and medical devices outside the EU. Much of this may be enabled by the recently passed Medicines and Medical Devices Act 2021 (UK). We also noted that while the Act itself does not contain much substantive detail on the future approach to regulating medical devices (as outlined in more detail by Quigley and colleagues in this issue[90]), an indication of this future can be found in the government’s response to the MHRA’s recent public consultation on the future of medical devices regulation.[91] This response, along with the government white paper on AI, provides some detail of what the future regulation of AI in the UK might look like. Of particular interest to us for the purposes of this article is the potentially experimental character of this regulation. As such, we describe and discuss some of these proposals here.

Indications from the MHRA suggest that regulatory reform of the medical device regulations will consider an approach that is agile and adaptable to the safety considerations of the SaMD lifecycle. This is evident from the MHRA’s recently launched initiative to reform the regulatory framework for SaMD and AI as a medical device and the government’s response to the MHRA’s consultation on the regulatory framework for medical devices in the UK.[92] The government response indicates that there is no intention to introduce any additional AIeMD requirements in legislation but instead to focus on the role of standards and guidance building on existing regulation for SaMD.[93] Hence, AIeMD will be regulated as a part of SaMD and will follow a flexible approach to regulation while maintaining a strong emphasis on protecting patients and the public in relation to SaMD cyber security.[94] For example, the MHRA has planned a work programme with several work packages, of which work package 10—Software and AI as a Medical Device Change Programme—proposes to better understand how the opacity of AI impacts safety, effectiveness and quality, and to use this to provide guidance on the form of interpretability that could mitigate these concerns.[95] We would add that clearly articulated minimum standards and risk profiles, acting as a baseline and/or due diligence obligation for regulatory bodies, could inform how notions of accountability and responsibility offer a shared understanding across regulatory bodies. In turn, this could support a robust framework for further risk-based criteria established by standard-setting bodies.

In parallel to these reforms, the UK Government also published the AI White Paper in March 2023, adopting a pro-innovation approach to regulating AI. This is based on the interpretation of five principles: (1) safety, security and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress.[96] It is a sectoral, context-specific approach and is to be complemented by technical standards and alternative assurance schemes. The MHRA is the lead regulator, working alongside other regulators as appropriate. The government hopes the principles-based approach will enable regulation to keep pace with a fast-evolving technology. We can see that, again, the government chose a light-touch approach to regulating AI with increasing reliance on industry standards and guidelines. This echoes the government’s response to the consultation, indicating a preference for using guidance instead of legislation for regulating SaMD or AIeMD.[97] Considering the short product lifecycle of AIeMD and the lengthy law-making process, regulation via legislative statute may not be favourable or optimal. This may also be due to the fact that the MHRA is seriously under-resourced and it is only feasible to govern AIeMD via delegation and co-regulation.[98] At present, the AI Standards Hub is a government initiative for developing AI standards collaboratively with other actors.[99] In the medical devices sector, the British Standards Institution has been drawing up standards for medical AI systems.[100] In this way, the approach recognises the rapid development in the field and that an adaptive response is desirable for smart and agile regulation. As discussed in the ‘Uncertainties in Regulations’ section above, the lack of existing standards is a barrier to AI innovation. We can see that reliance on standards is also a key characteristic of the EU AI Act, in which standards play an essential role in the conformity assessment and the quality management system for high-risk systems.[101]

In line with the aim to deliver proportionate responses and risk-based criteria for establishing when, and what sort of, regulatory intervention is required in guidance or voluntary measures, the White Paper emphasises that ‘regulators [should] focus on high risk concerns rather than hypothetical or low risks associated with AI’.[102] It suggests an initial assessment of AI-specific risks and their potential to cause harm, with reference to an analysis of the values that they threaten if left unaddressed. These values include safety, security, fairness, privacy and agency, human rights, social wellbeing and prosperity.[103] Delivering proportionate responses along with adaptable risk-based criteria for AI applications requires regulatory bodies and government to set out clearly defined priority areas. This would enable the development of dynamic adaptations to regulatory processes and the creation of new processes along with risk-based criteria. This approach is similar to the FDA’s enforcement discretion in relation to AI-based digital health technologies.[104] An example of a dynamic approach adapting existing legal processes is the MHRA’s recent proposal to introduce an obligation for manufacturers to implement measures that are ‘proportionate to the risk class, type of device and the size of the company, to cover any legal liability arising from adverse incidents with medical devices that they place on, or supply to the UK market’.[105] The government’s closer involvement in defining outcome-based regulation and proportionality should further clarify when regulatory change and revision of cross-sectoral principles are required. However, the government’s AI White Paper lacks detail on institutional design and does not describe a full workable regulatory framework.[106] Without clear criteria for regulatory intervention, a more complex contingent regulatory process, such as that proposed, may paradoxically increase uncertainties for regulated organisations. At the same time, independent expert committees such as the Regulatory Horizons Council (RHC), which identifies the implications of technological innovation and provides the government with expert advice on regulatory reform (discussed below), could assist in identifying priority areas.

Currently, the interpretation of the key principles relies on the relevant regulators in sector-specific contexts. The MHRA is the main regulator that will elaborate the principles for regulating medical AI. The White Paper indicates that this light-touch approach will be reviewed in due course. If the relevant regulators cannot interpret the principles in each context and govern AI systems satisfactorily, a new statutory duty on regulators to have due regard to the principles would be introduced.[107] While this wait-and-see approach may be sensible before the field reaches full maturity, we have concerns about the impact on all actors of the overall lack of clear guidance, and about whether the MHRA is well supported for co-regulation and coordination with other relevant regulators and stakeholders while trying to meet the conflicting goals of patient safety, innovation and market competitiveness. We mentioned above, in the ‘Complexities Following Product Deployment and Liability’ section, that the lack of clear duties, responsibility and liability amongst actors is a key problem of regulating AI. Since the MHRA is severely under-resourced, it is uncertain whether it has sufficient capacity to cope with the urgent need to define its and other actors’ mandates and their respective scope of duties in the fast-moving field of AIeMD.[108] The pressing need to strengthen the MHRA’s regulatory capacity was raised by the RHC, and yet, unfortunately, the government has not responded positively to the suggestion. For a rapidly evolving high-technology sector, regulators may not have sufficient capacity to face the regulatory challenges presented. Thus, the traditional top-down approach may need to be replaced or complemented by co-regulation, through which regulators work with industry for effective governance.[109] But while evolving settings call for co-regulation, shifting governance down to the industry level due to limited regulatory capacity may come at the expense of applying standard criteria and maintaining accountability.

Compounding issues for regulation, the proliferation of AI methods in healthcare creates cross-cutting challenges in identifying emergent risks to health and safety. Coordination between regulators would be instrumental for effective and coherent policymaking. The UK RHC has identified a need for a coherent end-to-end regulatory pathway.[110] As the pathway to market of a new AI technology may require interaction with multiple regulators, the government proposes to adopt a multi-regulator sandbox approach, engaging multiple regulators so that innovators can test their new ideas and work with regulators on the application of regulatory frameworks before AI can be put into operation.[111] An example of this is the Digital Regulation Cooperation Forum (DRCF), formed by key regulators (the Competition and Markets Authority, the Information Commissioner’s Office, the Office of Communications and the Financial Conduct Authority) to ensure cooperation on online regulatory matters.[112]

Regulatory sandboxes are temporary schemes that facilitate the development of new regulatory approaches, allowing future policy to be investigated through close interaction with real-world case studies. Testbeds and multi-regulator sandbox schemes are recommended for businesses to navigate the evolving regulatory labyrinth. However, as seen in the FDA’s Pre-Cert Pilot Program, a regulatory sandbox scheme, most of the companies that participated in the pilot were large corporations, with only a small number of small companies and non-profit organisations taking part.[113] Given SMEs’ lack of resources and regulatory capacity (see the ‘Uncertainties in Regulations’ section above), they are already struggling to meet regulatory burdens. To remove this barrier to the market, we suggest start-ups and SMEs should be prioritised for taking full advantage of such sandbox schemes.

The AI White Paper also proposes a central risk function: a new mechanism to coordinate, monitor and adapt the framework as a whole, providing central coordination and oversight between the government and individual regulators for efficient communication in the identification, monitoring and enforcement of AI risks.[114] Similarly, the AI and Digital Regulations Service (formerly known as the Multi-Agency Advisory Service, MAAS) was set up as a cross-regulatory advisory service supporting developers and adopters of AI and digital technologies by providing guidance across the regulatory, evaluation and data governance pathways. It is a multi-agency collaboration between four organisations involved in regulating and evaluating health and social care technologies.[115] It has also sought to harmonise the evidence requirements collected by different regulatory bodies.[116]

As stated, an agile and smart approach to regulating AIeMD would depend on innovative collaboration between multiple regulators and agencies.[117] This includes the need for key institutions to work collaboratively to produce standards and sufficient evidence for monitoring and surveillance.[118] For example, the MHRA and the National Institute for Health and Care Excellence (NICE) collaboratively developed the ‘Evidence Standards Framework’ (ESF) for digital health technologies, which can be used for the cost–benefit assessment of digital health technologies.[119] A range of health service governance processes would also buttress the formal regulatory system. Such processes include the cost–benefit appraisals required for medical interventions by NICE, the significant distributed governance exercised by hospital trusts in procuring and implementing systems, and the formal and informal scrutiny exercised by clinical specialists and their professional bodies.

Added to this, AI and health regulators will also need to engage human rights departments to identify, define and address AI bias, disparities and inequalities. The AI White Paper suggests that regulators will need to develop and publish descriptions and illustrations of fairness and human rights protection that apply to AI systems.[120] It also notes that the Information Commissioner’s Office will be included in cross-agency discussions, particularly for data governance issues. The purpose will be to provide guidance on legal and governance issues relating to manufacturers’ and regulators’ access to anonymised patient data for safety monitoring and for optimising the performance of AIeMD (see the ‘Problems with Data’ section above).[121] However, the powers, duties, relationships and dynamics between these institutions require further clarification and coordination in order to achieve a better, holistic approach to regulation.

One of the possible areas for inconsistent or varying policymaking lies in the way regulators interpret the experimental spaces for innovation via early-to-market schemes. For example, an option of early deployment of AIeMD, while evidence is still being gathered, was suggested by the RHC via a ‘provisional registration’ mechanism led by the MHRA and an ‘early deployment route’ within NICE.[122] The government has a few proposals in place on this, including the Medical Device Single Audit Program, the Domestic Assurance route and an Innovative Devices Access Pathway.[123] Additional experimental approaches have been considered but will require further investigation. These include the ‘airlock classification rule’, which would apply to SaMD with an incomplete risk profile but has been put on hold.[124] Similarly, a pre-market approvals route, which would permit the deployment of a SaMD for a specified period for specific use cases, has been considered but remains subject to further clarification of its limitations and scope of application.[125] The UK Government is keeping these options open and will seek future opportunities to clarify regulatory innovations like these.[126]

Further, the scope of regulatory activity in different jurisdictions and its impact on international recognition of AIeMD regulations need to be identified. The RHC proposes Mutual Recognition Agreements, which would provide a fast-tracked or streamlined regulatory approval pathway for devices approved in recognised countries to promote ‘international regulatory cooperation’.[127] The MHRA intends to develop ‘access routes that build on synergies with both EU and wider global standards’ to address this.[128] Crucially, these proposals to recognise and engage with AI standards developed outside the UK indicate that the scope of distinct jurisdictional regulatory activity will require capacity-building for both regulatory bodies and the medical device industry. In particular, streamlining the regulatory approval process for devices approved in other countries requires a delicate balance between standardising processes, such as post-market surveillance for specific device types, and maintaining a high degree of safety for patients in relation to emerging technology. Given this, before drawing this article to a close, we turn to examine issues of international alignment.

Towards International Alignment: Do UK Proposals Fall Short?

The medical device industry is an important area where responsible innovation is balanced with robust standard setting, convergence of regulations and market competitiveness. Much of the innovation in this sector will be driven by the deployment of AI. Because of the complexities of AI outlined throughout this paper, there is a need for robust regulation to ensure safety and efficacy. At the same time, however, innovation could be accelerated by strong global regulatory alignment. Hence, how the UK’s approach is aligned with international standards and frameworks will be an important element of its future regulation.

Considering the adaptable nature of AIeMD, entailing changing software development demands on adaptive algorithms and risk profiles for dynamic healthcare environments, regulatory coordination and convergence between standard-setting bodies will be essential to support the UK’s pro-innovation approach.

A shortcoming of the UK AI proposals is that they treat the sector as largely homogeneous. In reality, the proposed context-driven and ‘light-touch’ approach is likely to affect technology developers, adopters, different industries and business models in very different ways. If the aim is to support UK businesses that develop AI, then a separate UK approach may not necessarily help companies that aspire to have an international reach. Additionally, UK-specific regulatory requirements would mean that any UK AI exports will have to comply with those on top of any destination countries’ respective regulatory requirements. Even if the UK approach were ‘lighter’ than that of the EU or US, absent formal recognition (mutual or unilateral) of the other regime(s), it would nevertheless increase compliance burdens.

Lighter regulation aimed solely at lessening the regulatory burden of entry into the domestic market could be made comparatively less burdensome than that in other nations. However, the UK market is small compared to the global market, and regulation that is evidently lighter-touch than international standards is likely to erode public trust. The impact of a specific, less burdensome UK approach could be very different for UK businesses in industrial sectors that do not develop AI tools but benefit from their use, regardless of where in the world these have been developed. For these businesses, a low-regulation environment may be attractive. However, the UK risks becoming a testbed for technologies that legislators and regulators in their home jurisdictions still consider too risky for their citizens. In turn, while a lesser regulatory burden may give the edge to UK businesses and allow them to experiment more and see a steeper increase in productivity, it will come with a much higher reputational risk should these systems then fail. Due to the Brussels and Silicon Valley effects, foreign AI developers are likely to focus their development efforts in any case on systems that comply with the EU and US rules to gain access to these much larger markets. As with the situation for UK-based companies, regulatory fragmentation could then increase their costs in the UK even if the UK system were comparatively light touch.

For these reasons, UK industry has a strong interest in convergent international standardisation. Avenues designed for industry to demonstrate compliance with, and interpret, international standards therefore require further guidance from the UK Government and the MHRA, as well as additional capability in UK Approved Bodies. At one of the AI/ML SaMD workshops mentioned earlier, we focussed on issues of international alignment. Participants were asked to reflect on their experience with regulators in the US, UK and EU, and to consider what the optimal future regulatory design in the UK might be. Specifically, we asked: how should the UK navigate between the US and EU regimes, which are the leading regulatory hegemonies in the Western world? As AIeMD often target the international market, most stakeholders favoured international convergence and reflected that the US-proposed framework for AI/ML-based SaMD provides pragmatic and clear guidelines. The FDA has a tailored regulatory framework. This includes ‘Predetermined Change Control Plans’, which bring together SaMD Pre-Specifications (SPS) and ‘Algorithm Change Protocols’, and sets out a framework for on-market adaptivity.[129] The FDA approach also addresses issues around transparency, explainability, labelling, bias and generalisability of datasets. It is a ‘total product lifecycle’ approach and sets out principles for monitoring the on-market behaviour of AI/ML SaMD within the SPS through real-world performance monitoring.
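To illustrate the logic of on-market adaptivity within pre-specified bounds, the following minimal sketch (in Python) shows how a manufacturer might encode a pre-specified performance envelope and check whether a model update stays within it. The class, field names and thresholds are hypothetical illustrations of our own, not an FDA-defined schema or any manufacturer’s actual implementation.

    from dataclasses import dataclass

    # Hypothetical sketch only: the schema and numbers below are our own
    # assumptions, not an FDA-defined format.

    @dataclass
    class PreSpecification:
        metric: str                  # e.g., sensitivity on the intended-use population
        minimum_acceptable: float    # absolute performance floor
        maximum_degradation: float   # largest drop tolerated relative to the baseline

    def within_pre_specification(baseline: float, updated: float,
                                 spec: PreSpecification) -> bool:
        """Return True if an update stays inside the pre-specified envelope and so,
        in principle, could be managed under a change protocol rather than a new
        pre-market review."""
        if updated < spec.minimum_acceptable:
            return False
        return (baseline - updated) <= spec.maximum_degradation

    # Example: a retrained model whose sensitivity moves from 0.94 to 0.93 against
    # a 0.90 floor and a tolerated drop of 0.02 remains within the envelope.
    spec = PreSpecification("sensitivity", minimum_acceptable=0.90, maximum_degradation=0.02)
    print(within_pre_specification(baseline=0.94, updated=0.93, spec=spec))  # True

The point of the sketch is simply that a pre-specified envelope can be expressed in a machine-checkable form, so that changes falling inside it can be documented and audited without re-opening the whole approval process; the substantive regulatory question of where those bounds should sit remains for the regulator and manufacturer.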

In general, workshop participants expressed a strong preference for taking up the FDA model, particularly in relation to pre-market approvals and market pathways. We can already see evidence of regulatory convergence with the US framework: in 2021, for example, the MHRA published the joint ‘Good Machine Learning Practice’ guiding principles with the FDA and Health Canada, which provide the basis for SaMD development within a medical device quality system.[130] Overall, the UK remains most closely aligned with the EU MDR framework despite Brexit, but the pre-market aspects of the UK approach to AIeMD align more closely with US thinking than with EU thinking.

The MHRA consultation on the future regulation of medical devices reflected a similar preference for international alignment. For example, the IMDRF SaMD classification framework will be adopted to amend the UK medical device regulations.[131] In relation to pre-market requirements, the UK Government has decided to introduce essential requirements that mirror the EU MDR, focusing on themes including cybersecurity, data protection, privacy and confidentiality.[132] In terms of post-market requirements, ‘predetermined change control plans’ based on the FDA framework would be enabled to streamline the processes for managing changes to software.[133] However, the FDA has not provided sufficient guidance on overall post-market surveillance and incident reporting responsibilities. Hence, we suggest the UK still align with the EU MDR mechanisms on post-market surveillance.

In relation to post-market surveillance, the government confirmed that ‘predetermined change control plans’ (PCCPs) would be enabled on a voluntary basis for certain SaMD change management processes.[134] While PCCPs contribute to the effectiveness of post-market surveillance by using real-world evidence to assure software function and performance, reporting ‘significant’ or ‘substantial’ changes could be burdensome. Additionally, owing to technical complexities, in its response to the consultation the government did not mandate a ‘report adverse incident’ link from manufacturers through which an adverse incident could be reported promptly.[135] We consider this a missed opportunity: there is currently a serious lack of reporting due to overcomplicated procedures, and the government should reconsider introducing this mandatory reporting link, together with implementation measures that ease clinicians’ reporting duties. To strengthen the overall effectiveness of post-market surveillance, we suggest the ‘Medical Algorithmic Audit’ be used to conduct robust and systematic error analysis and to examine governance structures at the hospital level.[136] We also recommend the development of technical standards and/or automated approaches to identifying biases and picking up missed cases after the AI system is deployed.
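As a rough indication of what such an automated approach might look like in practice, the following minimal sketch (in Python) compares a deployed system’s error rate across patient subgroups and flags disparities for clinical review, in the spirit of the error analysis stage of a medical algorithmic audit. The case record format, threshold and function names are assumptions of ours for illustration, not a published standard or the audit framework itself.

    import statistics
    from collections import defaultdict

    # Hypothetical post-deployment monitoring sketch: the record format and the
    # disparity threshold are our own assumptions, not a regulatory requirement.

    def subgroup_error_rates(cases):
        """cases: iterable of (subgroup_label, prediction, ground_truth)."""
        errors, totals = defaultdict(int), defaultdict(int)
        for subgroup, prediction, truth in cases:
            totals[subgroup] += 1
            if prediction != truth:
                errors[subgroup] += 1
        return {g: errors[g] / totals[g] for g in totals}

    def flag_disparities(cases, tolerance=0.05):
        """Flag subgroups whose error rate exceeds the overall mean by more than
        `tolerance`, as candidates for clinical review of possible missed cases."""
        rates = subgroup_error_rates(cases)
        mean_rate = statistics.mean(rates.values())
        return [g for g, r in rates.items() if r - mean_rate > tolerance]

    # Example with synthetic post-market records (subgroup, model output, outcome):
    cases = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1),
             ("B", 0, 1), ("B", 0, 1), ("B", 1, 1)]
    print(flag_disparities(cases))  # ['B'] – subgroup B's error rate warrants review

A routine of this kind does not itself resolve questions of liability or reporting duty; it simply makes systematic subgroup comparison cheap enough to run continuously, so that human reviewers are pointed towards the cases and populations most likely to reveal missed diagnoses or bias.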

We can learn some important lessons about robust standard-setting frameworks and risk-based approaches by taking a closer look at AI regulation at the EU level. The AI Act proposal is built around standardising protection, including common criteria regarding prohibited practices and high-risk and limited-risk systems.[137] This echoes the UK’s concern that AI regulation should not be overly occupied with merely theoretical risks. In addition, there is a close interplay between AI regulation in the EU and the US. However, there are important differences between the EU’s proposed AI Act and, for example, the recently introduced US bill, the Algorithmic Accountability Act of 2022.[138] One significant difference centres on the risk profiles of AI systems, with the US bill focusing on regulating ‘critical decision processes’ rather than ‘high-risk AI systems’.[139] Considering these convergent and divergent approaches to regulation, it remains to be seen how the UK’s pro-innovation approach will play out in setting international standards and practices. Whether the UK’s mission of standard setting, including allowing high-risk practices prohibited under the EU AI Act, sets the benchmark for industry will depend on concrete cases that support responsible innovation, and on regulatory stability as an incentive for AI innovation.

V. Conclusions: Looking Forward

Ensuring product safety on the global market involves a delicate balance between innovation, competition and consumer welfare. In this article, we examined some of the challenges regarding AI and ML-enabled SaMDs. After looking at the current and future regulatory landscapes in the EU and UK, we discussed some gaps and uncertainties in this area, with a special focus on the regulatory trajectory in the UK post-Brexit. Our analysis revolved around an examination of selected key themes from recent stakeholder workshops. We highlighted the tensions between safety and competitive innovation. Here, we looked specifically at challenges relating to data and access to data, the lack of appropriate standards currently available, software adaptivity and complexities arising after product deployment (such as tracing responsibility and liability). Our discussion reflected both the views expressed at two recent stakeholder workshops on AIeMD and those in the extant literature.

Following this, in the final substantive section, we explored some emerging institutional and regulatory experimental spaces for regulating AIeMD in the UK. As we identified above, regulatory and institutional recalibration has emerged in the experimental spaces for AI technology uptake and deployment. The responsibilities of various actors will need to be better understood, demarcated and reconfigured within the total product lifecycle approach. The growing experimental space in regulating AIeMD demonstrates that a sound legal infrastructure requires not only clarity and stability but also fluidity and flexibility. As noted in section IV, experimental spaces such as the pilot sandbox scheme for SME inclusion are also critical in translating values and principles into clinical practice.

Due to the multi-faceted nature and variety of AIeMD, integrating government policies and regulations will be a major challenge. Effective coordination among the various regulators will be crucial for a seamless transition into the AI society. In the case of medical devices, formal risk governance sits alongside a more complex clinical governance apparatus. Moving forward, further clarification of the evolving roles, duties and relationships of the various regulators would enhance the efficacy and efficiency of policy implementation.

The UK faces a dilemma. It wants to accept both US and EU regulations, but it must make its own decisions. Compliance with UK regulations may not be accepted by EU or US regulators, so UK producers will be at a greater disadvantage than their US and EU counterparts unless the UK opts for a pure US or EU model. The UK could arrive at an unfortunate situation in which it could import EU- or US-approved products but could not export to either. Previously, the UK was fully aligned with the EU; now, it occupies an uncomfortable ‘halfway house’ in which a dual system of regulation exists between Great Britain and Northern Ireland. The UK has added elements that align more closely with some of the FDA and IMDRF approaches, but this ‘pick-and-mix’ approach means there is no perfect alignment. From a regulatory perspective, this creates a large amount of work for companies engaged in trade either into or out of the UK. While the UK has entered into new trade agreements with other economies,[140] regulatory interoperability between jurisdictions remains a major issue. Moving ahead, regulatory coherence and collaboration will need to be pursued with like-minded trade partners.

Acknowledgements

PL, RW and SA are supported by a grant from the UKRI Strategic Priorities Fund to the UKRI Research Node on Trustworthy Autonomous Systems Governance and Regulation (EP/V026607/1, 2020-2024). PL is also supported by the ESRC Centre for Inclusive Trade Policy (ES/W002434/1, 2022-2026). The analysis was conducted following two workshops: one on the regulation of AI Software as a Medical Device (online, 25th January 2022) and another on Regulating Machine Learning/Artificial Intelligence (ML/AI) in Software as a Medical Device (ML/AIaMD): international alignment (online, 27th April 2022). The workshops were organised by the UKRI Node on the Governance and Regulation of Trusted Autonomous Systems, https://governance.tas.ac.uk/.

We would like to thank the reviewers, Professor Muireann Quigley, Dr Laura Downey, Dr Zaina Mahmoud, Dr Joseph Roberts and Professor Alex Faulkner for comments on a previous draft. All websites were last checked on 23 October 2023.

Bibliography

Abramson, Richard G. “Variability in Radiology Practice in the United States: A Former Teleradiologist’s Perspective.” Radiology 263, no 2 (2012). https://doi.org/10.1148/radiol.12112066.

Aggarwal, Ravi, Viknesh Sounderajah, Guy Martin, Daniel S. W. Ting, Alan Karthikesalingam, Dominic King, Hutan Ashrafian, and Ara Darzi. “Diagnostic Accuracy of Deep Learning in Medical Imaging: A Systematic Review and Meta-Analysis.” npj Digital Medicine 4 (2021): 65. https://doi.org/10.1038/s41746-021-00438-z.

Aisu, Nao, Masahiro Miyake, Kohei Takeshita, Masato Akiyama, Ryo Kawasaki, Kenji Kashiwagi, Taiji Sakamoto, Tetsuro Oshika, and Akitaka Tsujikawa. “Regulatory-approved Deep Learning/Machine Learning-based Medical Devices in Japan as of 2020: A Systematic Review.” PLOS Digit Health 1, no 1 (2022): e0000001. https://doi.org/10.1371/journal.pdig.0000001.

Al-Faruque, Ferdous. “Experts: MDR Transition Delay Needs Clarification, Industry Engagement to Succeed.” Regulatory Focus (blog), January 10, 2023. https://www.raps.org/news-and-articles/news-articles/2023/1/experts-mdr-transition-delay-needs-clarification-i.

Benjamens, Stan, Pranavsingh Dhunnoo, and Bertalan Mesko. “The State of Artificial Intelligence-based FDA-approved Medical Devices and Algorithms: An Online Database.” npj Digital Medicine 3 (2020): 118. https://doi.org/10.1038/s41746-020-00324-0.

Biasin, Elisabetta, Erik Kamenjasevic, and Kaspar Rosager Ludvigsen. “Cybersecurity of AI Medical Devices: Risks, Legislation, and Challenges.” In Research Handbook on Health, AI and the Law, edited by Barry Solaiman and I. Glenn Cohen. Edward Elgar Publishing, forthcoming. https://arxiv.org/abs/2303.03140.

Bienefeld, Nadine, Jens Michael Boss, Rahel Luthy, Dominique Brodbeck, Jan Azzati, Micro Blaser, Jan Willms, and Emanuela Keller. “Solving the Explainable AI Conundrum by Bridging Clinicians’ Needs and Developers’ Goals.” npj Digital Medicine 6 (2023): 94. https://doi.org/10.1038/s41746-023-00837-4.

Braun, Matthias, Patrik Hummel, Susanne Beck, and Peter Dabrock. “Primer on an Ethics of AI-based Decision Support Systems in the Clinic.” Journal of Medical Ethics 47 (2020): e3. http://dx.doi.org/10.1136/medethics-2019-105860.

Cacha, Ignacio H., Judith Sáinz-Pardo Díaz, Maria Castrillo, and Álvaro López García. “Forecasting COVID-19 Spreading Through an Ensemble of Classical and Machine Learning Models: Spain’s Case Study.” Scientific Reports 13 (2023): 6750. https://doi.org/10.1038/s41598-023-33795-8.

Charlesworth, Andrew, Kit Fotheringham, Colin Gavaghan, Albert Sanchez-Graells, and Clare Torrible. “Response to the UK’s March 2023 White Paper ‘A Pro-innovation Approach to AI Regulation’.” June 19, 2023. https://dx.doi.org/10.2139/ssrn.4477368.

Chhaya, Vatsal and Kapil Khambholja. “The SaMD Regulatory Landscape in the US and Europe.” Regulatory Focus (blog), August 2021. https://www.raps.org/RAPS/media/news-images/Feature%20PDF%20Files/21-8_Khambholja-2.pdf.

Chinen, Mark. Law and Autonomous Machines: The Co-evolutions of Legal Responsibility and Technology. Cheltenham: Edward Elgar, 2019.

Clayton, Tony, Graham Spinardi, and Robin Williams. Policies for Cleaner Technology: A New Agenda for Government and Industry. London: Earthscan, 1999.

COCIR. “Artificial Intelligence in EU Medical Device Regulation.” May 2021 https://www.cocir.org/fileadmin/Publications_2021/COCIR_Analysis_on_AI_in_medical_Device_Legislation_-_May_2021.pdf.

Davis, Nicola. “AI Skin Cancer Diagnoses Risk Being Less Accurate for Dark Skin – Study.” The Guardian, November 9, 2021. https://www.theguardian.com/society/2021/nov/09/ai-skin-cancer-diagnoses-risk-being-less-accurate-for-dark-skin-study.

Downey, Laura and Muireann Quigley. “Software as a Medical Device: A Bad Regulatory Fit?” Everyday Cyborgs 2.0 (blog), March 15, 2021. https://blog.bham.ac.uk/everydaycyborgs/2021/03/15/software-as-a-medical-device-a-bad-regulatory-fit/.

Evans, Barbara J. and Frank Pasquale. “Product Liability Suits for FDA-Regulated AI/ML Software.” In The Future of Medical Device Regulation: Innovation and Protection, edited by Glenn I. Cohen, Timo Minssen, W. Nicholson Price II, Christopher Robertson and Carmel Shachar, 22-35. Cambridge University Press, 2022.

Freeman, Karoline, Julia Geppert, Chris Stinton, Daniel Todkill, Samantha Johnson, Aileen Clarke, and Sian Taylor-Philips. “Use of Artificial Intelligence for Image Analysis in Breast Cancer Screening Programmes: Systematic Review of Test Accuracy.” BMJ 374 (2021): n1872. https://doi.org/10.1136/bmj.n1872.

Ghassemi, Marzyeh, Luke Oakden-Rayner, and Andrew L. Beam. “The False Hope of Current Approaches to Explainable Artificial Intelligence in Health Care.” The Lancet Digital Health 3, no 11 (2021): e745–e750. https://doi.org/10.1016/S2589-7500(21)00208-9.

Gilbert, Stephen, Matthew Fenech, Martin Hirsch, Shubhanan Upadhyay, Andrea Biasiucci, and Johannes Starlinger. “Algorithm Change Protocols in the Regulation of Adaptive Machine Learning-Based Medical Devices.” Journal of Medical Internet Research 23, no 10 (2021): e30545. https://doi.org/10.2196/30545 .

Gilbert, Stephen, Stuart Anderson, Martin Daumer, Phoebe Li, Tom Melvin, and Robin Williams. “Learning From Experience and Finding the Right Balance in the Governance of Artificial Intelligence and Digital Health Technologies.” Journal of Medical Internet Research 25 (2023): e43682. https://doi.org/10.2196/43682.

Ibrahim, Hussein, Xiaoxuan Liu, Nevine Zariffa, Andrew D. Morris, and Alastair K. Denniston. “Health Data Poverty: An Assailable Barrier to Equitable Digital Health Care.” Lancet Digital Health 3, no 4 (2021): e260–e265. https://doi.org/10.1016/S2589-7500(20)30317-4.

Jiang, Zhihao, Houssam Abbas, Pieter J. Mosterman, and Rahul Mangharam. “Automated Closed-loop Model Checking of Implantable Pacemakers Using Abstraction Trees.” ACM SIGBED Review 14, no 2 (2017): 15–23. https://doi.org/10.1145/3076125.3076127.

Khan, Saad M., Xiaoxuan Liu, Siddharth Nath, Edward Korot, Livia Faes, Siegfried K. Wagner, Pearse A Keane, Neil J. Sebire, Matthew J. Burton, and Alastair K. Denniston. “A Global Review of Publicly Available Datasets for Ophthalmological Imaging: Barriers to Access, Usability, and Generalisability.” The Lancet Digital Health 3, no 1 (2021): e51–e66. https://doi.org/10.1016/S2589-7500(20)30240-5.

Kiseleva, Anastasiya. “AI as a Medical Device: Is it Enough to Ensure Performance Transparency and Accountability?” European Pharmaceutical Law Review 4, no 1 (2020): 5–16. https://doi.org/10.21552/eplr/2020/1/4.

Kiseleva Anastasiya, Dimitris Kotzinos, and Paul De Hert. “Transparency of AI in Healthcare as a Multilayered System of Accountabilities: Between Legal Requirements and Technical Limitations.” Frontiers in Artificial Intelligence 5 (2022): 879603. https://doi.org/10.3389/frai.2022.879603.

Kunwar, Damini. “Robotic Surgeries Need Regulatory Attention.” The Regulatory Review (blog), January 8, 2020. https://www.theregreview.org/2020/01/08/kunwar-robotic-surgeries-need-regulatory-attention/.

Li, Phoebe, Alex Faulkner, and Nicolas Medcalf. “3D Bioprinting in a 2D Regulatory Landscape: Gaps, Uncertainties and Problems.” Law, Innovation and Technology 12, no 1 (2020): 1–29. https://www.tandfonline.com/doi/abs/10.1080/17579961.2020.1727054.

Liu, Xiaoxuan, Ben Glocker, Melissa M McCradden, Marzyeh Ghassemi, Alastair K. Denniston, and Lauren Oakden-Rayner. “The Medical Algorithmic Audit.” Lancet Digital Health 4 (2022): e384–e397. https://doi.org/10.1016/S2589-7500(22)00003-6.

Lugard, Mauritis, Josefine Sommer, and Anouchka Hoffmann. “Understanding What Constitutes ‘Significant Changes’ in Design and Intended Purpose of Medical Devices in the EU.” SIDLEY (blog), February 2020. https://www.sidley.com/en/insights/publications/2020/02/understanding-what-constitutes-significant-changes-design-intended-purpose-of-medical-devices-eu#:~:text=For%20manufacturers%20of%20medical%20devices,requirements%2C%20regardless%20of%20any%20potential.

May, Mike. “Eight Ways Machine Learning is Assisting Medicine.” Nature Medicine 27 (2021): 2–3. https://doi.org/10.1038/s41591-020-01197-2.

Minssen, Timo, Sara Gerke, Mateo Aboy, Nicholson Price, and Glenn Cohen. “Regulatory Responses to Medical Machine Learning.” Journal of Law and the Biosciences 7, no 1 (2020): 1–18. https://doi.org/10.1093/jlb/lsaa002.

Mökander, Jakob, Prathm Juneja, David Watson and Luciano Floridi. “The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: What Can They Learn From Each Other?” Minds and Machines 32 (2022): 751–758. https://doi.org/10.1007/s11023-022-09612-y.

Mourby, Miranda, Katharina Ó Cathaoir, and Catherine Bjerre Collin. “Transparency of Machine-Learning in Healthcare: The GDPR and European Health Law.” Computer Law & Security Review 43 (2021): 105611. https://doi.org/10.1016/j.clsr.2021.105611.

Payrovnaziri, Seyedeh Neelufar, Zhaoyi Chen, Pablo Rengifo-Moreno, Tim Miller, Jiang Bian, Jonathan H. Chen, Xiuwen Liu and Zhe He. “Explainable Artificial Intelligence Models Using Real-World Electronic Health Record Data: A Systematic Scoping Review.” Journal of the American Medical Informatics Association 27, no 7 (2020): 1173–1185. https://doi.org/10.1093/jamia/ocaa053.

Quigley, Muireann and Laura Downey. “Living in a Material World? Regulatory Challenges of Software as a Medical Device.” Draft working paper shared with the authors.

Quigley, Muireann, Laura Downey, Zaina Mahmoud, and Jean McHale. “The Shape of Medical Devices Regulation in the United Kingdom? Brexit and Beyond.” Law, Technology and Humans 5, no 2 (2023). 21-42. https://doi.org/10.5204/lthj.3102.

Renda, Andrea and Alex Engler, “What’s in a Name? Getting the Definition of Artificial Intelligence Right in the EU’s AI Act.” CEPS EXPLAINER (blog), February 22, 2023. https://www.ceps.eu/ceps-publications/whats-in-a-name/.

Roberts, Huw, Josh Cowls, Emmie Hine, Francesca Mazzi, Andreas Tsamados, Mariarosaria Taddeo, and Luciano Floridi. “Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US.” Science and Engineering Ethics 27, no 68 (2021). https://doi.org/10.1007/s11948-021-00340-7.

Schneeberger, David, Karl Stoeger, and Andreas Holzinger. “The European Legal Framework for Medical AI.” 4th International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), August 2020, Dublin, Ireland.

Stoger, Karl, David Schneeberger, and Andreas Holzinger. “Medical Artificial Intelligence: The European Legal Perspective.” Communications of the ACM 64, no 11 (2021): 34–36. https://doi.org/10.1145/3458652.

Smith, Helen and Kit Fotheringham. “Exploring Remedies for Defective Artificial Intelligence Aids in Clinical Decision-making in Post-Brexit England and Wales.” Medical Law International 22, no 1 (2022). https://doi.org/10.1177/09685332221076124.

Solaiman, Barry and Mark G. Bloom. “AI, Explainability, and Safeguarding Patient Safety in Europe: Towards a Science-Focused Regulatory Model.” In The Future of Medical Device Regulation: Innovation and Protection, edited by Glenn I. Cohen, Timo Minssen, W. Nicholson Price II, Christopher Robertson, and Carmel Shachar, 91-102. Cambridge University Press, 2022.

Sørensen, Knut H. “Learning Technology, Constructing Culture: Sociotechnical Change as Social Learning.” STS Working Paper 18/96. Trondheim: Norwegian University of Science and Technology, 1996.

Spinardi, Graham and Robin Williams. “Environmental Innovation in Refining and Chemicals.” In Ahead of the Curve: Cases of Innovation in Environmental Management, edited by Ken Green, Peter Groenwegen, and Peter S. Hofman, 165–192. Dordrecht/Boston/London: Kluwer Academic, 2001.

Vayena, Effy, Alessandro Blasimme, and Glen Cohen. “Machine Learning in Medicine: Addressing Ethical Challenges.” PLoS Medicine 15, no 11 (2018): e1002689. https://doi.org/10.1371/journal.pmed.1002689.

Vokinger, Kerstin N., Thomas J. Hwang, and Aaron S. Kesselheim. “Lifecycle Regulation and Evaluation of Artificial and Machine Learning-Based Medical Devices.” In The Future of Medical Device Regulation: Innovation and Protection, edited by Glenn I. Cohen, Timo Minssen, W. Nicholson Price II, Christopher Robertson, and Carmel Shachar, 13-21. Cambridge University Press, 2022.

Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation.” International Data Privacy Law 7, no 2 (2017): 76–99. https://doi.org/10.1093/idpl/ipx005.

Wellnhofer, Ernst. “Real-World and Regulatory Perspectives of Artificial Intelligence in Cardiovascular Imaging.” Frontiers in Cardiovascular Medicine 9 (2022): 890809. https://doi.org/10.3389/fcvm.2022.890809.

Westman, Nicole. “Autonomous X-ray-analyzing AI is Cleared in the EU.” The Verge (blog), April 5, 2022. https://www.theverge.com/2022/4/5/23011291/imaging-ai-autonomous-chest-xray-eu-fda.

Primary Legal Materials

European Union

European Commission, The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Self Assessment. (2020).

Council Directive 90/385/EEC of 20 June 1990 on the Approximation of the Laws of the Member States Relating to Active Implantable Medical Devices.

Medical Device Coordination Group, Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR, MDCG 2019-11, October 2019.

Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on Medical Devices, Amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and Repealing Council Directives 90/385/EEC and 93/42/EEC, OJ L 117.

Regulation 2016/679 on the Protection of Natural Persons With Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation).

Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts – Presidency Compromise Text, Brussels, 29 November 2021.

Amendments Adopted by the European Parliament on 14 June 2023 on the Proposal for a Regulation of the European Parliament and of the Council on Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD).

International Standards

IEEE. Standard for Transparency of Autonomous Systems. IEEE 7001-2021.

IMDRF/AIMD WG (PD1)/N67:2021. Machine Learning-enabled Medical Devices—A Subset of Artificial Intelligence-enabled Medical Devices: Key Terms and Definitions. Proposed document. September 16, 2021.

International Medical Device Regulators Forum (IMDRF). Software as a Medical Device (SaMD): Key Definition. December 9, 2013. https://www.imdrf.org/sites/default/files/docs/imdrf/final/technical/imdrf-tech-131209-samd-key-definitions-140901.pdf.

OECD. Recommendation of the Council on Artificial Intelligence. OECD/LEGAL/0449. 2022. https://oecd.ai/en/ai-principles.

United Kingdom

British Standards Institute. Application of BS EN ISO 14971 to Machine Learning in Artificial Intelligence – Guide. BS/AAMI 34971:2023. (BSI, 2023).

Department for Digital, Culture, Media and Sport. UK Digital Strategy 2022. (UK DCMS, 2022). https://www.gov.uk/government/publications/uks-digital-strategy/uk-digital-strategy.

Department for Science, Innovation and Technology. A Pro-innovation Approach to AI Regulation. (Department for Science, Innovation and Technology, 2023).

House of Commons Library. Progress on UK Free Trade Agreement Negotiations. July 24, 2023.

Medicines and Healthcare products Regulatory Agency, US Food & Drug Administration, and Health Canada. Good Machine Learning Practice for Medical Device Development: Guiding Principles 2021.

Medicines and Medical Devices Act 2021 (UK).

Medicines and Healthcare products Regulatory Agency. Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom. (UK MHRA, 2022).

Medicines and Healthcare products Regulatory Agency. Software and AI as a Medical Device Change Programme – Roadmap. (UK MHRA, 2023).

National Institute for Health and Care Excellence. Evidence Standards Framework (ESF) for Digital Health Technologies. https://www.nice.org.uk/about/what-we-do/our-programmes/evidence-standards-framework-for-digital-health-technologies.

Office for Product Safety and Standards. Study on the Impact of Artificial Intelligence on Product Safety 2021. (Office for Product Safety and Standards, 2021).

Regulatory Horizons Council (RHC). Medical Devices Regulation 2021. (UK RHC, 2021).

Regulatory Horizons Council (RHC). The Regulation of Artificial Intelligence as a Medical Device 2022. (UK RHC, 2022).

UK Government. HM Government Response to Sir Patrick Vallance’s Pro-Innovation Regulation of Technologies Review: Digital Technologies. (2023).

United States

US Food and Drug Administration. Action Plan on Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device 2021. (US FDA, 2021).

US Food and Drug Administration. Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions: Draft Guidance for Industry and Food and Drug Administration Staff 2023. (US FDA, 2023).


*Email: [1].

[1] Benjamens, “The State,” 3–4.

[2] Gilbert, “Algorithm Change.”

[3] Minssen, “Regulatory Responses”; Vayena, “Machine Learning in Medicine.”

[4] Renda, “What’s in a Name?”

[5] OECD, Recommendation of the Council on Artificial Intelligence, I.

[6] May, “Eight Ways Machine Learning is Assisting Medicine.”

[7] The subsequent issue is how to define ‘significant’ change. Lugard, “Understanding What Constitutes ‘Significant Changes’.”

[8] Aggarwal, “Diagnostic Accuracy.”

[9] Jiang, “Automated Closed-loop Model.”

[10] Solaiman, “AI, Explainability, and Safeguarding Patient Safety in Europe,” 93.

[11] Westman, “Autonomous X-ray-Analyzing AI is Cleared in the EU.”

[12] UK Regulatory Horizons Council (RHC), The Regulation of Artificial Intelligence as a Medical Device, Annex B.

[13] US FDA, Marketing Submission Recommendations.

[14] Aisu, “Regulatory-approved Deep Learning.”

[15] Li, “3D Bioprinting.”

[16] Wellnhofer, “Real-World and Regulatory Perspectives”; UK RHC, The Regulation of Artificial Intelligence as a Medical Device; US FDA, Action Plan.

[17] As evidenced in a recent BMJ article contrasting experimental evidence from Google. See Freeman, “Use of Artificial Intelligence.”

[18] IMDRF/AIMD WG (PD1)/N67:2021, Machine Learning-enabled Medical Devices. The ITU/WHO Focus Group on Artificial Intelligence for Health (FG-AI4H) aims to establish a standardised assessment framework for the evaluation of AI-based methods for health, diagnosis, triage or treatment decisions: https://www.itu.int/en/ITU-T/focusgroups/ai4h/Pages/default.aspx.

[19] https://www.pmda.go.jp/english/rs-sb-std/sb/subcommittees-3rd/0024.html.

[20] The EMA’s role in medical devices is small, but slowly expanding. It has a small role in AI regulation in very narrow circumstances: https://ec.europa.eu/growth/single-market/european-standards/harmonised-standards/medical-devices_en.

[21] Gilbert, “Learning From Experience.”

[22] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom.

[23] Quigley, “The Shape of Medical Devices Regulation in the United Kingdom?”; Downey, “Software as a Medical Device.”

[24] EU Medical Device Regulation (MDR).

[25] EU AI Act.

[26] Note that we use the term ‘AI-enabled medical devices’ and the abbreviation ‘AIeMDs’ throughout this article, departing from other common abbreviations used in this field. The IMDRF uses the term ‘AIMD’ (‘AI Medical Device’). That term has disambiguation issues with the Medical Device Directive. The UK MHRA uses the term ‘AIaMD’ (‘AI as Medical Device’) in its response to the consultation. That term has limitations and has been criticised, as the AI may not be central to the medical device’s functionality. While the use of yet another term risks the introduction of more acronyms, the current abbreviations are limited in their appropriateness and are not yet established in the peer-reviewed scientific literature. Therefore, we think it sensible to use our more precise terminology.

[27] For the details of this, see Quigley, “The Shape of Medical Devices Regulation in the United Kingdom?”

[28] EU Council Directives 90/385/EEC, 93/42/EEC and 98/79/EC.

[29] Downey, “Software as a Medical Device”; Quigley, “Living in a Material World?”

[30] Vokinger, “Lifecycle Regulation.”

[31] Office for Product Safety and Standards, Study on the Impact of Artificial Intelligence on Product Safety; Evans, “Product Liability Suits.”

[32] Biasin, “Cybersecurity of AI Medical Devices.”

[33] Schneeberger, “The European Legal Framework for Medical AI.”

[34] Stoger, “Medical Artificial Intelligence.”

[35] Vokinger, “Lifecycle Regulation.”

[36] MDR, Article 2(30). See also Quigley, “Living in a Material World?”

[37] Kiseleva, “AI as a Medical Device.”

[38] COCIR, “Artificial Intelligence in EU Medical Device Regulation.”

[39] Khan, “A Global Review.”

[40] Gilbert, “Learning From Experience.”

[41] Vokinger, “Lifecycle Regulation,” 19. See section III on software adaptivity. See also US FDA, Marketing Submission Recommendations.

[42] Department for Digital, Culture, Media and Sport, UK Digital Strategy 2022.

[43] Proposal for an AI Regulation (Artificial Intelligence Act).

[44] Proposal for an AI Regulation (Artificial Intelligence Act), Recital 6.

[45] The revised EU AI Act adopted by the European Parliament on 14 June 2023, Article 3.

[46] EU AI Act, Article 14(1). In many standards, the idea is to use ALARP (as low as reasonably practicable). Draft AI Regulation, Chapter 2, Articles 8–15. See European Commission, “Regulatory Framework Proposal on artificial intelligence,” https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

[47] MDR, Article 49.

[48] EU AI Act, Preamble 28, Articles 3.14 and 6.2.

[49] EU AI Act, Article 43.4. ‘Substantial modification’ means a change to the AI system that affects the requirements in regulatory compliance or a modification to the intended purpose (Article 3.23). The predetermined change protocol may benefit from the ‘substantial equivalence’ determination by the FDA. The FDA provides an expedient path for a new device to enter the market by showing substantial equivalence with a referenced device that is already on the market. ‘Substantial equivalence’ means that the intended use of a new medical device is substantially equivalent to a predicate device. Similarities and differences between the new and predicate devices need to be demonstrated to show the essential nature of two devices are equivalent.

[50] Kiseleva, “Transparency of AI,” 5.

[51] Vokinger, “Lifecycle Regulation,” 20.

[52] Vayena, “Machine Learning in Medicine”; Payrovnaziri, “Explainable Artificial Intelligence Models.”

[53] Wachter, “Why a Right,” 78.

[54] Kiseleva, “Transparency of AI,” 6.

[55] EU AI Act, Articles 13(1) and 52.

[56] EU AI Act, Article 13.

[57] EU AI Act, Articles 3(44)(c) and 62.

[58] Solaiman, “AI, Explainability, and Safeguarding Patient Safety in Europe.”

[59] Mourby, “Transparency of Machine-Learning in Healthcare.”

[60] Holzinger, “What Do We Need.”

[61] Ghassemi, “The False Hope.”

[62] Gilbert, “Algorithm Change.”

[63] Gilbert, “Algorithm Change.”

[64] Al-Faruque, “Experts.”

[65] Vayena, “Machine Learning in Medicine”; Braun, “Primer.”

[66] Ibrahim, “Health Data Poverty,” 1.

[67] UK RHC, The Regulation of Artificial Intelligence as a Medical Device.

[68] Khan, “A Global Review.”

[69] Davis, “AI Skin Cancer.”

[70] EU AI Act, Article 10(2).

[71] COCIR, “Artificial Intelligence in EU Medical Device Regulation,” 23. See European Health Data Space, https://health.ec.europa.eu/ehealth-digital-health-and-care/european-health-data-space_en.

[72] One example is attempts to model COVID-19 transmission, which floundered as variants emerged and further mutated and as regulations and populations behaviours changed. See, e.g., Cacha, “Forecasting COVID-19.” See also Abramson, “Variability.”

[73] Bienefeld, “Solving the Explainable.”

[74] Internationally, there are ISO and IEEE initiatives on standard setting for AI-related technologies. In the UK, the AI Standard Hub is the responsible agency for setting standards for AI-related systems.

[75] Kunwar, “Robotic Surgeries.”

[76] Solaiman, “AI, Explainability, and Safeguarding Patient Safety in Europe.” See also IEEE, Standard for Transparency of Autonomous Systems.

[77] European Commission, The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Self Assessment.

[78] Vokinger, “Lifecycle Regulation,” 18.

[79] Vokinger, “Lifecycle Regulation,” 20.

[80] EU AI Act, Preamble 72 and Articles 53–55.

[81] EU AI Act, Article 55.

[82] EU AI Act, Articles 13(c) and 43(4).

[83] COCIR, “Artificial Intelligence in EU Medical Device Regulation,” 9.

[84] COCIR, “Artificial Intelligence in EU Medical Device Regulation,” 20.

[85] ‘Significant risk’ means a risk that is significant in terms of its severity, intensity, probability of occurrence, duration of its effects and ability to affect an individual, a plurality of persons or a particular group of persons.

[86] EU AI Act, Articles 3 and 29 (on providers’ and deployers’ responsibilities).

[87] MDR, Article 10.

[88] See also Chinen, Law and Autonomous Machines, 77.

[89] Department for Digital, Culture, Media and Sport, UK Digital Strategy 2022.

[90] Quigley, “The Shape of Medical Devices Regulation in the UK?”

[91] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom.

[92] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom.

[93] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom, section 65.2; UK RHC, The Regulation of Artificial Intelligence as a Medical Device, 37.

[94] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom, section 64.2.

[95] UK MHRA, Software and AI.

[96] Department for Science, Innovation and Technology, A Pro-innovation Approach to AI Regulation.

[97] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom, Ch. 10.

[98] UK RHC, The Regulation of Artificial Intelligence as a Medical Device, 33.

[99] The AI Standards Hub is led by the Alan Turing Institute in partnership with the British Standards Institution and the National Physical Laboratory. It is supported by the UK Government through the DCMS Digital Standards team and the Office for AI.

[100] British Standards Institute, Application of BS EN ISO 14971.

[101] EU AI Act, Articles 17 and 40.

[102] Department for Science, Innovation and Technology, A Pro-innovation Approach to AI Regulation, Executive Summary.

[103] Department for Science, Innovation and Technology, A Pro-innovation Approach to AI Regulation, para. 24.

[104] Gilbert, “Learning from Experience.”

[105] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom, 22. See also Smith, “Exploring Remedies.”

[106] Charlesworth, “Response to the UK’s March 2023 White Paper.”

[107] Department for Science, Innovation and Technology, A Pro-innovation Approach to AI Regulation, section 3.

[108] UK RHC, The Regulation of Artificial Intelligence as a Medical Device, 33.

[109] Works on clean technology have suggested that mutual governance needs to be supported by strict regulatory requirements and should not be seen as zero sum: Clayton, Policies for Cleaner Technology; Spinardi, “Environmental Innovation.”

[110] UK RHC, The Regulation of Artificial Intelligence as a Medical Device, Recommendation 6.

[111] UK Government, HM Government Response to Sir Patrick Vallance, Recommendation 1.

[112] Digital Regulation Cooperation Forum, https://www.gov.uk/government/collections/the-digital-regulation-cooperation-forum.

[113] Gilbert, “Learning From Experience,” 6.

[114] Department for Science, Innovation and Technology, A Pro-innovation Approach to AI Regulation, section 3.3.1.

[115] This includes the National Institute for Health and Care Excellence (NICE), UK MHRA, Health Research Authority and Care Quality Commission.

[116] UK RHC, The Regulation of Artificial Intelligence as a Medical Device, 28.

[117] The main regulators and agencies in the UK are the MHRA, National Health Service (NHS) Transformation Directorate, NICE, National Institute for Health and Care Research (NIHR), AI Lab and AI Council.

[118] Including the NHS; regulatory agencies such as the MHRA, Care Quality Commission, NICE, and Health Research Authority; and health institutions planning to deploy AI technologies.

[119] NICE, Evidence Standards Framework.

[120] Department for Science, Innovation and Technology, A Pro-innovation Approach to AI Regulation. The legal instruments relevant to the principle of fairness are: the Equality Act 2010 (UK), General Data Protection Regulation (GDPR) and Data Protection Act 2018 (UK).

[121] UK RHC, The Regulation of Artificial Intelligence as a Medical Device, Recommendation 9.

[122] UK RHC, The Regulation of Artificial Intelligence as a Medical Device, 45.

[123] See Quigley, “The Shape of Medical Devices Regulation in the UK?”

[124] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom, section 61.

[125] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom, section 73.2.

[126] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom.

[127] UK RHC, Medical Devices Regulation Report, 24.

[128] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom, 6.

[129] The FDA explicitly refers to the IMDRF: https://www.imdrf.org/. See US FDA, Action Plan.

[130] UK MHRA, Good Machine Learning Practice.

[131] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom, Ch. 10.

[132] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom, Annex I, GSPR 17.

[133] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom, section 63.2.

[134] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom, section 63.2.

[135] UK MHRA, Government Response to Consultation on the Future Regulation of Medical Devices in the United Kingdom, section 63.2.

[136] Liu, “The Medical Algorithmic Audit.”

[137] Roberts, “Achieving a ‘Good AI Society’,” 68.

[138] Mökander, “The US Algorithmic Accountability Act.”

[139] Mökander, “The US Algorithmic Accountability Act.”

[140] UK House of Commons Library, Progress on UK Free Trade Agreement Negotiations.


AustLII: Copyright Policy | Disclaimers | Privacy Policy | Feedback
URL: http://www.austlii.edu.au/au/journals/LawTechHum/2023/23.html