Described as ‘the first ever legal framework on AI,’ the proposed AI Act is certainly ambitious in scope and would be a game-changer if passed in its current form.
Among the many provisions translating questions of ‘AI ethics’ into black-letter law, the most interesting are perhaps those governing ‘high-risk’ AI systems, a category that includes software constituting a medical device.
Following the Philips France judgment, standalone software can be a medical device where it is intended for medical purposes, even if it does not act directly on the human body. Much AI developed to assist clinical decision-making (such as software that scans MRI images to help triage patients) will therefore count as a medical device, and so fall within the new regime for high-risk AI systems.
The implications for medical AI in the EU will be significant. Requirements for how AI is developed, validated, overseen, explained (and even how its evolution is anticipated once it is used on real-world data) will become law, not just guidance. Transparency obligations under Article 13, in addition to those already found in data protection law, will make it even harder for AI that poses a risk to human health, safety or fundamental rights to operate as a complete ‘black box.’
The UK will be significantly out of step if it does not implement something similar. As more substantial post-Brexit legislation takes shape in the EU, questions will grow over how long UK regulatory alignment can last. At present, the UK seems on track to receive an adequacy decision from the Commission, allowing it to continue operating within a free(r) market for personal information. AI could prove an area of divergence in data governance, unless something mirroring the scope and rigour of the AI Act is attempted here.