A new article by HALL researcher Anastasiya Kiseleva, co-authored with Dimitris Kotzinos and Paul De Hert, has been published in the computer science journal Frontiers in Artificial Intelligence and is available open access. The authors combine the requirements of two legal frameworks (informed consent and medical devices) to build a system of AI transparency in healthcare. The system is based on an accountability methodology and considers the roles of all involved actors: patients, healthcare providers, and AI developers. The key points suggested and discussed:
- We should agree on a common, interdisciplinary taxonomy for AI transparency.
- To do that, we first need to look at transparency as a legal concept both within and outside the AI context. This way, the rules for AI transparency arrive as an invited guest on the existing legal scene, not as an alien to it.
- Then we should all finally agree on how to use the terms ‘explainability’, ‘interpretability’ and ‘transparency’ when we talk about AI’s opacity. In short, transparency is for processes and systems, explanations are for actions, tools, materials and features, and interpretability is for human perception.
- Interpretations are made by the different actors involved in a specific case of AI development and use; these actors have different roles in the process, come from different backgrounds, and pursue different goals with their interpretations. Transparency (as a system involving interpretability) should therefore always be context- and role-specific.
- Transparency is an umbrella concept: a system achieved through a set of technical and non-technical measures. Algorithmic transparency is not the same as transparency in the use of algorithms.
- The key to transparency is a multilayered system of accountabilities for the involved actors. To build it, we need to answer the following questions about transparency: what, when, by whom, to whom, and how.
- The layers of transparency are: external (from healthcare providers towards patients), internal (from AI developers towards healthcare providers), and insider (from AI developers towards themselves). All the layers are interconnected and influence each other.
- The layers of transparency correspond to existing legal frameworks: external transparency maps to the informed medical consent requirement; the internal and insider layers map to the Medical Devices Framework. We use the accountability methodology to analyse these frameworks and see how much transparency they already provide.
- The risk-management approach that already exists in medicine and healthcare (for example, in the conformity assessment of medical devices) shall be applied to AI applications. This means not only minimising risks but also weighing them against benefits. For the black-box issue of AI, the best existing solutions shall be applied; where the issue cannot be solved completely, the remaining risk shall be analysed in the whole context of the AI application.