Roberto V. Zicari (1), Irmhild van Halem (1), Todor Ivanov (1), Gemma Roig (1), Karsten Tolle (1), John Brodersen (6), Boris Düdder (8), Georgios Kararigas (3), Pedro Kringen (1), Norman Stürtz (1), Magnus Westerlund (7), Jesmin Jahan Tithi (2), James Brusseau (10), Timo Eichhorn (1), Florian Möslein (9), Romeo Kienzler (4), Melissa McCullough (1), Naveed Mushtaq (1), Matthew Eric Bassett (1).
(1) Frankfurt Big Data Lab, Goethe University Frankfurt, Germany
(2) Intel Labs, Santa Clara, CA, USA
(3) German Centre for Cardiovascular Research, Charité University Hospital, Berlin, Germany
(4) IBM Center for Open Source Data and AI Technologies, San Francisco, CA, USA
(5) Cardiology Department, Charité University Hospital, Berlin, Germany
(6) Department of Public Health, Faculty of Health Sciences, University of Copenhagen, Denmark
(7) Arcada University of Applied Sciences, Helsinki, Finland
(8) Department of Computer Science (DIKU), University of Copenhagen (UCPH), Denmark
(9) Institute of the Law and Regulation of Digitalization, Philipps-University Marburg, Germany
(10) Philosophy Department, Pace University, New York, USA
The ethical and societal implications of artificial intelligence systems continue to raise concerns.
We at the Frankfurt Big Data Lab at Goethe University Frankfurt, together with a team of international experts, have defined a novel holistic and analytic process to assess Ethical AI, called Z-Inspection.
Z-Inspection is a general inspection process for Ethical AI that can be applied to a variety of domains, such as business, healthcare, and the public sector, among many others. It is grounded in applied ethics. To the best of our knowledge, Z-Inspection is the first process that combines a holistic and an analytic approach to assess Ethical AI in practice.
We developed the Z-Inspection process with the following goals in mind:
- To help the decision-making process to assess whether the use of AI in a given context is appropriate;
- To help minimize risks and identify opportunities associated with an AI in a given context;
- To help establish trust in AI;
- To help improve the design of the AI from a socio-legal-technical viewpoint;
- To help foster ethical values and ethical actions (i.e. stimulate new kinds of innovation).
Our assessment takes into account the “Framework for Trustworthy AI” and the seven key requirements that AI systems should meet in order to be deemed trustworthy, defined by the independent High-Level Expert Group on Artificial Intelligence set up by the European Commission, and also confirmed by a recent report of the Organisation for Economic Co-operation and Development (OECD).
This EU framework is the perspective on which this paper is based.
The EU states four general ethical principles based on fundamental rights:
- Respect for human autonomy,
- Prevention of harm,
- Fairness,
- Explicability.
Based on these principles, the EU derives seven key requirements (values) for trustworthy AI:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental wellbeing
- Accountability
These are general AI principles and requirements. Each domain in which AI is applied has, in addition, its own principles, requirements, and values; some of them correspond to the above, while others need to be added. The Assessment List for Trustworthy Artificial Intelligence (ALTAI) is a self-assessment checklist providing questions classified into the seven categories above.
Our work on Z-Inspection applies these principles and requirements by proposing a practical implementable assessment process that can be adapted to specific use cases and domains in practice.
The core idea of our assessment is to create an orchestration process that helps teams of skilled experts assess the ethical, technical, and legal implications of using an AI product/service within a given context. Wherever possible, Z-Inspection allows us to use existing frameworks and checklists and to “plug in” existing tools to perform specific parts of the verification. The goal is to customize the assessment process for AIs deployed in different domains and in different contexts.
Z-Inspection is designed by integrating and complementing two well-known approaches:
- A holistic approach, which tries to grasp the whole without consideration of the various parts; and
- An analytic approach, which considers each part of the problem domain.
We have used and tested Z-Inspection by evaluating a non-invasive AI medical device designed to assist medical doctors in the diagnosis of cardiovascular diseases.
The benefits of having such an AI ethical assessment process in place are clearly explained in the AI Now report: “If governments deploy AI systems on human populations without framework for accountability, they risk losing touch with how decisions have been made, thus making it difficult for them to identify or respond to bias, errors, or other problems. The public will have less insight into how agencies function, and have less power to question or appeal decisions.”
An ethical assessment “would also benefit vendors (AI developers) that prioritize fairness, accountability, and transparency in their offering. Companies that are best equipped to help agencies and researchers study their system would have a competitive advantage over others. Cooperation would also help improve public trust, especially at a time when skepticism of the societal benefits of AI is on the rise.”
The aim of our research work is to help contribute to closing the gap between “principles” (the “what” of AI ethics) and “practices” (the “how”).
In our opinion, one cornerstone of being able to conduct a neutral, effective AI ethical assessment is the absence of conflicts of interest (direct and indirect). Specifically:
1. Ensure no conflicts of interest exist between the inspectors and the entity/organization to be examined;
2. Ensure no conflicts of interest exist between the inspectors and the vendors of tools/toolkits/frameworks to be used in the inspection;
3. Assess the potential bias of the team of inspectors.
This results in a:
→ GO if all three conditions above are satisfied.
→ Still GO, with restricted use of specific tools, if condition 2 is not satisfied.
→ NoGO if condition 1 or 3 is not satisfied.
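The decision rule above can be sketched as a simple function. This is only an illustrative sketch of the pre-inspection logic described in this paper, not part of Z-Inspection itself; the function and parameter names are our own invention for illustration.

```python
def pre_check(no_coi_with_organization: bool,
              no_coi_with_tool_vendors: bool,
              inspector_bias_acceptable: bool) -> str:
    """Outcome of the three conflict-of-interest pre-checks.

    Checks (in the order given in the text):
    1. No conflict of interest between inspectors and the examined organization.
    2. No conflict of interest between inspectors and vendors of inspection tools.
    3. The potential bias of the inspection team has been assessed as acceptable.
    """
    # Conditions 1 or 3 failing stops the inspection entirely.
    if not (no_coi_with_organization and inspector_bias_acceptable):
        return "NoGO"
    # Condition 2 failing only restricts which tools may be used.
    if not no_coi_with_tool_vendors:
        return "GO (restricted tool use)"
    # All three conditions satisfied.
    return "GO"
```

For example, a team with a vendor relationship but no other conflicts would receive a GO restricted to other tools, while any conflict with the examined organization yields a NoGO.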
The assessment process proposed in this article can be used by a variety of AI stakeholders (e.g. designers and engineers, organizations and corporate bodies, policymakers and regulators, researchers, NGOs and civil society, users/the general public, marginalized groups, journalists and communicators).
We believe we are all responsible, and that the individual and collective conscience is the existential place where the most significant things happen. With Z-Inspection we want to help establish what we call a Mindful Use of AI (#MUAI).
© 2020 by Roberto V. Zicari and his colleagues.
Z-Inspection® is a registered trademark.
This work is distributed under the terms and conditions of the Creative Commons (Attribution-NonCommercial-ShareAlike CC BY-NC-SA) license ( https://creativecommons.org/licenses/by-nc-sa/4.0/)
- Ethics Guidelines for Trustworthy AI. Independent High-Level Expert Group on Artificial Intelligence, European Commission, 8 April 2019.
- Algorithmic Impact Assessment: A Practical Framework for Public Agency Accountability. AI Now Institute, April 2018. https://ainowinstitute.org/aiareport2018.pdf
- The Assessment List for Trustworthy Artificial Intelligence (ALTAI): https://futurium.ec.europa.eu/en/european-ai-alliance/pages/altai-assessment-list-trustworthy-artificial-intelligence
- Mindful Use of AI. Z-Inspection: A holistic and analytic process to assess Ethical AI – talk (1 hour). Prof. Roberto V. Zicari, Frankfurt Big Data Lab, July 2, 2020. YouTube video and a copy of the slides of the talk: Zicari.Lecture.July2.2020
Prof. Roberto V. Zicari
Frankfurt Big Data Lab
Goethe University Frankfurt