Z-Inspection®: A holistic and analytic process to assess Ethical AI

Motivation

The ethical and societal implications of artificial intelligence systems continue to raise concerns.

Z-Inspection is a general inspection process for Ethical AI that can be applied to a variety of domains, such as business, healthcare, and the public sector, among many others. It is grounded in applied ethics. To the best of our knowledge, Z-Inspection is the first process that combines a holistic and an analytic approach to assess Ethical AI in practice.

We developed the Z-Inspection process with the following goals in mind:

– To help the decision-making process of assessing whether the use of AI in a given context is appropriate;

– To help minimize the risks and identify the opportunities associated with an AI in a given context;

– To help establish trust in AI;

– To help improve the design of the AI from a socio-legal-technical viewpoint;

– To help foster ethical values and ethical actions (i.e. stimulate new kinds of innovation).

Our assessment takes into account the “Framework for Trustworthy AI” and the seven key requirements that AI systems should meet in order to be deemed trustworthy, defined by the independent High-Level Expert Group on Artificial Intelligence [1] set up by the European Commission, and also confirmed by a recent report of the Organisation for Economic Co-operation and Development (OECD).

This EU framework is the perspective on which this paper is based.

The EU framework states four general ethical principles based on fundamental rights [1]:

  • Respect for human autonomy
  • Prevention of harm
  • Fairness
  • Explicability

Based on these principles, the EU derives seven key requirements (values) for trustworthy AI [1]:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental wellbeing
  • Accountability

These are general AI principles and requirements. Each domain in which AI is applied has, in addition, its own principles, requirements, and values; some of them correspond to those above, while others need to be added. The Assessment List for Trustworthy Artificial Intelligence (ALTAI) defined in [3] is a self-assessment checklist whose questions are classified into the seven categories above.
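
As a rough illustration of how such a checklist can be organized, the sketch below keys example self-assessment questions to the seven requirements listed above. The requirement names come from [1]; the questions are illustrative placeholders written by us, not the official ALTAI wording [3].

```python
# Minimal sketch of a self-assessment checklist keyed by the seven EU
# requirements for trustworthy AI [1]. The questions are illustrative
# placeholders, not the official ALTAI wording [3].
checklist = {
    "Human agency and oversight": [
        "Can a human override or halt the system's decisions?",
    ],
    "Technical robustness and safety": [
        "Has the system been tested against foreseeable failures and misuse?",
    ],
    "Privacy and data governance": [
        "Is personal data minimized and processed on a documented legal basis?",
    ],
    "Transparency": [
        "Can the system's outputs be explained to affected users?",
    ],
    "Diversity, non-discrimination and fairness": [
        "Has performance been evaluated across relevant user groups?",
    ],
    "Societal and environmental wellbeing": [
        "Have broader societal and environmental impacts been considered?",
    ],
    "Accountability": [
        "Who is responsible for redress when the system makes an error?",
    ],
}
```

A domain-specific assessment would extend this structure by adding or refining questions for the domain at hand, as discussed above.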

Our work on Z-Inspection applies these principles and requirements by proposing a practical, implementable assessment process that can be adapted to specific use cases and domains.

The core idea of our assessment is to create an orchestration process that helps teams of skilled experts assess the ethical, technical, and legal implications of using an AI product/service within a given context. Wherever possible, Z-Inspection allows us to reuse existing frameworks and checklists and to “plug in” existing tools to perform specific parts of the verification. The goal is to customize the assessment process for AI systems deployed in different domains and in different contexts.

Z-Inspection is designed by integrating and complementing two well-known approaches:

– A holistic approach, which tries to grasp the whole without considering the various parts; and

– An analytic approach, which considers each part of the problem domain.

We have used and tested Z-Inspection by evaluating a non-invasive AI medical device designed to assist medical doctors in the diagnosis of cardiovascular diseases.

The benefits of having such an AI ethical assessment process in place are clearly explained in [2]: “If governments deploy AI systems on human populations without frameworks for accountability, they risk losing touch with how decisions have been made, thus making it difficult for them to identify or respond to bias, errors, or other problems. The public will have less insight into how agencies function, and have less power to question or appeal decisions.”

An ethical assessment “would also benefit vendors (AI developers) that prioritize fairness, accountability, and transparency in their offering. Companies that are best equipped to help agencies and researchers study their system would have a competitive advantage over others. Cooperation would also help improve public trust, especially at a time when skepticism of the societal benefits of AI is on the rise.” [2]

The aim of our research work is to help close the gap between “principles” (the “what” of AI ethics) and “practices” (the “how”).

In our opinion, one cornerstone of being able to conduct a neutral, effective AI ethical assessment is the absence of conflicts of interest (direct and indirect).

This means:

1. Ensure that no conflicts of interest exist between the inspectors and the entity/organization to be examined;
2. Ensure that no conflicts of interest exist between the inspectors and the vendors of tools/toolkits/frameworks to be used in the inspection;
3. Assess potential biases of the team of inspectors.

This results in a:

→ GO, if all three conditions above are satisfied.
→ Still GO, but with restricted use of specific tools, if condition 2 is not satisfied.
→ NoGO, if condition 1 or 3 is not satisfied.
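
As an illustration, this pre-condition check can be expressed as a simple decision rule. The following is a minimal sketch under our own assumptions: the three conditions are captured as booleans, and the function and verdict names are ours, not part of the Z-Inspection specification.

```python
from enum import Enum


class Verdict(Enum):
    GO = "GO"
    GO_RESTRICTED = "GO, with restricted use of specific tools"
    NO_GO = "NoGO"


def preconditions_verdict(no_coi_with_organization,
                          no_coi_with_tool_vendors,
                          inspectors_unbiased):
    """Map the three pre-conditions to a GO / restricted GO / NoGO verdict.

    Conditions 1 and 3 are mandatory; failing condition 2 only restricts
    which tools may be used during the inspection.
    """
    if not (no_coi_with_organization and inspectors_unbiased):
        return Verdict.NO_GO
    if not no_coi_with_tool_vendors:
        return Verdict.GO_RESTRICTED
    return Verdict.GO


# Example: a conflict of interest with a tool vendor still allows a
# restricted GO.
print(preconditions_verdict(True, False, True))  # Verdict.GO_RESTRICTED
```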

The assessment process proposed in this article can be used by a variety of AI stakeholders (e.g., designers and engineers, organizations and corporate bodies, policymakers and regulators, researchers, NGOs and civil society, users and the general public, marginalized groups, and journalists and communicators).

We believe we are all responsible, and that individual and collective conscience is the existential place where the most significant things happen. With Z-Inspection we want to help establish what we call a Mindful Use of AI (#MUAI).

© 2020 by Roberto V. Zicari and his colleagues.

Z-Inspection® is a registered trademark.

This work is distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license (https://creativecommons.org/licenses/by-nc-sa/4.0/).

References

[1] Ethics Guidelines for Trustworthy AI. Independent High-Level Expert Group on Artificial Intelligence, European Commission, 8 April 2019.

[2] Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability. AI Now Institute, April 2018. https://ainowinstitute.org/aiareport2018.pdf

[3] The Assessment List for Trustworthy Artificial Intelligence (ALTAI): https://futurium.ec.europa.eu/en/european-ai-alliance/pages/altai-assessment-list-trustworthy-artificial-intelligence

Resources

Mindful Use of AI. Z-Inspection: A holistic and analytic process to assess Ethical AI – Talk (1 hour). Prof. Roberto V. Zicari, Frankfurt Big Data Lab, July 2, 2020. YouTube video and a copy of the slides of the talk: Zicari.Lecture.July2.2020

– Introduction to Z-Inspection. A framework to assess Ethical AI – Talk (2 hours). Prof. Roberto V. Zicari, May 27, 2020. [slides] [video]

The Ethics of Artificial Intelligence (AI) Lecture (2 hours). Prof. Roberto V. Zicari, April 22, 2020. [slides] [video]

#DigitalManifesto https://lnkd.in/gQGCpvA

Z-Inspection® Web Site.