Goethe University Frankfurt

Z-inspection: Towards a process to assess Ethical AI


We at the Frankfurt Big Data Lab at Goethe University Frankfurt, together with a team of international experts, are working on the definition of an assessment process for Ethical AI, which we call Z-inspection.

http://www.bigdata.uni-frankfurt.de/z-inspection-process-assess-ethical-ai/

We decided on open development and incremental improvement to establish our process and brand (“Z Inspected”).

As a test of our assessment process, we are currently assessing an AI-based product in healthcare (cardiology): https://cardis.io

As indicated by [51], existing AI systems in healthcare are not tested as rigorously as other medical devices, which could raise risks.

For example, [52] shows that a widely used prediction algorithm, used by health systems to identify and help patients with complex health needs, exhibits significant racial bias.

A recent study [53] shows that most start-ups in healthcare:

“have a limited or non existent participation and impact in the publicly available scientific literature, and that healthcare products not subjected to peer review but based on internal data generation alone may be problematic and non trustworthy.”

We will look for additional real AI use cases soon.

The recording of our latest presentation is available here (30 min.): https://www.youtube.com/watch?v=jrwuZvt_H7k&feature=youtu.be

A copy of the slides is available here: http://cognitive-science.info/wp-content/uploads/2019/10/CSIGTalkZicari.20191031.pdf

More information on our research on AI and ethics is available here: http://www.bigdata.uni-frankfurt.de/ethics-artificial-intelligence/ and our team is listed here: http://www.bigdata.uni-frankfurt.de/people/

The benefits of having such an AI Ethical assessment process in place are clearly explained in [1]: “If governments deploy AI systems on human populations without framework for accountability, they risk losing touch with how decisions have been made, thus making it difficult for them to identify or respond to bias, errors, or other problems. The public will have less insight into how agencies function, and have less power to question or appeal decisions.”

An Ethical assessment “would also benefit vendors (AI developers) that prioritize fairness, accountability, and transparency in their offering. Companies that are best equipped to help agencies and researchers study their system would have a competitive advantage over others. Cooperation would also help improve public trust, especially at a time when skepticism of the societal benefits of AI is on the rise.” [1]

The aim of our research is to help close the gap between “principles” (the “what” of AI ethics) and “practices” (the “how”).

 

The project is non-commercial.

© 2019 by Roberto V. Zicari and his colleagues.

Z-inspection is open access and distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 4.0) license (https://creativecommons.org/licenses/by-nc-sa/4.0/).

In our opinion, one cornerstone of conducting a neutral, effective AI Ethical assessment is the absence of conflicts of interest (direct and indirect).

This means:

  1. Ensure that no conflicts of interest exist between the inspectors and the entity/organization to be examined;
  2. Ensure that no conflicts of interest exist between the inspectors and the vendors of tools, toolkits, or frameworks used in the inspection;
  3. Assess potential bias within the team of inspectors.

This results in:

→ GO if all three conditions above are satisfied.

→ Still GO, with restricted use of specific tools, if condition 2 is not satisfied.

→ No GO if condition 1 or 3 is not satisfied.
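
To make this decision rule concrete, the following is a minimal, purely illustrative Python sketch; it is not part of the Z-inspection process definition itself, and the names (ConflictChecks, Verdict, decide) and the boolean encoding of the three checks are our own assumptions.

from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    GO = "GO"
    GO_RESTRICTED = "GO with restricted use of specific tools"
    NO_GO = "No GO"


@dataclass
class ConflictChecks:
    # Condition 1: no conflicts of interest with the entity/organization to be examined.
    no_conflict_with_examined_entity: bool
    # Condition 2: no conflicts of interest with vendors of tools/toolkits/frameworks.
    no_conflict_with_tool_vendors: bool
    # Condition 3: potential bias of the team of inspectors has been assessed.
    inspector_bias_assessed: bool


def decide(checks: ConflictChecks) -> Verdict:
    # No GO if condition 1 or 3 is not satisfied.
    if not (checks.no_conflict_with_examined_entity and checks.inspector_bias_assessed):
        return Verdict.NO_GO
    # Still GO, with restricted use of specific tools, if only condition 2 fails.
    if not checks.no_conflict_with_tool_vendors:
        return Verdict.GO_RESTRICTED
    # GO if all three conditions are satisfied.
    return Verdict.GO


# Example: a vendor conflict alone restricts tool use but does not block the inspection.
print(decide(ConflictChecks(True, False, True)))  # Verdict.GO_RESTRICTED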

 

References

[1] AI Now Institute. Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability. April 2018. https://ainowinstitute.org/aiareport2018.pdf

[51] Szabo L. Artificial Intelligence Is Rushing Into Patient Care – And Could Raise Risks. Scientific American, December 24, 2019.

[52] Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453. doi:10.1126/science.aax2342

[53] Cristea IA, Cahan EM, Ioannidis JPA. Stealth research: Lack of peer-reviewed evidence from healthcare unicorns. European Journal of Clinical Investigation, 28 January 2019. https://doi.org/10.1111/eci.13072

 

Team Members:

Roberto V. Zicari (1), Irmhild van Halem (1), Matthew Eric Bassett (1), Gemma Roig (1), Pedro Kringen (1), Karsten Tolle (1), Todor Ivanov (1), Timo Eichhorn (1), Naveed Mushtaq (1), Melissa McCullough (1), Jesmin Jahan Tithi (2), Romeo Kienzler (4), Georgios Kararigas (3), Marijana Tadic (5), John Brodersen (6), Magnus Westerlund (7), Boris Düdder (8), Florian Möslein (9), Norman Stürtz (1).

 

(1) Frankfurt Big Data Lab, Goethe University Frankfurt, Germany.

(2) Intel Labs, Santa Clara, CA, USA.

(3) German Centre for Cardiovascular Research, Charité University Hospital, Berlin, Germany.

(4) IBM Center for Open Source Data and AI Technologies, San Francisco, CA, USA.

(5) Cardiology Department, Charité University Hospital, Berlin, Germany.

(6) Department of Public Health, Faculty of Health Sciences, University of Copenhagen, Denmark.

(7) Arcada University of Applied Sciences, Helsinki, Finland.

(8) Department of Computer Science (DIKU), University of Copenhagen (UCPH), Denmark.

(9) Institute of the Law and Regulation of Digitalisation, Philipps-University Marburg, Germany.

 

Contact:

Prof. Roberto V. Zicari

Founder Frankfurt Big Data Lab

Goethe University Frankfurt

http://www.bigdata.uni-frankfurt.de

#DigitalManifesto https://lnkd.in/gQGCpvA
