Note: Requirements for the final report are available here.


The course Ethical Implications of AI will be held remotely and starts on November 4th at 10 am via Zoom. Everyone who registered will receive, prior to the opening lecture, an e-mail with a link to join the Zoom video call.

Time: Wednesday and Thursday from 10:00  to 12:00

Location: remote Zoom video call (link will be sent by e-mail).

Language: The language of the lectures is English  

Credit Points: Students can receive 6 CPs. Link in QIS/LFS

Module names: DC, M-DS-ADS, B-WB, M-WB, PoE, M-SIW-I1A, M-SIW-I1B

Eligibility: Bachelor and Master students across multiple disciplines are encouraged to attend.

Communication via Email:  EthicalAIWS2021@gmail.com


 

Course Description

AI is becoming a sophisticated tool in the hands of a variety of stakeholders, including political leaders. Some AI applications may raise new ethical and legal questions and, in general, have a significant impact on society, for better, for worse, or both. 

People’s motivation plays a key role here. With AI, the important questions are how to prevent it from going out of control, how to understand how decisions are made, and what the consequences are for society at large. 

Students will learn the ethical implications of the use of Artificial Intelligence (AI). 

What are the consequences for society? For human beings and individuals? Does AI serve humankind?

Discussion and debate of ethical issues is an essential part of professional development—both within and between disciplines—as it can establish a mature community of responsible practitioners.

Through ethical reflection, students can gain orientation and competencies that will help them in their ethical decision-making.

Students will work in small groups.

The course will cover topics such as the seven key principles and requirements (values) for trustworthy AI, as defined by the European Commission’s High-Level Expert Group on AI:

  • Human agency and oversight,
  • Technical robustness and safety,
  • Privacy and data governance,
  • Transparency,
  • Diversity, non-discrimination and fairness,
  • Societal and environmental wellbeing,
  • Accountability.

Prerequisites

Students should have an interest in reflecting on what is right or wrong, and it is assumed that they are capable of discussing a scenario and taking a view on whether an action is ethical.

We encourage students with different backgrounds, knowledge, and geographies to enroll in this course. The topic is highly interdisciplinary and therefore requires different points of view, expertise, and attitudes.


Course Registration: Fill out the course registration form (registration is closed).


How to get credit points

Assignments – Each week, students will watch two video lessons and read one paper.

To receive the credit points, you need to write a mid-term report and a final report at the end of the semester.


Recommended Lecture Schedule

Date Topic Materials
04.11.2020 Intro Lesson 1
(Prof. Roberto V. Zicari, Prof. Gemma Roig)
 [slides] [video] [chat]
05.11.2020 Intro Lesson 2
(Prof. Roberto V. Zicari, Prof. Gemma Roig)
 [video][chat]
papers of the week
Paper 1: Whittlestone et al. (2019) – Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research

—–

Paper 2: Independent High-Level Expert Group on Artificial Intelligence (2019) – Ethics Guidelines for Trustworthy AI

 [paper1] [paper2]
11.11.2020 The Ethics of Artificial Intelligence (AI)
(Prof. Roberto V. Zicari)
 [slides] [video]
12.11.2020 The Ethics of Artificial Intelligence (AI)
(Dr. Emmanuel Goffi)
 [slides] [video]
paper of the week
Paper: High-Level Expert Group on Artificial Intelligence (2020) – Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment
—–
Web tool:
Try out the ALTAI web tool
 [paper] [web tool]
18.11.2020 Ethics, Moral Values, Humankind, Technology, AI Examples.
(Prof. Rafael A. Calvo)
 [slides] [video]
19.11.2020 Ethics, Moral Values, Humankind, Technology, AI Examples.
(Dr. Emmanuel Goffi)
 [slides] [video]
paper of the week
Paper 1: Leikas et al. (2019) – Ethical Framework for Designing Autonomous Intelligent Systems
—–
Paper 2: Rajkomar et al. (2018) – Ensuring Fairness in Machine Learning to Advance Health Equity 
 [paper1] [paper2]
25.11.2020 On the ethics of algorithmic decision-making in healthcare
(Dr. Thomas Grote)
 [slides]
26.11.2020 Fairness, Bias and Discrimination in AI
(Prof. Gemma Roig)
 [slides] [video]
paper of the week
Paper 1: Grote & Berens (2019) – On the ethics of algorithmic decision-making in healthcare
—–
Paper 2: Wendehorst et al. (2019) – Opinion of the Data Ethics Commission
 [paper1] [paper2]
02.12.2020 AI and Trust: Explainability, Transparency
(Prof. Dragutin Petkovic)
 [slides] [video p1] [video p2] [Q&A]
03.12.2020 AI Privacy, Responsibility, Accountability, Safety and Human-in the loop
(Dr. Magnus Westerlund)
 [slides] [video]
04.12.2020 Mid-term report due  [requirements]
paper of the week
Paper 1: Obermeyer et al. (2019) – Dissecting racial bias in an algorithm used to manage the health of populations

—–

Paper 2: Ada Lovelace Institute (2020) – Exit through the App Store? A rapid evidence review on the technical considerations and societal implications of using technology to transition from the COVID-19 crisis

 [paper1] [paper2]
09.12.2020 Trustworthy AI: A Human-Centred Perspective
(Dr. Christopher Burr)
 [slides] [video]
10.12.2020 Introduction to Z-inspection. A framework to assess Ethical AI
(Prof. Roberto V. Zicari)
 [slides] [video]
 paper of the week
Paper 1: Hodges et al. (2016) – Ethical Business Regulation: Understanding the Evidence
—–
Paper 2: ICO & The Alan Turing Institute (2020) – Explaining decisions made with AI
 [paper1] [paper2]
16.12.2020 Emerging Rules on Artificial Intelligence:
Trojan Horses of Ethics in the Realm of Law?
(Prof. Florian Möslein)
 [slides] [video]
17.12.2020 AI Fairness and AI Explainability software tools
(Romeo Kienzler)
 [slides] [video]
paper of the week
Paper 1: Brundage et al. (2020) – Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims

—–

Paper 2: Arya et al. (2019) – One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

 [paper1][paper2]
13.01.2021 Design of Ethics Tools for AI Developers
(Dr. Carl-Maria Mörch)
 [slides] [video]
14.01.2021 Opinion of the German Data Ethics Commission
(Prof. Christiane Wendehorst)
 [slides] [video]
 paper of the week
Paper 1: Hind et al. (2019) – Experiences with Improving the Transparency of AI Models and Services

—–

Paper 2: Obermeyer et al. (2019) – Dissecting racial bias in an algorithm used to manage the health of populations

 [paper1] [paper2]
20.01.2021 Increasing Trust in AI
(Dr. Michael Hind)
 [slides] [video]
21.01.2021 Assessing AI use cases. Ethical tensions, trade-offs.
(Dr. Estella Hebert)
 [slides] [video]
paper of the week
Paper 1: Peters et al. (2020) – Responsible AI – Two Frameworks for Ethical Design Practice

—–

Paper 2: Gebru et al. (2020) – Datasheets for Datasets

 [paper1] [paper2]
04.02.2021 Final Report (Deadline 23:59:59)  [requirements]

Reports/Papers classified by topic

The Ethics of Artificial Intelligence (AI),  AI and Trust.

[1] Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., and Cave, S. (2019). London: Nuffield Foundation. Link to .PDF

[2] Ethics Guidelines for Trustworthy AI. Independent High-Level Expert Group on Artificial Intelligence. European Commission, 8 April 2019. Link to .PDF

Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment. High-Level Expert Group on Artificial Intelligence. European Commission, 17 July 2020. Link

[72] White Paper on Artificial Intelligence – A European approach to excellence and trust. European Commission, Brussels, 19.2.2020, COM(2020) 65 final. Link to .PDF

 

Ethics, Moral Values, Humankind, Technology, AI Examples.

[3] Perspectives on Issues in AI Governance, Lynette Webb, Charina Chou, Google White Paper, 2019. Link to .PDF

[49]  AI on the Case: Legal and Ethical Issues. Richard Austin, Deeth Williams Wall LLP , May 17, 2019. Link to .PDF

[68] Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System, Partnership on AI, 2019. Link

Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice (February 13, 2019). Richardson, Rashida and Schultz, Jason and Crawford, Kate, 94 N.Y.U. L. REV. ONLINE 192 (2019). Available at SSRN

 

Fairness, Bias and Discrimination in AI.  From Philosophy to Machine Learning.

[8] Improving Fairness in Machine Learning Systems: What Practitioners Need? K. Holstein et al. CHI 2019, May 4–9, 2019. Link to .PDF

[11] Ensuring Fairness in Machine Learning to Advance Health Equity, Alvin Rajkomar et al., Annals of Internal Medicine (2018). DOI: 10.7326/M18-1990. Link to .PDF

[12] Putting Fairness Principles into Practice: Challenges, Metrics, and Improvements. Alex Beutel, Jilin Chen, Tulsee Doshi, Hai Qian, Allison Woodruff, Christine Luu, Pierre Kreitmann, Jonathan Bischof, Ed H. Chi (Submitted on 14 Jan 2019). Link to .PDF

[52] Dissecting racial bias in an algorithm used to manage the health of populations. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Science. 2019 Oct 25;366(6464):447-453. doi: 10.1126/science.aax2342. Link to .PDF

AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias, Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, Yunfeng Zhang, 2018. Paper link, Open Source project link. Published in IBM Journal of Research and Development 63(4/5), 2019.
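The group-fairness notions that recur in the readings and toolkits above (statistical parity difference, disparate impact) boil down to comparing favorable-outcome rates across groups. The following short Python sketch is purely illustrative: the toy predictions, the protected attribute, and the variable names are invented for this example, and it does not use the AI Fairness 360 API itself. It simply shows how the two metrics are computed from model predictions and a binary protected attribute:

import numpy as np

# Toy example (invented data): model predictions y_hat (1 = favorable outcome)
# and a binary protected attribute a (1 = privileged group, 0 = unprivileged group).
y_hat = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
a     = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

# Favorable-outcome rate in each group.
rate_priv   = y_hat[a == 1].mean()   # P(y_hat = 1 | privileged)
rate_unpriv = y_hat[a == 0].mean()   # P(y_hat = 1 | unprivileged)

# Statistical parity difference: 0 means parity; negative values disfavor
# the unprivileged group.
spd = rate_unpriv - rate_priv

# Disparate impact ratio: 1 means parity; values below 0.8 are often flagged
# (the "80% rule" discussed in the fairness literature).
di = rate_unpriv / rate_priv

print(f"Statistical parity difference: {spd:.2f}")
print(f"Disparate impact ratio:        {di:.2f}")

Toolkits such as AI Fairness 360 package these and many other metrics (plus mitigation algorithms) behind a common dataset abstraction, but the underlying quantities are as simple as the rates above.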

 

AI: Explainability, Transparency.

[23] Experiences with Improving the Transparency of AI Models and Services. Michael Hind, Stephanie Houde, Jacquelyn Martino, Aleksandra Mojsilovic, David Piorkowski, John Richards, Kush R. Varshney (Submitted on 11 Nov 2019), arXiv:1911.08293v1. Link to .PDF

Petkovic D, Kobzik L, Re C. “Machine learning and deep analytics for biocomputing: call for better explainability”. Pacific  Symposium on Biocomputing Hawaii, January 2018;23:623-7, Link to .PDF

Petkovic D, Kobzik L, Ganaghan R,“AI Ethics and Values in Biomedicine – Technical Challenges  and Solutions”, Pacific Symposium on Biocomputing, Hawaii January 3-7, 2020, Link to .PDF

Gunning D, Aha D.: "DARPA’s Explainable Artificial Intelligence Program", AI Magazine, Association for the Advancement of Artificial Intelligence, Summer 2019, slides

Ribeiro M, Singh S, Guestrin C. "Why Should I Trust You? Explaining the Predictions of Any Classifier", KDD ’16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, August 2016. Link to .PDF or ACM PDF

Ribeiro M, Singh S,  Guestrin C.: ”Nothing Else Matters: Model-Agnostic Explanations by Identifying Prediction Invariance”, 30th Conf. of Neural Information Processing Systems (NIPS 2016), Barcelona, Spain 2016, Link to .PDF

Petkovic D, Altman R, Wong M, Vigil A.: “Improving the explainability of Random Forest classifier – user centered approach”. Pacific Symposium on  Biocomputing. 2018;23:204-15, Link to .PDF

D. Petkovic, A. Alavi, D. Cai, J. Yang, S. Barlaskar:  “RFEX – Simple random Forest Model and Sample Explainer for non-ML experts”, Link to .PDF

Barlaskar S, Petkovic D: "Applying Improved Random Forest Explainability (RFEX 2.0) on synthetic data", SFSU TR 18.01, 11/27/2018; with related toolkit at https://www.youtube.com/watch?v=neSVxbxxiCE

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques, Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang, 2019, Link to .PDF

Explaining explainable AI, Michael Hind, XRDS: Crossroads, The ACM Magazine for Students 25(3), ACM, 2019, Link to .PDF

Explaining decisions made with AI. – ICO and The Alan Turing Institute, May 2020, Link
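To give a concrete feel for the model-agnostic explanation methods cited in this list (in particular LIME from Ribeiro et al.), here is a minimal, illustrative Python sketch. It assumes the open-source lime package that accompanies the Ribeiro et al. paper and scikit-learn are installed; the dataset, classifier, and parameter choices are placeholders chosen only for the example:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

# Train an ordinary "black-box" classifier on a standard dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# LIME explains one prediction at a time by fitting a simple, interpretable
# surrogate model in the local neighbourhood of the instance of interest.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True)

explanation = explainer.explain_instance(
    X_test[0], clf.predict_proba, num_features=5)

# Each pair is (feature condition, local weight) for this single prediction.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")

As the readings above argue, such local explanations are one tool among many: different audiences and purposes call for different explanation techniques.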

 

Human in the loop, Security, and Accountability.

Calvo, R.A., Peters, D. & Cave, S. Advancing impact assessment for intelligent systems. Nat Mach Intell, Vol 2, 89-91 (2020). Link to .PDF

Concrete Problems in AI Safety, Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané. Link to PDF

Anomalous Instance Detection in Deep Learning: A Survey – Saikiran Bulusu, Bhavya Kailkhura, Bo Li, Pramod K. Varshney, Dawn Song. Link to PDF

Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims – Miles Brundage et al. Link to PDF

 

Introduction to Z-inspection. A framework to assess Ethical AI

“Z-inspection: Towards a process to assess Ethical AI” – Roberto V. Zicari. With contributions from: Irmhild van Halem, Matthew Eric Bassett, Karsten Tolle, Timo Eichhorn, Todor Ivanov, Jesmin Jahan Tithi. CSGI (Cognitive Systems Group) Talk, Oct. 31, 2019. YouTube video, Link to .PDF

 

Assessing AI use cases. Socio-Technical Scenarios.

[6] Ethical Framework for Designing Autonomous Intelligent Systems. J Leikas et al. J. of Open Innovation, 2019, 5, 1. Link

 

Assessing AI use cases. Ethical tensions, Trade offs.

[7] Algorithmic Impact Assessment: A Practical Framework for Public Agency Accountability, Reisman D., Schultz J., Crawford K., Whittaker M., AI Now, April 2018. Link to .PDF

[22] FactSheets: Increasing trust in AI services through supplier’s declarations of conformity. Arnold, M.; Bellamy, R. K. E.; Hind, M.; Houde, S.; Mehta, S.; Mojsilović, A.; Nair, R.; Natesan-Ramamurthy, K.; Olteanu, A.; Piorkowski, D.; Reimer, D.; Richards, J.; Tsay, J.; and Varshney, K. R. 2019. IBM Journal of Research & Development 63(4/5). Link to .PDF

[31] Datasheets for datasets. Gebru, T.; Morgenstern, J.; Vecchione, B.; Vaughan, J. W.; Wallach, H.; Daumé III, H.; and Crawford, K. 2018. In Proceedings of the Fairness, Accountability, and Transparency in Machine Learning Workshop. Link to .PDF

[46] IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition. Pp. 211 – 281. Link to .PDF

COVID-19 Rapid Evidence Review: Exit through the App Store?, Nuffield Foundation. Link

Empowering Citizens Against COVID-19 with an ML-based and Decentralized Risk Awareness App
Speaker: Professor Yoshua Bengio (Mila), Friday, May 8, 2020
Recording: here

 

AI Ethics in Healthcare

[53] Stealth research: Lack of peer-reviewed evidence from healthcare unicorns. Ioana A. Cristea, Eli M. Cahan, John P. A. Ioannidis. European Journal of Clinical Investigation, 28 January 2019. Link

[58] Grote T, Berens P. On the ethics of algorithmic decision-making in healthcare, J Med Ethics 2019;0:1–7. doi:10.1136/medethics-2019-105586. Link to .PDF

[26] Cardisio: https://cardis.io

[78] Schönberger D. Artificial intelligence in healthcare: a critical analysis of the legal and ethical implications. International Journal of Law and Information Technology, 2019. Link

[93] Dorian Peters et al., Responsible AI – Two Frameworks for Ethical Design Practice. IEEE Transactions on Technology and Society, Vol. 1, No. 1, March 2020. Link to .PDF

 

Legal relevance of AI Ethics

1) OECD, ‘Recommendation of the Council on Artificial Intelligence’ (22 May 2019), Link

2) G20, ‘G20 Ministerial Statement on Trade and Digital Economy’ (9 June 2019), Link to .PDF

3) Ethics Guidelines for Trustworthy AI. Independent High-Level Expert Group on Artificial Intelligence. European Commission, 8 April 2019. Link to .PDF

4) F Möslein, ‘Robots in the boardroom: artificial intelligence and corporate law’ in  W Barfield and U Pagallo (eds), Research Handbook on the Law of Artificial Intelligence (Edward Elgar Publishing 2018), Link to .PDF

5) F Möslein, ‘Regulating Robotic Conduct: On ESMA’s New Guidelines and Beyond’ in N Aggarwal and others (eds), Autonomous Systems and the Law (Beck, Nomos 2019) 45

6) F Möslein ‘Leitlinien für den Einsatz künstlicher Intelligenz’ in D Linardatos (ed), Rechtshandbuch Robo-Advice (Beck, Vahlen 2020) 58

 

Ethical Business

1) Ethical Business Regulation: Understanding the Evidence, Christopher Hodges, Professor of Justice Systems and Fellow of Wolfson College, University of Oxford, February 2016. Link to .PDF

2) Ethical Theories, by Larry Chonko, Ph.D., The University of Texas at Arlington. Slides, Notes to slides

 

German Data Ethics Commission

The Data Ethics Commission presented its Opinion to the Federal Government on 23 October 2019 at a closing ceremony at the Federal Ministry of Justice and Consumer Protection.

Instructors


Roberto V. Zicari, Founder of the Frankfurt Big Data Lab, Course Coordinator

Roberto V. Zicari is professor of Database and Information Systems (DBIS) at the Goethe University Frankfurt, Germany. He is an internationally recognized expert in the field of Databases and Big Data. His interests also extend to Ethics and AI, Innovation and Entrepreneurship. He is the founder of the Frankfurt Big Data Lab at the Goethe University Frankfurt, and the editor of the ODBMS.org web portal and of the ODBMS Industry Watch blog. For the past five years he was also a visiting professor with the Center for Entrepreneurship and Technology within the Department of Industrial Engineering and Operations Research at UC Berkeley (USA).


Prof. Dr. Gemma Roig, Group Leader, Computational Vision & Artificial Intelligence, Goethe University Frankfurt

I am currently a professor in the Computer Science Department at Goethe University Frankfurt. I am also a research affiliate at MIT. Before that, I was an assistant professor at the Singapore University of Technology and Design. Previously, I was a postdoctoral fellow at MIT in the Center for Brains, Minds and Machines with Prof. Tomaso Poggio, and I was also affiliated with the Laboratory for Computational and Statistical Learning. I pursued my doctoral degree in Computer Vision at ETH Zurich. My research focuses on understanding the underlying computational principles of visual intelligence in humans and artificial systems, with the aim of developing a general artificial intelligence framework. Such a general artificial intelligence system is fundamental for designing machine models that mimic or surpass human performance in specific domains and that can automatically learn new tasks.

Dr. Emmanuel R. Goffi, Director, Observatoire éthique & intelligence artificielle | Observatory on Ethics & Artificial Intelligence at the Institut Sapiens, in Paris

Emmanuel R. Goffi is an expert in the ethics of artificial intelligence. He was the Director of the Creéia – Centre de recherche et expertise en éthique et intelligence artificielle – and a Professor of ethics with the ILERI – Institut libre d’étude des relations internationales. He holds a PhD in Political Science from Sciences Po-CERI. Emmanuel is a research fellow with the Centre for Defence and Security Studies at the University of Manitoba (UofM) in Winnipeg, and a research member of the Centre FrancoPaix at the Université du Québec à Montréal. He is also a member of the Mines Action Canada Board.
Emmanuel has served in the French Air Force for 25 years. He lectured at the French Air Force Academy, and has been lecturing in several universities and colleges in France and in Canada.

Dr. Thomas Grote, Ethics and Philosophy Lab, Cluster of Excellence „Machine Learning: New Perspectives for Science“, University of Tübingen, Tübingen 72076, Germany

Dr. Thomas Grote is a postdoctoral researcher at the Ethics and Philosophy Lab (EPL) of the Cluster of Excellence: Machine Learning: New Perspectives for Science at the University of Tübingen. His research focuses on issues related to machine learning at the intersection of epistemology and ethics.

Prof. Dr. Florian Möslein, Professor of Law at the Philipps-University Marburg, Director of the Institute of the Law and Regulation of Digitalisation (IRDi, www.irdi.institute)

Florian Möslein is Director of the Institute for Law and Regulation of Digitalisation (www.irdi.institute) and Professor of Law at the Philipps-University Marburg, where he teaches Contract Law, Company Law and Capital Markets Law. He previously held academic positions at the Universities of Bremen, St. Gallen, and Berlin, and visiting fellowships in Italy (Florence, European University Institute), the US (Stanford and Berkeley), Australia (University of Sydney), Spain (CEU San Pablo, Madrid) and Denmark (Aarhus).

Having graduated from the Faculty of Law in Munich, he also holds academic degrees from the University of Paris-Assas (licence en droit) and London (LL.M. in International Business Law). Florian Möslein has published three monographs and over 80 articles and book contributions, and has edited seven books.

His current research focus is on regulatory theory, corporate sustainability and the legal challenges of the digital age.

Prof. Dragutin Petkovic, Professor, Associate Chair, Undergraduate Advisor, IEEE LIFE Fellow, Director, Computing for Life Sciences (CCLS), Coordinator for Graduate Certificates in AI Ethics and SW Engineering

Prof. D. Petkovic obtained his Ph.D. at UC Irvine in the area of biomedical image processing. He spent over 15 years at the IBM Almaden Research Center as a scientist and in various management roles. His contributions ranged from the use of computer vision for inspection to multimedia and content management systems. He is the founder of IBM’s well-known QBIC (query by image content) project, which significantly influenced the content-based retrieval field. Dr. Petkovic received numerous IBM awards for his work and became an IEEE Fellow in 1998 and an IEEE LIFE Fellow in 2018 for leadership in the content-based retrieval area. He also held various technical management roles in Silicon Valley startups. In 2003 Dr. Petkovic joined the SFSU Computer Science Department as Chair, and in 2005 he founded the SFSU Center for Computing for Life Sciences. Currently, Dr. Petkovic is Associate Chair of the SFSU Department of Computer Science and Director of the Center for Computing for Life Sciences. He led the establishment of the SFSU Graduate Certificate in AI Ethics, jointly with the SFSU Schools of Business and Philosophy. His research and teaching interests include machine learning with an emphasis on explainability and ethics, teaching methods for global software engineering and engineering teamwork, and the design and development of easy-to-use systems.

Dr. Christopher Burr is a philosopher of cognitive science and artificial intelligence. He is a Senior Research Associate at the Alan Turing Institute and a Research Associate at the Digital Ethics Lab, University of Oxford.

His current research explores philosophical and ethical issues related to data-driven technologies and human-computer interaction, including the opportunities and risks that such technologies have for mental health and well-being. A primary goal of this research is to develop robust and pragmatic guidance to support the governance, responsible innovation, and sustainable use of data-driven technology within a digital society. To support this goal, he has worked with a number of public sector bodies and organisations, including NHSx; the UK Government’s Department for Health and Social Care; Department for Digital, Culture, Media and Sport; Centre for Data Ethics and Innovation; and the Ministry of Justice. He has held previous posts at the University of Bristol, where he explored the ethical and epistemological impact of big data and artificial intelligence as a postdoctoral researcher and also completed his PhD in 2017. Research Interests: Philosophy of Cognitive Science and Artificial Intelligence, Digital Ethics, Bioethics, Decision Theory, Public Policy, and Human-Computer Interaction.

DSc. Magnus Westerlund, Principal Lecturer, Head of Master Degree Programme in Big Data Analytics
Arcada University of Applied Sciences, Helsinki, Finland 

Magnus Westerlund (DSc) is the programme director of the master degree programme in big data analytics and deputy head of the business and analytics department at Arcada University of Applied Sciences in Helsinki, Finland. He has a background in the private sector in telecom and information management and earned his doctoral degree in information systems at Åbo Akademi University, Finland. Magnus has research publications in the fields of analytics, IT security, cyber regulation, and distributed ledger technology. His current research topics lie in the area of decentralized platforms for distributed applications and the application of intelligent and secure autonomous systems. His long-term aim is to help define what we mean by autonomous systems that are trustworthy, accountable, and able to learn from interaction.

Prof. Rafael A. Calvo, Chair in Engineering Design, Faculty of Engineering, Dyson School of Design Engineering, Imperial College London

Rafael A. Calvo, PhD (2000) is Professor at Imperial College London focusing on the design of systems that support wellbeing in areas of mental health, medicine and education, and the ethical challenges raised by new technologies. In 2015 Calvo was appointed a Future Fellow of the Australian Research Council to study the design of wellbeing-supportive technology.
Rafael is the Director for Research at the Dyson School of Design Engineering and co-lead at the Leverhulme Centre for the Future of Intelligence.

Dr. Estella Hebert, Goethe University Frankfurt

Estella Hebert is a postdoctoral researcher and lecturer in the Department of Education at Goethe University Frankfurt, focusing her research on questions of digitalisation within educational contexts. She finished her PhD on the relationship of identity, agency and personal data in 2019. Coming from a media-pedagogical and educational-philosophical perspective, her interests lie in post-digital perspectives on the social, ethical and cultural transformations caused by digitality, questions of datafication, and media-critical perspectives.

Romeo Kienzler, IBM Center for Open Source Data and AI Technologies, San Francisco, CA, USA

Romeo Kienzler is Chief Data Scientist at the IBM Center for Open Source Data and AI Technologies (CODAIT) in San Francisco. He holds an M.Sc. (ETH) in Computer Science with specialisation in Information Systems, Bioinformatics and Applied Statistics from the Swiss Federal Institute of Technology Zurich. He works as Associate Professor for Artificial Intelligence at the Swiss University of Applied Sciences Berne and Adjunct Professor for Information Security at the Swiss University of Applied Sciences Northwestern Switzerland (FHNW). His current research focus is on cloud-scale machine learning and deep learning using open source technologies including TensorFlow, Keras, and the Apache Spark stack. Recently he joined the Linux Foundation AI as lead of the Trusted AI technical workgroup, with a focus on deep learning adversarial robustness, fairness and explainability. He also contributes to various open source projects and regularly speaks at international conferences, with significant publications in the areas of data mining, machine learning and blockchain technologies. Romeo is lead instructor of the Advanced Data Science specialisation on Coursera, with courses on Scalable Data Science, Advanced Machine Learning, Signal Processing and Applied AI with DeepLearning.

Carl Mörch, Postdoctoral Fellow, Algora Lab – MILA, OBVIA

Carl is currently a postdoctoral fellow at the Université de Montréal and Mila. He has been awarded a Postdoctoral Fellowship by the International Observatory on the Societal Impacts of Artificial Intelligence and Digital Technologies (OBVIA). He is also a lecturer and adjunct professor at UQÀM (Montréal, Canada). His research is oriented towards the creation of AI Ethics Tools. His objective is to contribute to the concrete application of high-level ethical principles by developing lists of standards in high-risk areas (health, finance). More generally, he is interested in the responsible development of technologies in society, health care and psychology. He co-created canadaprotocol.com, an open-access tool for AI developers working in mental health. He is also working on the ethical evaluation of free mobile applications and on the concept of moral competence in AI. Finally, he is leading “Reach Me”, an m-health project to improve pregnant women’s access to prenatal services using text messaging. He holds an M.Psy. (ICP, France) and a Ph.D. in Psychology (UQÀM, Canada).

Dr. Michael Hind, Distinguished Research Staff Member, IBM Research AI Department, IBM Thomas J Watson Research Center

Dr. Hind has authored over 50 publications, served on over 50 program committees, and given several keynotes and invited talks at top universities, conferences, and government settings. Michael has led dozens of researchers to successfully transfer technology to various parts of IBM and helped launch several successful open source projects, such as AI Fairness 360 and AI Explainability 360. His 2000 paper on Adaptive Optimization was recognized as the OOPSLA’00 Most Influential Paper and his work on Jikes RVM was recognized with the SIGPLAN Software Award in 2012. Michael is an ACM Distinguished Scientist, and a member of IBM’s Academy of Technology.

Prof. Christiane Wendehorst, Professor of Civil Law at the University of Vienna

Christiane Wendehorst has been Professor of Civil Law at the University of Vienna since 2008. Amongst other functions, she is a founding member and President of the European Law Institute (ELI), chair of the Academy Council of the Austrian Academy of Sciences (ÖAW), Co-Head of the Department of Innovation and Digitalisation in Law, and a member of the Managing Board of the Austrian Jurists’ Association (ÖJT), the Academia Europaea (AE), the International Academy of Comparative Law (IACL), the American Law Institute (ALI) and the Bioethics Committee at the Austrian Federal Chancellery. She was Co-chair of the German Data Ethics Commission from 2018 to 2019. Currently, her work is focused on legal challenges arising from digitalization, and she has worked as an expert on subjects such as digital content, the Internet of Things, artificial intelligence and the data economy for, inter alia, the European Commission, the European Parliament, the German Federal Government, the ELI and the ALI. Prior to moving to Vienna, she held chairs in Göttingen (1999-2008) and Greifswald (1998-99) and was Managing Director of the Sino-German Institute of Legal Studies (2000-2008).