4. Technical Recommendations on the Ethics of AI

April 1, 2022 lectured by Dr. Mehmet HAKLIDIR and written by Dr. Merve Ayyüce KIZRAK

In this week's lecture, we were glad to host Dr. Mehmet Haklıdır, Head of the Cloud Computing and Big Data Research Lab (B3LAB) at TUBITAK BILGEM. After briefly reviewing the ethical approaches of different organizations, we discussed the subject technically through case studies, referring to UNESCO's Recommendation on the Ethics of AI.

Trustworthy Artificial Intelligence (TAI)

"Trustworthy AI refers to AI that respects values-based principles." (OECD Definition)

"AI built upon value-based principles such as inclusive growth, sustainable development and well-being, human-oriented values and objectivity, transparency and explainability, robustness, security and trust, and accountability." (The National Artificial Intelligence Strategy of Türkiye Definition)

Global AI Ethics

UNESCO Recommendation on the ethics of AI

Values

  • Respect, protection and promotion of human rights and fundamental freedoms and human dignity

  • Environment and ecosystem flourishing

  • Ensuring diversity and inclusiveness

  • Living in peaceful, just and interconnected societies

Principles

  • Proportionality and Do No Harm

  • Safety and security

  • Fairness and non-discrimination

  • Sustainability

  • Right to Privacy, and Data Protection

  • Human oversight and determination

  • Transparency and explainability

  • Responsibility and accountability

  • Awareness and literacy

  • Multi-stakeholder and adaptive governance and collaboration

Fairness and non-discrimination

Bias can be introduced at one or more of the steps explained below.

Data collection is the step where bias is most often encountered. Dataset-based bias may occur if the data is produced by people with a particular tendency, or if the equipment used to collect the data is miscalibrated.

Data preprocessing prepares the data for the model, and the operations applied at this stage may also introduce bias. For example, the way missing values are represented or imputed can cause bias, and filtering operations can break the integrity of the data.

Modeling is the training process in which the model learns to recognize patterns; at this step, bias may stem from the model's parameters or its training objective.
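As a toy illustration of the preprocessing point above, here is a minimal sketch (with entirely hypothetical numbers, not from any real dataset) of how a routine choice, imputing missing values with the overall mean, can bias results against an under-represented group:

```python
# Hypothetical incomes (k$) for two groups; group B is small and has
# many missing values.
group_a = [50, 52, 48, 51, 49, 50, 52, 48]   # majority group
group_b = [20, None, 22, None]               # minority group

# Impute missing values with the overall observed mean, which is
# dominated by the majority group.
all_observed = group_a + [x for x in group_b if x is not None]
overall_mean = sum(all_observed) / len(all_observed)
imputed_b = [x if x is not None else overall_mean for x in group_b]

true_b_mean = (20 + 22) / 2                          # what group B really looks like
imputed_b_mean = sum(imputed_b) / len(imputed_b)     # what the model will see

print(f"imputation value (overall mean): {overall_mean:.1f}")
print(f"group B mean after imputation:   {imputed_b_mean:.1f} (true ~{true_b_mean})")
```

After imputation, group B's apparent mean is pulled far above its true value, so any model trained on the preprocessed data inherits a distorted picture of that group. Group-aware imputation (or flagging missingness explicitly) avoids this particular failure.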

When AI Goes Bad: Google Photos’ Shame

Transparency and explainability

Explainable AI (XAI) aims to create a suite of machine learning techniques that:

  • Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and

  • Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.

Explainable AI has two main parts:

  • Explainable Model: Developing AI techniques that are more efficient and more advanced than existing methods, or adapting existing methods, so that the AI system itself becomes 'explainable'

  • Explainable Interface: Enabling end users who have no AI expertise, but who are experts in the application domain where the AI is used, to evaluate and interpret the AI's outputs by interacting with the model.
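To make the "explainable model" idea concrete, here is a minimal sketch, not any specific XAI library, that approximates a black-box model around one input with a local linear surrogate, in the spirit of LIME-style local explanations (the function names and the toy model are assumptions for illustration):

```python
def black_box(x):
    # Stand-in for an opaque model; here simply x^2 for illustration.
    return x * x

def local_slope(f, x0, eps=1e-4):
    # Central finite difference: fits the best local linear surrogate
    # f(x) ~ f(x0) + slope * (x - x0) around the point x0.
    return (f(x0 + eps) - f(x0 - eps)) / (2 * eps)

# The slope is the human-readable explanation: how sensitive the
# prediction is to this feature near this particular input.
slope = local_slope(black_box, 2.0)
print(f"near x=2.0 the model behaves like a line with slope {slope:.2f}")
```

A real system would do this over many features and perturbations, but the principle is the same: replace the opaque model locally with something a domain expert can inspect.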

Open Source Tool 2 - TransparentAI

Safety and security

Right to Privacy, and Data Protection

Human oversight and determination

Open Source Tool - H2020 – Human AI Net

Proportionality and Do No Harm

Responsibility and accountability

Technical and non-technical methods to realize Trustworthy AI

Both technical and non-technical methods can be used to implement the requirements above, and they cover all phases of an AI system's lifecycle. Evaluating the methods used to implement the requirements, and reporting and justifying changes to the implementation processes, should be done on an ongoing basis. A process like the one in the figure can be taken as an example.

Next week, we will do a case study in our face-to-face class at the Bahçeşehir University Beşiktaş campus. We will not have any guests. Let's see what the results will be.
