4. Technical Recommendations on the Ethics of AI
April 1, 2022 lectured by Dr. Mehmet HAKLIDIR and written by Dr. Merve Ayyüce KIZRAK
About Dr. Mehmet HAKLIDIR
In this week's lecture, we were glad to host Dr. Mehmet Haklıdır, Head of the Cloud Computing and Big Data Research Lab (B3LAB) at TUBITAK BILGEM. After briefly covering the ethical approaches of different organizations, we discussed the subject technically through case studies, referring to UNESCO's Recommendation on the Ethics of AI.
"Trustworthy AI refers to AI that respects the values-based principles." (OECD Definition)
"AI built upon value-based principles such as inclusive growth, sustainable development and well-being, human-oriented values and objectivity, transparency and explainability, robustness, security and trust, and accountability." (The National Artificial Intelligence Strategy of Türkiye Definition)
OECD Network of Experts on AI - ONE AI: The OECD.AI expert group on implementing Trustworthy AI (ONE TAI) aims to highlight how tools and approaches may vary across different operational contexts.
Global Partnership on AI – GPAI: GPAI strives to foster and contribute to the responsible development, use and governance of human-centred AI systems, in line with the UN Sustainable Development Goals.
Ad Hoc Committee on Artificial Intelligence of the Council of Europe - CAHAI: The Committee examined, on the basis of broad multi-stakeholder consultations, the feasibility and potential elements of a legal framework for the development, design and application of AI, based on the Council of Europe's standards on human rights, democracy and the rule of law.
UNESCO Intergovernmental Meeting related to the draft Recommendation on the Ethics of AI: UNESCO developed an international standard-setting instrument on the ethics of AI, in the form of a recommendation.
Respect, protection and promotion of human rights and fundamental freedoms and human dignity
Environment and ecosystem flourishing
Ensuring diversity and inclusiveness
Living in peaceful, just and interconnected societies
Proportionality and Do No Harm
Safety and security
Fairness and non-discrimination
Sustainability
Right to Privacy, and Data Protection
Human oversight and determination
Transparency and explainability
Responsibility and accountability
Awareness and literacy
Multi-stakeholder and adaptive governance and collaboration
Bias can arise at one or more of the steps explained below.
Data Collection is the step where bias is most often introduced. Dataset bias may occur if the data is produced by people with a particular tendency, or if the equipment used to collect the data is miscalibrated.
Data Preprocessing prepares the data for the model, and the operations applied at this stage may also cause bias. For example, the way missing values are handled can introduce bias, and data filtering can break the integrity of the dataset.
Modeling is the training process in which patterns are learned; at this step, bias may stem from the parameters or design of the model itself.
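As a concrete illustration of the preprocessing point, the short sketch below (with entirely hypothetical data and group names) shows how a seemingly neutral cleaning step, dropping records with missing values, can skew group representation when missingness is unevenly distributed:

```python
# Minimal sketch of preprocessing bias (hypothetical toy data):
# dropping rows with missing values removes one group
# disproportionately, skewing the training distribution.

def group_shares(records):
    """Fraction of records belonging to each group."""
    total = len(records)
    counts = {}
    for r in records:
        counts[r["group"]] = counts.get(r["group"], 0) + 1
    return {g: n / total for g, n in counts.items()}

# Toy dataset: group B's income field is missing far more often,
# e.g. because the collection equipment or form differed by group.
records = (
      [{"group": "A", "income": 50}] * 40
    + [{"group": "B", "income": 45}] * 20
    + [{"group": "B", "income": None}] * 20
)

before = group_shares(records)

# "Drop rows with missing values" looks like a neutral cleaning step...
cleaned = [r for r in records if r["income"] is not None]
after = group_shares(cleaned)

print(before)  # {'A': 0.5, 'B': 0.5}
print(after)   # group B falls to roughly one third of the data
```

Checking group shares before and after each preprocessing step, as done here, is one simple way to detect this kind of silent distribution shift.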
When AI Goes Bad: Google Photos’ Shame
Explainable AI (XAI) aims to create a suite of machine learning techniques that:
Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
Explainable AI has two main parts:
Explainable Model: Developing AI techniques that are more efficient and more advanced than existing methods, or adapting existing methods, so that the AI system becomes 'explainable'.
Explainable Interface: Enabling end users who are not AI experts, but who are experts in the application domain where AI is used, to evaluate and interpret the AI system's outputs by interacting with the model at an advanced level.
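To make the "explainable model" idea concrete, here is a minimal, library-free sketch of one widely used model-agnostic explanation technique, permutation importance. The "model" and data are hypothetical; the point is that shuffling a feature the model relies on hurts accuracy, while shuffling an irrelevant one barely matters:

```python
# Permutation importance sketch (toy model and data, for illustration):
# an important feature's accuracy drops sharply when shuffled.
import random

random.seed(0)

# Toy data: the label depends only on feature 0; feature 1 is noise.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if x[0] > 0.5 else 0 for x in X]

def model(x):
    # A hand-written stand-in for a trained classifier; it uses feature 0.
    return 1 if x[0] > 0.5 else 0

def accuracy(data, labels):
    return sum(model(x) == t for x, t in zip(data, labels)) / len(labels)

def permutation_importance(data, labels, feature):
    """Accuracy drop when one feature's column is shuffled."""
    base = accuracy(data, labels)
    column = [x[feature] for x in data]
    random.shuffle(column)
    permuted = [list(x) for x in data]
    for row, value in zip(permuted, column):
        row[feature] = value
    return base - accuracy(permuted, labels)

print(permutation_importance(X, y, 0))  # large drop: feature 0 matters
print(permutation_importance(X, y, 1))  # zero drop: feature 1 is ignored
```

Because it only needs model predictions, this kind of score can also feed an "explainable interface": a domain expert can read a ranked list of feature importances without knowing anything about the model's internals.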
Both technical and non-technical methods can be used to implement the above requirements, and they cover all phases of an AI system's lifecycle. The methods used should be evaluated on an ongoing basis, and changes to implementation processes should be reported and justified. A process like the one in the figure can be taken as an example.
Next week, we will do a case study in our face-to-face class at Bahçeşehir University Beşiktaş campus. We will not have guests. Let's see what the results will be.