11. AI Ethics & Consequences

May 20, 2022. Lectured by Merve Hickok; written by Dr. Merve Ayyüce Kızrak.

We are happy to host AI ethicist Merve Hickok in our lecture this week. In this course, we take AI ethics beyond discourse. We close the artificial intelligence portion of the course by evaluating the sources of bias, their impact on society, the debates they raise, the harms they cause, and the lessons learned.

"As more and more artificial intelligence is entering into the world, more and more emotional intelligence must enter into leadership." — Amit Ray

As both developers and users of big data and artificial intelligence (AI) systems and products, we devote this lecture to reasoning about how to critically approach, analyze, and shape these technologies, and how they will shape our future. This is the final lecture of the computer and ethics course directly related to AI. Starting next week, we will discuss concepts such as blockchain, digital organizations, games, and the metaverse.

Considerations for Artificial Intelligence Ethics

All the technologies around us greatly affect us as citizens and consumers. Let's not forget that, beyond our identity as developers, we are human and we live in a social environment, and this shapes how we understand what is going on around us. We will now consider these technologies from several perspectives: ethics, human rights, and risk management.

AI ethics goes beyond regulation and compliance and drives innovation. Complying with privacy laws such as the GDPR in Europe and the KVKK in Turkey, and fighting discrimination through them, is the baseline; there are not many question marks there. Ethics, however, means innovating beyond that baseline and doing the right thing for citizens, consumers, employers, employees, and the entire ecosystem around you. Here the responsibility belongs to everyone. Rather than an AI development cycle, what is needed is an AI lifecycle.

Before you finish and launch a product built with AI technology, you need to stop and ask some questions:

  • How is our design?

  • What is our product?

  • What will our platform do?

  • Should it exist in the first place, and what are its possible harms or effects?

In fact, these questions need to be asked from the very beginning of the project, even when the product is only an idea, and the same responsibility must be embedded throughout the entire product cycle. Until you retire the product, until it is no longer available as a product or service, the questions need to be asked constantly. In short, an AI system should not be treated as plug-and-play. For this, we need to critique both our own products and others', because our responsibility extends beyond our organization to the social environment in which we live. Of course, some companies and organizations may not be happy with the way we raise our concerns, and that in itself tells us a lot about the values embedded in a company's culture and its products, and about what those products can do to society and consumers in general.

Luciano Floridi and Mariarosaria Taddeo developed a framework for data ethics. They examine the ethical implications of AI from three perspectives: data, algorithms, and practices.

Data: Focuses on ethical problems posed by

  • Collection and analysis of large datasets

  • Issues spanning the whole use of big data (including generation, recording, curation, processing, dissemination, sharing, and use)

Algorithms: Addresses issues posed by

  • Increasing complexity and autonomy of algorithms broadly understood (basic automated systems, machine learning, robots, autonomous systems, …)

Practices: Addresses questions concerning

  • Responsibilities and liabilities of people and organizations in charge (including responsible innovation and professional codes)

  • Both in development & implementation

Sources of Bias

AI isn't just about data, and bias doesn't exist only in data; “bias” itself is a loaded word.

Bias is a process: it begins with the question of whether we should design an AI system at all, and it can enter, consciously or unconsciously, at every moment of design. There is a great deal of discussion about bias in AI and machine learning systems today.

“In its widest sense, bias can originate, or be found, across the whole spectrum of decisions made about processing data. Bias can originate in human-societal institutional interactions that are codified in data, in labels, in methods, and in trained models. Consequently, these sources of bias can lead to unfairness in the outcomes resulting from the algorithm. It can even be found in how we choose to interpret those algorithmic outcomes.”*

Such biases can lead to injustice. To manage this risk, the authors created a framework of straightforward, distinctive solutions and techniques. The yellow boxes highlighted in the figure below mark the areas of bias the authors discuss; the others are covered by the bias-mitigation procedures of the Independent Audit of AI Systems.
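Before turning to the audit recommendations, here is a minimal sketch, not taken from the paper, of how “unfairness in the outcomes” can be made measurable. It computes one simple metric, the demographic parity gap, over a model's decisions; the group labels and decisions below are hypothetical:

```python
# Minimal sketch: demographic parity gap across groups.
# Hypothetical data; real audits use multiple metrics plus domain context.
from collections import defaultdict

def demographic_parity_gap(groups, decisions):
    """Return the largest gap in favorable-decision rates between groups."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, decision in zip(groups, decisions):
        totals[group] += 1
        favorable[group] += decision  # 1 = favorable outcome, 0 = not
    rates = {g: favorable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: favorable decisions (e.g., loan approvals) per applicant group.
groups    = ["A", "A", "A", "B", "B", "B", "B", "A"]
decisions = [ 1,   1,   0,   0,   0,   1,   0,   1 ]
gap, rates = demographic_parity_gap(groups, decisions)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5 -> a gap this large warrants investigation
```

A single metric like this never proves or disproves bias on its own, but it is the kind of quantitative signal a lifecycle audit can track from design through operation.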

In the study, the authors and ForHumanity recommend a lifecycle process and a robust governance mechanism for the Independent Audit of AI Systems.*

How we use the data, how we decide on its features, which model we choose, how it should work, how we monitor it, whether we have validated it: all of this requires careful consideration across four basic phases:

  • Design

  • Development

  • Distribution

  • Operation

As can be seen from the figure, the bias we aim to reduce is divided into three headings. The basic need that follows from this is a solid governance mechanism.

Sample bias, non-response bias, and cognitive bias introduce and increase the likelihood of various risks that arise when we do not manage our datasets well; a minimal sketch after the examples below shows one simple check for the first of these.

To give a few examples:

  1. Legal risk: arises from non-compliance with regulations such as anti-discrimination and data privacy laws.

  2. Ethical and reputational risk: the perpetuation of stereotypes and discrimination, resulting from non-compliance with ethical rules and social responsibility policies.

  3. Functional risk: results from poor validity, poor cohesion, and mismatch with the system's intended scope, structure, context, and purpose.
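As promised above, here is a minimal sketch of one way to surface sample bias: comparing each group's share of a training set against a reference population. The counts, reference shares, and tolerance below are all hypothetical:

```python
# Minimal sketch: flag groups whose share of the dataset deviates from a
# reference population share. All numbers below are hypothetical.

def sample_bias_report(dataset_counts, population_shares, tolerance=0.05):
    """Compare observed group shares to expected shares; flag large gaps."""
    total = sum(dataset_counts.values())
    report = {}
    for group, expected in population_shares.items():
        observed = dataset_counts.get(group, 0) / total
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "flagged": abs(observed - expected) > tolerance,
        }
    return report

# Example: a 1,000-record training set vs. census-style reference shares.
counts = {"group_a": 700, "group_b": 250, "group_c": 50}
shares = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}
for group, row in sample_bias_report(counts, shares).items():
    print(group, row)
# group_a is overrepresented (0.70 vs 0.55) and group_c underrepresented
# (0.05 vs 0.15); both are flagged for review before training.
```

A check like this addresses only sample bias; non-response and cognitive bias require scrutiny of how the data was gathered and labeled, not just its final proportions.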

Consequences

Let us look at some of the problems and debates that arise from these causes and that we encounter in daily life.

Harms

  • Denial of services or opportunities

  • Lower service or product levels

  • Labor to make the system work for yourself

  • Sexualization, objectification, abuse, harassment

  • Psychological harms

  • Financial harms

Learned Representations

  • Misrepresentation: Stereotypes, negative attitudes, and objectification emerge from language models

  • Underrepresentation, marginalization, erasure: Identity terms disproportionately associated with pornographic content can cause LGBTQ+ mentions to be filtered out of a system entirely

  • Overrepresentation: Anglocentric perspectives serve as the “default”, amplifying privileged (not majority) voices

  • Personally identifying representations: Personally identifiable information (PII) can be extracted from trained models

Propagating Harms: Malicious Use

  • Persuasion towards harmful acts

  • Polarization

  • Radicalization

  • False, damaging information

Source: Developing and understanding responsible foundation models.

Ethical Principles and Human Rights

Despite the proliferation of published and adopted principles in the field of AI, there has been little work contextualizing and comparing this pool of principles. The whitepaper and its associated data visualization therefore compare the contents of thirty-six leading AI policy documents side by side. This effort revealed a growing consensus around eight key thematic trends: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values.

The Need for Risk Analysis from a Business Perspective

We also need to talk about risk management from a business perspective and understand what it means there. This may sound purely commercial, and many people say they don't care; but if our business involves AI, we want to make sure the AI systems are sound: that they do not discriminate, do not coerce, and so on. This matters because it affects our reputation, our brand, our revenue, and consumers' trust in us; how consumers perceive our values is tied to our products.

Everything we cover about governance and good practice helps developers. Whatever the product or service is, data centers, AI models, or anything else, it helps developers understand what they are building.

It's a good idea to identify risks early in the process. This helps us mitigate and manage them, and to understand, secure, and improve our models and results. Moreover, it can surface new information.

It can give us better estimates for our business, better forecasts about our customers, and help grow our customer base.

  • It will help improve customer experience and satisfaction.

  • Even where ethics goes beyond what the law requires, it will help with regulatory alignment.

  • It will make compliance obligations much easier to fulfill.

  • It will help identify the people affected by the product and/or service, and since the system will be more reliable, it will be adopted more easily.

As a business, you are responsible to regulators, consumers, investors, and internal stakeholders for managing these risks.

References:

  1. Brown, S., Carrier, R., Hickok, M., & Smith, A. L. (2021, July 8). Bias Mitigation in Data Sets. https://doi.org/10.31235/osf.io/z8qrb

  2. Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A, 374, 20160360. https://doi.org/10.1098/rsta.2016.0360
