9. Stakeholders, Ethical Digital Ecosystem and Standards
May 6, 2022, written by Dr. Merve Ayyüce KIZRAK
“It’s going to be interesting to see how society deals with artificial intelligence, but it will definitely be cool.” — Colin Angle
Guidance Mechanisms
Artificial intelligence (AI) is a dynamic environment, and guidance mechanisms are needed to conduct processes within it ethically. With such mechanisms in place, individuals and organizations can work in partnership to create an effective environment.
In this lecture, we will look at current activities and policies. Guidance mechanisms are not independent of policy and organizational options: some of them predate the AI ethics debate, while others respond to it directly.
The first group of mechanisms consists of guidelines that help ensure an ethical AI environment. Some of these stand out more than others. For example, the European Commission has published important documents on AI ethics through its High-Level Expert Group on AI, whose Ethics Guidelines for Trustworthy AI appeared in 2019, and some practical tools have been offered in this context. On the other hand, the growing number of published guidelines can lead to confusion and uncertainty. Despite this, ethical guidelines and frameworks look set to remain an important aspect of ethical discussion in the AI field for some time. For example, professional organizations such as the Institute of Electrical and Electronics Engineers (IEEE) and USACM have published profession-specific guides on AI and ethics. Some professional organizations also contribute to standardization: we can cite ISO/IEC JTC 1/SC 42, the joint subcommittee responsible for AI standardization, which also addresses ethics. The most prominent standardization efforts on the ethical aspects of AI are carried out by the IEEE in its P7000 family of standards.
Published in June 2022: ISO/IEC 23053:2022
Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
Standardization is closely linked to certification, which the IEEE has spearheaded with its Ethics Certification Program for Autonomous and Intelligent Systems. An important idea in the European Commission's (2020a) AI policy proposals is that AI systems with a predefined significant level of risk must undergo certification to ensure that ethical issues are properly addressed. Standardization can also affect or guide other activities by defining requirements and activities. A well-established example is standardization in information security, where the ISO 27000 series defines best practices.
Standardization can provide technical and organizational guidance on a range of issues. The IEEE P7000 series mentioned earlier is a good example. Its sub-standards address specific issues, including:
Transparency (P7001)
Privacy (P7002)
Algorithmic bias (P7003)
Fail-safe design and safety (P7009)
Each standard aims to provide guidance on its particular ethical issue.
Collingridge's Dilemma
Collingridge observed that it is relatively easy to intervene and change the characteristics of a technology early in its lifecycle, but at that point it is very difficult to predict the consequences. The consequences only become visible after the new technology has been in use for some time, and by then it is much harder to intervene. This is a dilemma for anyone who wants to address ethical issues during development: ethical problems only become apparent after use, yet we want to prevent them from arising in the first place. The Collingridge dilemma is not limited to AI.*
Uncertainty regarding the future use of systems remains a fundamental problem that cannot be fully resolved, although there are suggestions for how to address it, at least to some extent. Most of these recommendations refer to development methodologies, and many fall back on some form of value-aware design.* The underlying idea of this approach is to identify the relevant values that should inform the development and use of a new technology and to consult stakeholders on how these values can be negotiated and realized. The most distinctive feature of this type of methodology is that values and principles are built in from the design stage.
New tools are emerging to help ensure AI ethics. They come from a variety of sources:
Groups associated with research funders, such as the Wellcome Data Lab.
Nonprofits such as Doteveryone, which published a consequence scanning kit.
University-based institutes such as the AI Now Institute, which has published an algorithmic impact assessment framework.
Professional bodies such as the UK Design Council, with its Double Diamond framework (Design Council n.d.).
Companies such as PwC, which has published a practical guide to responsible AI.
In addition to these guidance mechanisms aimed specifically at supporting work on AI's ethical challenges, there are many other options, arising from science and technology research and reflection activities, that can form part of the broader discourse on how to support AI ethics. These include anticipating future technologies and their ethical issues; some of this work is closely linked to digital technology, but it can also draw on the wider field of futures and foresight studies. Stakeholder dialogue and public engagement are another major area of activity that will play a central role in AI ethics, drawing on a large body of previous work that provides multiple methodologies. A further issue that should not go unnoticed is the creation of discourse through awareness-raising, education, and training, including for policymakers. These activities stimulate the sense of responsibility that must be awakened in AI developers and practitioners.
The table below summarizes the topics we have discussed and the issues they can address. It gives a general picture, even if it does not cover every alternative.
Artificial Intelligence Ethics Stakeholders
The concept of the stakeholder is widely used in the organizational literature to help organizations decide whom to consider when making decisions or taking action. From an AI ethics perspective, individuals or groups that are significantly affected by an action, or potentially at risk from it, can be viewed as stakeholders.
There are systematic and comprehensive analysis methods for stakeholder identification and engagement. Even so, identifying stakeholders in AI ethics is difficult: depending on the meaning and scope we give the concept of "artificial intelligence" and on its possible social consequences, individuals, private-sector organizations, academics, NGO representatives, and government institutions may all, or in large part, count as stakeholders.
The "ecosystem" metaphor is widely used for the circle formed by the stakeholders. Because roles such as ensuring sustainability and creating different interaction environments are emerging. We can show the stakeholders three basic classes within the scope of the ecosystem.
Policy-oriented organizations
Other organizations
Individuals
The figure below illustrates the relationships among, and examples of, the stakeholders that make up these three classes. For example, an individual user may work in a stakeholder organization and may also take part in standardization and policy development.
The first stakeholder category in the figure is policy-oriented. It encompasses policymakers who play an important role in shaping how ethics and human rights issues can be addressed and who set policies related to AI, including research policy and technology policy. This includes international organizations such as the OECD and the UN with its agencies, including UNESCO and the International Telecommunication Union (ITU). The second proposed category of stakeholders is organizations. This group can contain many, often very different, members. It covers not only companies that develop and distribute AI on a commercial basis, but also user organizations and companies with specific roles, such as insurers, that facilitate and stabilize liability relationships.
In addition to commercial organizations in the AI field, many other bodies are involved in the AI value chain, including professional organizations, standardization organizations, and educational institutions. These should be included because of their clear relationship to the use of standards, the integration of ethical considerations into standards, and the raising of awareness and knowledge through education. Similarly, media organizations play a crucial role in raising awareness of ethical issues and guiding public discourse, which may in turn encourage policy development.
The third and final category of stakeholders in this overview is individuals. Policy bodies and organizations are made up of individuals and cannot exist without their individual members. Some of these individual stakeholders correspond to corporate stakeholder groups. A developer may work for a large for-profit company, but AI applications can also be developed by a hobbyist technologist with the expertise to generate new ideas or applications. Another example is individuals who are not users but are nonetheless affected, such as prisoners whose parole decisions are made using AI, or patients whose diagnosis and treatment depend on AI.
In summary, the stakeholder population of the AI field is complex.
Artificial Intelligence Ecosystem for Community and Human Flourishing
The use of terms like "innovation ecosystem" is relatively common in innovation management and related fields. The term ecosystem comes from biology: according to National Geographic, "an ecosystem is a geographic area where plants, animals, and other organisms, as well as weather and landscapes, work together to form a bubble of life." The metaphorical use, however, is only vaguely related to the term's original meaning in biology.
Outside biology, an ecosystem is understood as a complex, interconnected network of individual components. The concept is popular in part because it suggests that the components of the system behave like living organisms.
The figure above summarizes the features of the AI ecosystem, a metaphor used to describe a community of socio-technical actors. Although the absence of a few actors does not harm the system, it is an interactive mechanism that fosters inclusion, and a structure in which actors try to develop together dominates.

Despite these advantages, there are significant drawbacks to applying the ecosystem concept to socio-technical systems; the literature calls this an imperfect analogy. Unlike natural ecosystems, innovation ecosystems are not themselves the result of evolutionary processes but are deliberately designed. There is concern that an analogy not grounded in rigorous conceptual and empirical analysis may hinder further research and policy around innovation, so when defining a social system in terms of a natural one, we must stay aware of the potential conceptual pitfalls. A heavy emphasis on evolutionary selection processes can lead to an implicit technological determinism, in which the technology in question is seen as an external, autonomous, and inevitable development that forces individuals and organizations to adapt. Moreover, the struggle for survival implied by evolution is problematic, since it applies not only to organizations but potentially to cultures, where only those adapted to the technology survive.
To understand the ethics of ecosystems and how such ecosystems can be shaped, it is important to identify the relevant parts of the AI innovation ecosystem. The figure above provides a systems view of the ecosystem, focusing on the need for specific algorithmic impact assessments.
Algorithmic Impact Assessments
Algorithms can be part of decision-making systems. Algorithmic decision systems (ADS) "rely on the analysis of large amounts of personal data to reveal correlations and derive information deemed useful for decision making". Decisions made by an ADS can be wrong, and algorithmic impact assessments are designed to reduce the risks of bias, discrimination, and erroneous decision-making.*
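As a concrete illustration, the sketch below shows one quantitative check that could feed into an algorithmic impact assessment: measuring how an ADS's approval rate differs across groups (a demographic parity gap). This is only a minimal sketch under stated assumptions; the toy data, function names, and the 0.10 review threshold are illustrative, not part of any standard assessment methodology.

```python
# A minimal sketch of one bias check that could feed into an algorithmic
# impact assessment: the gap in approval rates between groups (demographic
# parity). The data and the 0.10 threshold are illustrative assumptions.

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved) pairs, approved in {0, 1}.
    Returns (largest approval-rate difference between groups, rates per group)."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + approved
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Toy output of a hypothetical ADS: (protected group, decision) pairs.
    sample = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
              ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
    gap, rates = demographic_parity_gap(sample)
    print("approval rate by group:", rates)
    print(f"demographic parity gap: {gap:.2f}")
    if gap > 0.10:  # illustrative threshold for triggering human review
        print("gap exceeds threshold: flag the system for review in the AIA")
```

In a real assessment, a metric like this would be one input among many, alongside qualitative review of data provenance, per-group error rates, and appeal mechanisms.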
In public discourse, caring for the good of the environment in general is seen not just as an option but as a moral obligation, and this can be grounded on many normative premises. Here we have taken as our basis the ancient tradition of human flourishing, which is closely tied to the question of living with a high level of well-being. Society exists within natural, social, and technical ecosystems that powerfully influence our ability to live well. Taking all these relations as a basis, another issue deserving special attention is human rights. In this context, we will examine general definitions, current examples, and debates next week.
* These passages are quoted from the first source in the references.
References:
Bernd Carsten Stahl, "Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies", Springer, ISBN 978-3-030-69978-9, 2021.