2. Perspectives on Artificial Intelligence

Lectured and written by Dr. Merve Ayyüce KIZRAK, March 18, 2022

“AI doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens to come in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.” — Elon Musk, Technology Entrepreneur and Investor

Although it may sound like a cliché, we now live in a digital world shaped by computers and by the internet and social media that followed them. The large volume of real-time and streaming data they generate has created the desire to process that data for specific purposes, to make sense of it and evaluate it more effectively, to build decision processes on it, and, of course, to realize its economic impact. To do this, the focus has been on developing the technical infrastructure to collect, store, share, process, and reuse data. At the same time, various theoretical and practical approaches have become necessary to process the data. One of them is artificial intelligence.

AI knows where we are today, whom we meet or may meet, what to order, what music to listen to, and which road to take home (or it may suggest these to us, or decide for us). In addition, states and governments follow similar approaches in technologies such as autonomous weapons. Artificial intelligence also comes to the forefront because of its financial returns in sectors such as global trade systems, financial markets, banking, and advertising. Notice that our perspective on the same technology can change completely depending on the sector in which it is used and the data it processes (recommendation systems, autonomous weapons, financial applications, healthcare applications). You can call AI systems dangerous and destructive, or you can call them lifesaving. Therefore, ethical discussions must come to the fore depending on the area of use, the way a system works, and the data it processes. That is why, in this course, we will take up the ethical discussion of many technology topics, especially AI.

Both users and developers describe AI systems as "black boxes" in many respects. Addressing this requires both technical and ethical discussion. In the coming weeks, we will look at technical approaches to issues such as bias and discrimination. But first, let's try to fully understand the basic terms of AI; it is a good approach to build all later concepts on this foundation.

The most common sub-fields of AI today are:

Machine learning: Systems based on algorithms that can learn from datasets and whose performance can be improved with more data over time.

Artificial neural networks: Structures that learn how to perform a task by taking advantage of features in the data, usually without being programmed with any task-specific rules.

Deep learning: A more specialized subfield that relies on complex statistical models and algorithms with multiple layers of parallel processing, aiming to model the way the biological brain works in a simplified manner. Because deep learning needs large datasets and powerful processing units to learn on its own, successful results have only been achieved in roughly the last 20 years.
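
To make these definitions concrete, here is a minimal sketch in Python. It assumes scikit-learn is available; the dataset, network size, and training-set sizes are illustrative choices, not part of the lecture. A small neural network learns a task purely from labeled examples, without task-specific rules, and its test performance is measured as the training set grows.

```python
# Minimal sketch (assumes scikit-learn): a small neural network learns from
# labeled data alone, and its performance tends to improve with more examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)  # handwritten digits as 64-dimensional feature vectors
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (100, 400, len(X_train)):  # grow the training set step by step
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    model.fit(X_train[:n], y_train[:n])  # only examples are given, no task-specific rules
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n} examples -> test accuracy {accuracy:.2f}")
```

In a typical run the accuracy rises as the training set grows, which is exactly the “performance improves with more data” property in the definition of machine learning above.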

So, AI has many different tools. If we were to draw a Venn diagram showing how all these concepts fit together, it might look like this: AI is a huge set of tools for making computers behave intelligently. Within AI, the biggest subset is probably the tools of machine learning, but AI also has tools other than machine learning.

The part of machine learning that's most important these days is neural networks or deep learning, which is a very powerful set of tools for carrying out supervised learning.

Data science is perhaps a cross-cutting subset of all of these tools: it uses many tools from AI, machine learning, and deep learning, but it also has some separate tools of its own, and it solves a set of very important problems in driving business insights.

The “Third Wave” of AI

It all started with the British scientist Alan Turing searching for the answer to the question “Can machines think?” in the 1950s. Around the same time, Ordinarius Professor Cahit Arf presented his work titled “Can Machines Think and How?” at Atatürk University in Erzurum. Since then, researchers have turned toward AI technologies that can think, learn, and reason like humans, in line with the vision that accompanied AI at its emergence.

Narrow AI: The name given to the first developmental stage of AI, covering systems or applications that can only perform specific tasks. This stage describes the majority of applications so far. Chess and Go programs are examples of this stage, because the techniques they use cannot go beyond the purposes for which they were designed. This stage also includes assistive AI, defined as systems that support people's work and make their everyday lives easier.

General AI: The stage at which AI technologies will be able to do what human intelligence can do, both by learning and by improving themselves. At this stage, systems are expected to emerge that perform close to human level, without human support, in fields such as mathematics, physics, art, and law.

Super AI: The stage at which systems will be far superior to humans in performance and achievement, able to develop and learn beyond what humans can perceive, and able to make and implement completely independent decisions. When and under what conditions this stage may occur, or whether it will occur as predicted at all, remains entirely a matter of speculation.

AI Terms and Concepts

A 2018 review of the AI literature by the academic publisher Elsevier suggests that a number of key concepts and research areas make up the academic discipline of AI. Based on a sample of 600,000 AI-related articles analyzed against 800 keywords, the report classified AI publications into seven clusters:

  1. Search and optimization

  2. Fuzzy systems

  3. Planning and decision making

  4. Natural language processing and information representation

  5. Computer vision

  6. Machine learning

  7. Probabilistic reasoning and neural networks

This highlights that AI is not a single technology, but can be better understood as a set of techniques and sub-disciplines.

While all these clusters are familiar components of the AI field, the emphasis in current AI ethics is on machine learning and neural networks. None of this is really new: machine learning has been an established part of AI research since its inception, but recent advances in computing power and the availability of data have led to a surge in its application across a wide variety of fields. Machine learning itself covers a wide variety of techniques and approaches, including supervised learning, Bayesian decision theory, various parametric and non-parametric methods, clustering, and others.
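
As a rough contrast to the supervised example sketched earlier (again assuming scikit-learn; the choice of k-means and of the digits data is purely illustrative), clustering groups unlabeled data without being told what the groups mean:

```python
# Minimal sketch of clustering (assumes scikit-learn): the algorithm is given
# no labels at all and simply groups similar digit images together.
from sklearn.datasets import load_digits
from sklearn.cluster import KMeans

X, _ = load_digits(return_X_y=True)  # the labels are deliberately ignored
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X)

print("cluster assigned to the first image:", kmeans.labels_[0])
print("images per cluster:", [int((kmeans.labels_ == k).sum()) for k in range(10)])
```

Whether such clusters correspond to anything meaningful is left to the analyst, which is one reason these techniques raise different questions from supervised learning.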

Neural networks are a major factor behind the recent success of machine learning, which is the main driver of the current AI wave. A technique of particular importance is deep learning, which uses different types of neural networks and has contributed to recent successes in fields such as speech recognition, visual object recognition, and object detection, as well as in other areas such as drug discovery and genomics.

It is important to understand which characteristics of AI are ethically relevant:

  1. Opacity (being a black box): Machine learning algorithms and neural networks are so complex that their inner workings are not easy to understand, even for subject-matter experts. Although they remain purely technical, deterministic systems (only partially so, since they are learning systems and therefore change), it is almost impossible to fully understand their inner workings; the short sketch after this list illustrates the point.

  2. Unpredictability: As a result of point 1, it is difficult, if not impossible, to predict the outputs of systems based on understanding the inputs.

  3. “Big data” requirements: Machine learning systems in their current form require large training datasets and significant computer capacity to build models.
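
As a rough illustration of the opacity point (again assuming scikit-learn; the small network is the same kind used in the earlier sketch and is purely illustrative), even a deliberately small trained network rests on thousands of learned parameters, so inspecting them says very little about why a particular prediction was made:

```python
# Minimal sketch of opacity (assumes scikit-learn): the prediction is easy to
# obtain, but it is produced by thousands of learned weights that carry no
# human-readable explanation.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0).fit(X, y)

n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print("prediction for the first image:", model.predict(X[:1])[0])
print("learned parameters behind that single answer:", n_params)  # several thousand numbers
```

Point 2 follows from the same picture: because these parameters interact in non-linear ways, knowing the inputs alone is rarely enough to anticipate the outputs.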

Possible Purposes of AI

When we start to think about AI ethics, we should first understand the purposes of AI correctly. Digital technologies are flexible and open to interpretation; as such, they can be used for an infinite number of purposes, which may or may not be compatible with those of the original developers and designers. When we consider the use of AI, different purposes emerge that drive the development and design of systems. Stahl, in his book, divides them into three basic purposes:

  1. AI for efficiency

  2. AI for social control

  3. As an alternative and complement to the first two purposes, AI for human development

Published policy and strategy documents dealing with AI generally mention these three motivations or combinations of them.

A report presented to the President of the United States, emphasizing the economic advantages, said:

“AI has the potential to double annual economic growth rates in the countries analyzed by 2035.”

The European Commission, in turn, states:

“AI is expected to expand across many businesses and industrial sectors, increasing productivity and generating strong positive growth.”

We know that the rapidly growing data collection capabilities of AI, coupled with its ability to detect patterns and correlations between variables, provide new ways to influence and control human behavior. From this point of view, the use of AI and many related technologies also paves the way for social control. This control can be exercised in subtle ways, drawing on ideas from behavioral economics, or it can appear very harshly, as in the Chinese social credit scoring system.

Another example may violate legal boundaries, as in the Facebook–Cambridge Analytica case, where social media data was used to illegitimately influence the outcome of democratic elections.

Zuboff argues that social control of this kind fully overlaps with AI, as new business models and socio-technical systems develop into the driving force of what she calls "surveillance capitalism".

The pursuit of efficiency and the resulting economic benefits can lead to a strong economy that provides the material substrate for human well-being.

An efficient economy creates wealth, paving the way for human development that would otherwise be impossible. For example, the transition from coal-based power generation to solar power is expensive, and a wealthy economy is better placed to afford it. Also, the pursuit of efficiency and profit can be a legitimate field of activity in which to strive for excellence, and people can thrive in that activity.

We can stress that these three purposes of AI are not inherently contradictory; rather, they define main areas of emphasis, or different aspects of AI, that can guide its development and deployment.

The clear purpose of doing the ethically right thing with AI can be described with reference to human development. We will continue this discussion with the philosopher Dr. Cansu Canca next week.

References:

  1. Bernd Carsten Stahl, "Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies", Springer, ISBN 978-3-030-69978-9, 2020.

  2. Elsevier (2018) Artificial intelligence: how knowledge is created, transferred and used. Trends in China, Europe, and the United States. Elsevier, Amsterdam. https://www.elsevier.com/__data/assets/pdf_file/0011/906779/ACAD-RL-AS-RE-ai-report-WEB.pdf. Accessed 22 Sept 2020

  3. Gasser U, Almeida VAF (2017) A layered model for AI governance. IEEE Internet Comput 21:58–62. https://doi.org/10.1109/MIC.2017.4180835

  4. Bishop CM (2006) Pattern recognition and machine learning. Springer Science+Business Media, New York

  5. Alpaydin E (2020) Introduction to machine learning. The MIT Press, Cambridge MA

  6. Executive Office of the President (2016) Artificial intelligence, automation, and the economy. Executive Office of the President of the United States. https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF. Accessed 23 Sept 2020

  7. House of Lords (2018) AI in the UK: ready, willing and able? HL Paper 100. Select Committee on Artificial Intelligence, House of Lords, Parliament, London. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf. Accessed 23 Sept 2020
