Artificial intelligence: instrument or creature?

Renowned developers and shareholders of powerful AI technologies are warning of the major, unforeseeable consequences of a technology that is gaining influence according to laws of its own. A hundred such experts have signed an open letter calling for a pause in the development of AI technologies. Others, in some cases the very same major developers and shareholders, emphasise the almost utopian opportunities that the imminent technological breakthrough in the transition from AI to ASI (artificial superintelligence) will bring. As with every major technological development of the last 150 years, these positions are personalised and exaggerated in the media.

Dystopias

Anyone who follows Elon Musk, Bill Gates or Yuval Noah Harari will find themselves in a dystopian scenario in which a superintelligence on the verge of a breakthrough threatens human autonomy or even represents a new stage of development beyond humans. From this perspective, AI is less a technology than a creature that reproduces itself according to its own rules and possibilities, can pursue its own interests and enters into competition with humans. As a humanities scholar, the first thing that comes to my mind is not complex algorithms but the sorcerer's apprentice:

«Great is my dismay!
Spirits raised by me
Vainly would I lay!»

The Sorcerer's Apprentice - J. W. von Goethe

The broom develops a will of its own and a power of its own to enforce this will against the sorcerer's apprentice. The sorcerer's apprentice has activated a potential that he can no longer control and that threatens him and his order.

Utopia

Those who listen to more optimistic voices, such as Sam Altman, Andrew Ng, Fei-Fei Li or Andrew McAfee, recognise in AI a useful and very powerful tool that will help humanity to cure diseases, optimise education systems, reduce energy consumption and improve the overall quality of human life. This optimism is inspired not by the motif of the sorcerer's apprentice, but by the myth of the fire-bringing Prometheus. Trevor Noah, presenter of the Daily Show until 2022 and host of the podcast «What Now» since 2023, has called Sam Altman himself, the CEO of OpenAI, a «Prometheus».

If we take the initial situation to the extreme and personalise it, we can ask: Is Sam Altman, the chief developer and CEO of OpenAI, a sorcerer's apprentice or a Prometheus? And does current AI represent a technological development, or is it a creature that will grow, develop its own ideas and values and find ways to implement them?

AI between creation and instrument

Can the innovation of AI be compared with the iPhone phenomenon? Or is it merely the next innovation, riding the wave of that phenomenon as a free rider? Or should we even compare it to the invention of the atomic bomb, a possibility and danger of human self-extinction through technology? Anyone who wants to develop a position on these questions, which matter for the ethical assessment of AI technology, should not be guided too much by the utopias and dystopias of our pop culture. David Lagercrantz's fourth Millennium book «Conspiracy» (published in English as «The Girl in the Spider's Web») or the film «Free Guy» offer great entertainment and can give us food for thought. But they are not a basis we can refer to when thinking about the opportunities and risks of this technology. However, it would be just as negligent to play down this innovation and its potential, underestimating it by analogy with steam engines, combustion engines or contraceptives.

The first and decisive question here is: Is AI technology an instrument, a technology that humans use, or a creature that individual humans have brought into the world and that will develop, reproduce and improve independently? Are we talking about innovation - in the sense of human action - or evolution - in the sense of creaturely development? The theory of the technological singularity currently shapes many people's ideas about AI. It serves as a blueprint for popular science, novels and Hollywood films and explains the path from the computing machine to AI, and from there to superintelligence, as an evolutionary process.

Technological singularity

Computing is not thinking. And computing capacity is not the same as intelligence. The theory of the technological singularity, however, does not clearly distinguish between these terms. The idea behind this widespread theory is simple: the computing power of computers is growing rapidly. In all probability, it will very soon overtake the computing power of the human brain. A very powerful artificial intelligence that works in a self-learning way, as current language models such as ChatGPT, Bard or Sydney do, coupled with this sheer computing power, could then not only defeat the human mind in specific areas such as chess, but outperform it in general. Like the sorcerer's apprentice's broom, AI would develop a will of its own, which it would then impose on the world.

This idea sounds fascinating. But if it were to become reality, so that AI could operate like a living being in the future, the following conditions would have to be met:

  1. Self-reproduction: Living organisms are autopoietic systems. They are therefore networks of processes that are able to produce more of themselves and maintain themselves.
  2. Evolution: Over generations, living organisms adapt to their environment and change.
  3. World of experience: Living beings have an inner world of experience. They have sensations, sometimes emotions and certain living beings even have consciousness.

In principle, I think it is conceivable that AI technologies could themselves generate and train new AI technologies. Such a generative AI would then also virtually realise what we understand by evolution in biology: AI systems programming their improved successors. However, for AI to really learn something fundamentally new and open up a previously unknown field, it would also need a world of experience. And that cannot be gained from more memory capacity and computing power; a completely different learning process would have to be set in motion.

«However, for AI to really learn something fundamentally new, to open up a previously unknown field, it would also need to have a world of experience.»

This third criterion probably marks the basic difference between human thinking and the functioning of artificial intelligence algorithms. The basic operation of artificial intelligence differs significantly from human thinking. When a language tool computes, it analyses large amounts of data in order to recognise patterns in this flood of data. Human thinking, on the other hand, rests on a linguistic framework that categorises what is the case, what could have been the case and what will never be the case, and can place it under principles. The human mind can not only think about the world by categorising what is, what is possible or what is impossible. It can also give an account of the way in which this classification is made, criticise it, confront it with other judgements and weigh it against them.

AI must constantly recognise new orders in the clutter of data, which - and this is the big difference! - are not reflexively transparent to the AI itself. It does not know and cannot explain why and how it has recognised these patterns. It operates purely statistically.

Stochastic parrot

The computational linguist Emily M. Bender and her co-authors coined the term «stochastic parrot» to describe this: «stochastic» refers to the use of probabilities or statistical methods to generate text, while «parrot» stands metaphorically for an artificial intelligence that repeats or reproduces information without any real understanding of what is being said.

The idea behind the term «stochastic parrot» is that these AI systems can produce superficially human-like language, but without a deep understanding of the meaning, context or connotations of the sentences they generate. They essentially reproduce patterns from their data rather than truly understanding what they are saying. They have no awareness that would allow them to relate themselves to these statements.
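
To make the «stochastic» part concrete, here is a deliberately primitive sketch in Python; the miniature corpus and all names in it are invented purely for illustration. It continues a text by sampling which word tended to follow the previous one in its training data, without any representation of what the words mean.

```python
import random
from collections import defaultdict

# Toy illustration of "stochastic" text generation: a bigram model that
# continues a sentence by sampling the next word from the frequencies
# observed in its training data. It tracks which words follow which,
# nothing about what any of the words mean.
corpus = (
    "the dog chases the cat . the cat chases the mouse . "
    "the mouse eats the cheese ."
).split()

# Count, for every word, which words follow it and how often.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def continue_text(start: str, length: int = 8) -> str:
    """Generate a continuation by repeatedly sampling a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = followers.get(word)
        if not candidates:                    # dead end: no observed continuation
            break
        word = random.choice(candidates)      # sample in proportion to frequency
        output.append(word)
    return " ".join(output)

print(continue_text("the"))  # e.g. "the cat chases the mouse . the dog"
```

Real language models replace this frequency table with a neural network trained on vast amounts of text, but the generation step remains a draw from a probability distribution over possible next words.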

Emergence and intelligence

If this is true, then no matter how much the storage capacity of language models grows, there may be language tools or other AI applications that outperform human ability in certain areas, but no superintelligence that surpasses and rivals the human mind as a whole. The human brain is extremely complex and dynamic and has an incredible capacity for adaptation, learning and creativity. Measured against its computing power, it is incredibly efficient. Current AI systems are very limited by comparison and can mimic human cognitive abilities only partially or in specific areas.

A child does not need to have seen 1,000 dogs to be able to distinguish dogs from cats with some reliability. The child develops a concept of the dog, of the cat, of the things we have given names to. They have a concept of «dog», an understanding of what dogs are. And they can themselves relate to these dogs, cats and things. This is a complex process that involves both sensory input and cognitive processing and is shaped by experience, context and learning. Pattern recognition is only one part of this process; it does not by itself lead to judgement and awareness.

I therefore believe that the theory of technological singularity is wrong. Intelligence does not arise from more and faster pattern recognition. This would require a categorical leap, an emergent moment that the theory of technological singularity cannot even describe. However, not only from a cognitive science perspective, but also from a philosophical one, I consider the idea of a self-aware, autonomously acting superintelligence to be science fiction. And that has to do with what language actually is.

Language and communication

Language is originally and primarily not simply a means of representing things in the world, but of communicating with each other about these things in the world. The human mind also uses pattern recognition to a certain extent: we recognise similarities between objects, draw analogies between situations and recognise patterns in a data set.

But the human mind does more than this. It can state in constative sentences that something is the case: «There is a table here.» It can authentically confess love or express pain: «Ouch!» For both constative sentences and authentic expressions, it is crucial that we not only read the world, statistically continue it and add to it, but also experience it.

Language and thought are conceptually linked to people who communicate about something in the world - be it imaginary, imagined or real.

Rationality as a prerequisite for language

This requires two basic prerequisites: first, that speakers are fundamentally aware of the perspective of those affected in the world, and second, that they can be addressed as persons about their statements. Without the perspective of those affected, language lacks its lifeworld backbone, and neither authenticity nor experiential learning is possible. Pattern recognition, as used by AI tools, represents only second-order experiential learning: such tools do not learn from the world, but from data that represent this world secondarily and without context. Sensory experiences are accessible to them only indirectly, through processed data. It is one thing to know that the schoolchild Max has to cross a junction with traffic lights on his way home; it is another to know this as someone who was once a schoolchild, perhaps has children of their own, has worried when their own child was late, or knows someone who has been hit by a car.

What we know and think as humans is situated in a context that we are not always aware of, but to which we are in principle responsive. When Peter complains: «This shitty junction! Something has to be done about it!», there are different ways of responding. I can reply: «I think the junction is nice. It calms me down to stand on the corner and listen to the traffic.» I can agree: «Exactly! Let's block the junction!» But I can also ask back, assume motives, put the statement into a framework within which I can understand it: «Are you worried about your children, whose way to school leads through this dangerous junction?»

We assume that the other person has reasons behind their statement. If we find a statement problematic or disagree with it, we can ask about these reasons and discuss and weigh them up together in a process of mutual understanding. Of course, we often impute dubious and false motives to each other. However, this does not speak against the fundamental possibility of communication, but rather shows that communication can also and especially succeed through misunderstandings.

Interim conclusion: AI is a technology

Based on the above considerations, I have come to the conclusion that current AI, and the AI foreseeable in the medium term, is technology and not a creature. I would like to summarise the most important arguments in favour of this:

  1. Self-reproduction and evolution: Living organisms have the ability to reproduce themselves and are subject to evolution, which enables genetic changes and adaptations over generations. AI systems do not reproduce themselves and cannot evolve in this species-like way. They do not adapt to a living environment, but merely get better at performing a certain operation. They then recognise deviations from a given pattern more quickly and reliably. But they do not, for example, suddenly ask themselves whether it would not make more sense to do something completely different.
  2. Creativity and intuition: Living beings can be creative and make intuitive decisions. AI systems can «learn» based on patterns and data, but their «creativity» is limited to reproducing existing patterns or data.
  3. Consciousness and experience: AI technologies have no consciousness or subjective experience. They are not authors to whom actions can be attributed as deeds for which they are accountable.

Or to put it in a nutshell: Artificial intelligence is not rational in the sense that we attribute rationality to living beings, but at most rational for a specific purpose.

«Artificial intelligence is not rational in the sense that we attribute rationality to living beings, but at most rational for a specific purpose.»

Responsibility and autonomous technology

Against this realistic backdrop, we should therefore be very cautious with our expectations of the problem-solving power of AI tools. Whether we are talking about the mega issue of global warming or simpler things like self-driving cars, decisions are being made. In individual cases, these decisions must be justified not only legally, but also morally and ethically. Because AI tools are not people, they are not seen as ethical or moral subjects, but only as objects. We are discussing the ethical consequences of self-driving car technology. But we are not discussing them with the self-driving cars.

You may be wondering whether AI will be able to make such ethical decisions for us in the future. To answer this question, we first need to understand how AI works in the first place. I make a rough distinction between two types of AI. Classical AI, which was widespread until two years ago, learns from selected data sets. Classical AI systems often use explicitly programmed rules and algorithms, created by developers, to solve specific problems. These rules are used to analyse data, draw conclusions and make decisions. Such systems answer complex questions or solve problems by applying sets of rules to stored expertise.

Classical AI can therefore draw rule-based conclusions that depend on the rules that humans apply to this AI system. This technology can be used to defeat the world chess champion or recognise faces. However, it would be pointless to use such a tool to discuss the elegance of chess or the beauty of a face.
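
As a rough illustration of what «explicitly programmed rules» means in practice, here is a minimal sketch; the domain, the rules and the thresholds are invented purely for this example and are not taken from the text above.

```python
# Minimal sketch of a "classical", rule-based AI in the sense described above:
# the developers write the rules explicitly, and the system only applies them.
# Domain, rules and thresholds are invented purely for illustration.

def triage_loan_application(income: float, debt: float, years_employed: int) -> str:
    """Apply hand-written expert rules to a loan application."""
    if income <= 0:
        return "reject"                      # rule 1: no income, no loan
    if debt / income > 0.5:
        return "reject"                      # rule 2: debt ratio too high
    if years_employed < 2:
        return "refer to human case worker"  # rule 3: uncertain cases escalate
    return "approve"                         # all rules passed

print(triage_loan_application(income=60_000, debt=10_000, years_employed=5))  # approve
```

Everything such a system can do is already contained in the rules its developers wrote down; it becomes neither better nor different through use.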

Modern machine learning approaches are mainly based on processing large amounts of data in order to recognise and generalise patterns and thereby make predictions or solve tasks. The AI technology behind today's AI tools, known as deep learning, is based on artificial neural networks. This technology works with several layers of artificial neurons that are networked with each other. The neural network learns by adjusting the weights between the neurons in order to minimise the errors, that is, the differences between its predictions and the actual results. This happens during training with labelled data sets, as the network optimises its weights through constant repetition and feedback in order to make better predictions. A well-trained neural network can recognise complex patterns, perform classifications and make predictions, which makes it a powerful tool for many machine learning applications.
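
The training loop described in this paragraph can be sketched in a few lines. The following toy network is only an illustration under simple assumptions: the task (XOR), the layer sizes and the learning rate are arbitrary choices for the example, not taken from the text. It adjusts its weights step by step so that the gap between its predictions and the labelled targets shrinks.

```python
import numpy as np

# A tiny neural network, trained as the paragraph describes: the weights
# between layers of artificial "neurons" are adjusted step by step so that
# the difference between predictions and labelled targets gets smaller.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input  -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

learning_rate = 0.5
for step in range(20_000):
    # Forward pass: the network's current predictions for all four inputs.
    hidden = sigmoid(X @ W1 + b1)
    pred = sigmoid(hidden @ W2 + b2)

    # Backward pass: propagate the prediction error back through the layers.
    delta_out = (pred - y) * pred * (1 - pred)
    delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)

    # Adjust weights and biases a little in the direction that reduces error.
    W2 -= learning_rate * hidden.T @ delta_out
    b2 -= learning_rate * delta_out.sum(axis=0)
    W1 -= learning_rate * X.T @ delta_hidden
    b1 -= learning_rate * delta_hidden.sum(axis=0)

print(np.round(pred, 2))  # typically close to [[0], [1], [1], [0]] after training
```

Production systems differ from this sketch in scale rather than in kind: far more layers, weights and training data, but the same cycle of prediction, error measurement and weight adjustment.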

The performance of such AI systems is very impressive, as OpenAI's ChatGPT shows. Strictly speaking, however, they do not learn by themselves; learning is purely incremental. A trained system can easily reproduce the criteria for an ADHD diagnosis or calculate the probability of an accurate ADHD diagnosis based on established criteria. In principle, however, it is not in a position to discuss whether ADHD is a disease or a potential; the question itself remains foreign to it.

AI will not exempt us from the need to analyse and understand facts, to assess them and to give comprehensible reasons for this judgement. It can support us by analysing complicated contexts more quickly or by rendering difficult texts in simpler language. But it cannot reflect on a meta-level about why it arrives at a certain moral judgement, nor should it do so.

I asked ChatGPT for reasons that justify killing a human being:

The question of the justification for killing a human being is extremely complex and context-dependent. There are ethical, legal and moral principles that could be considered as justification for killing a human being in different situations. Here are some examples where killing could be considered justified in certain contexts:

  1. **Self-defence:** If a person's life is under immediate and serious threat and killing another person is the only way to defend oneself and protect one's own life, this is often considered justified.
  2. **Defence of others:** In situations where someone else is in immediate danger and intervention is necessary to protect that person's life, killing in their defence could be considered justified.
  3. **War and defence:** In certain circumstances, killing could be considered legitimate in the context of a just war or in defence of a country, although the laws of war and international law have strict rules and restrictions on the use of force.
  4. **Statutory execution:** In some countries, the death penalty may be considered a legal punishment for serious crimes, although there is much ethical and moral debate about its use.
  5. **Emergency situations:** In extreme situations, such as terrorist attacks or hostage taking, decisions to kill could be made to prevent even greater misfortune or damage.

It is important to note that the question of justification for killing a human being depends heavily on cultural, legal and ethical beliefs as well as specific circumstances and contexts. These reasons are not universally accepted and are often controversial. The judgement of whether killing is justified in a particular situation is a complex issue that needs to be considered from different perspectives.

I think this answer is excellent! At the level of knowledge, it identifies the decisive contexts in which the question of the justifiable killing of a human being arises and describes them concisely. What is really strong, however, is the boundary that ChatGPT draws in its answer. The final section explains that the answer to the question cannot be based on a universally shared principle, but is subject to contextual preconditions. These contexts are lifeworlds that are characterised by historically sedimented experiences, mentalities, educational processes and cultural memory. ChatGPT does not inhabit these lifeworlds. The language tool lacks what would make an ethical judgement possible in the first place: the participant perspective on life in a particular context, which develops and shapes values and forms perspectives on other contexts. AI systems can form the basis of a self-organised subsystem. But they are not autonomous in a comprehensive sense, because they lack the reason that would allow them to act as accountable subjects, take responsibility for decisions and form their will in a self-determined way.

As humans, we cannot exempt ourselves from ethical decisions and moral judgements, because only humans as humans can make these judgements. AI can point out various possibilities or conclusions, but it cannot make the judgements for us, because then they would no longer be ethical judgements.

«As humans, we cannot exempt ourselves from ethical decisions and moral judgements, because only humans as humans can make these judgements.»

Information and responsibility

For all the advantages that search engines, social media and language tools such as ChatGPT offer us, they also present us with new challenges:

The proliferation of digital and AI-based technologies has brought with it a flood of information, but it also harbours significant risks associated with misinformation. The automation of content and the manipulation of media by AI algorithms can lead to misleading or false information being spread quickly and widely. This poses a serious threat to public opinion and democracy.

Just imagine that an AI were programmed not only to produce the image of the Pope in a Balenciaga coat, but also to produce websites, cross-references and supposed media reports suggesting the authenticity of the image. And now imagine it's about something far more important than the Pope's coat.

In this context, reliable sources and quality journalism become all the more important. Journalists play a crucial role in fact-checking, contextualising information and providing balanced, in-depth reporting. By focussing on transparency and using reliable sources, media can build trust and curb the influence of misinformation, which is crucial for promoting an informed and therefore democratic society.

Feedback

We humans not only create technological means that we use instrumentally; we are also culturally shaped by these technologies. When you read «Six Taliban fighters eliminated», it makes a big difference whether you have in your mind's eye the image of a covert reconnaissance team that, after hours of observation and at considerable personal risk, shot the enemy fighters in a firefight, or whether you think of a drone squadron that identified and executed these six fighters through pattern recognition. It's true: they are dead in both cases. But there is a difference between people risking their lives to carry out a deadly mission that they could have aborted, and an abstract technology following the intrinsic calculation of its programmed pattern recognition. And it may well be that the second variant additionally dehumanises the opponent. Opponents and enemies then become vermin to be eliminated as rationally and efficiently as possible.

You probably have a similar intuition that it is important for a judge and not an AI to pass judgement on you. Just how challenging the interaction between technology and humans can be is currently demonstrated by the discussions surrounding the correct use of the VAR (Video Assistant Referee) in football.

Football and war may be examples that don't concern you. But think of job application procedures and the use of AI tools to screen application portfolios. For reasons of efficiency and coherence, the use of artificial intelligence will increase in many areas of life. This is not a bad thing in principle. But we need systems that preserve the immediacy, the tangibility and the sensuality of human interaction. As humans, we are not simply human: what makes us human is culturally transmitted to us, and behind this culture lie learning processes, painful injustices and developments. That is at least enough reason not to give it up lightly.

«What makes us human is culturally transmitted to us, and behind this culture lie learning processes, painful injustices and developments. That is at least enough reason not to give it up lightly.»


Stephan Jütte

Dr. theol.

Head of Theology and Ethics
Member of the Executive Board
