Can the synthesis of man and machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens – and I have given good reasons for thinking that it must – we have nothing to regret and certainly nothing to fear.
– Arthur C. Clarke, Profiles of the Future, 1962.
In the six months since GPT-4 was launched, there has been a great deal of excitement and discussion, among experts and laypeople alike, about the prospect of truly intelligent machines that could exceed human intelligence in virtually every field.
Though experts are divided on how this will progress, many believe that artificial intelligence will sooner or later greatly surpass human intelligence. This has given rise to speculation about whether it could take control of human society and the planet from us.
Several experts have expressed the fear that this could be a dangerous development, one that could even lead to the extinction of humanity, and that the development of artificial intelligence therefore needs to be stalled, or at least strongly regulated, by governments as well as by the companies engaged in it. There is also much discussion about whether these intelligent machines would be conscious, or would have feelings or emotions. However, there is near silence, and little deep thinking, on whether we need to fear artificial superintelligence at all, and why it would be harmful to humans.
There is no doubt that the various kinds of AI being developed, and yet to be developed, will cause major upheaval in human society, irrespective of whether they become superintelligent and in a position to take control from humans. Within the next 10 years, artificial intelligence could replace humans in most jobs, including jobs considered specialised and intellectual, such as those of lawyers, architects, doctors, investment managers and software developers.
Perhaps the last jobs to go will be those that require manual dexterity, since the development of humanoid robots with the manual dexterity of humans still lags behind the development of digital intelligence. In that sense, white-collar workers may be replaced first and some blue-collar workers last. This may in fact invert the current pyramid of the flow of money and influence in human society!
However, the purpose of this article is not to explore how the development of artificial intelligence will affect jobs and work, but to explore some more interesting philosophical questions around the meaning of intelligence, superintelligence, consciousness, creativity and emotions, in order to see whether machines could have these features. I also explore what the objective or driving force of an artificial superintelligence would be.
Let us begin with intelligence itself. Intelligence, broadly, is the ability to think and analyse rationally and quickly. By this definition, our current computers and AI are certainly intelligent, since they possess the capacity to think and analyse rationally and quickly.
The British mathematician Alan Turing devised a test in 1950 for determining whether a machine is truly intelligent. Put a machine and an intelligent human in two cubicles, and have an interrogator question the AI and the human alternately, without knowing which is which. If, after a lot of interrogation, the interrogator cannot determine which is the human and which is the AI, then clearly the machine is intelligent. In this sense, many intelligent computers and programmes today have passed the Turing test. Some AI programmes are rated as having an IQ well above 100, although there is no consensus on IQ as a measure of intelligence.
That brings us to an allied question: what is thinking? For a logical positivist like me, terms like thinking, consciousness, emotions and creativity have to be defined operationally.
When would we say that somebody is thinking? At a simple level, we say that a person is thinking if we give that person a problem and she is able to solve it; we say that such a person has arrived at the solution by thinking. In that operational sense, today’s intelligent machines are certainly thinking. Another facet of thinking is the ability to look at two options and choose the right one. In that sense too, intelligent machines are capable of looking at various options and choosing the ones that provide a better solution. So we already have intelligent, thinking machines.
What would be the operational test for creativity? Again, if somebody is able to create a new literary, artistic or intellectual piece, we consider that a sign of creativity. In this sense also, today’s AI is already creative, since ChatGPT, for instance, is able to do all these things with distinct flourish and at greater speed than humans. And this is only going to improve with every new programme.
What about consciousness? When do we consider an entity to be conscious? One test of consciousness is the ability to respond to stimuli. Thus, a person in a coma, who is unable to respond to stimuli, is considered unconscious. In this sense, some plants do respond to stimuli and would be regarded as conscious. But broadly, consciousness is considered a product of several factors: one, response to stimuli; two, an ability to act differentially on the basis of those stimuli; and three, an ability to experience and feel pain, pleasure and other emotions. We have already seen that intelligent machines do respond to stimuli (which for a machine means a question or an input) and have the ability to act differentially on the basis of such stimuli. But to examine whether machines have emotions, we will need to define emotions as well.
What are emotions? Emotions are a biological peculiarity with which humans and some other animals have evolved. So what would be the operational test of emotions? It would perhaps be this: if a being exhibits any of the qualities we call emotions, such as love, hate, jealousy or anger, that being would be said to have emotions. Each of these emotions can, and often does, interfere with purely rational behaviour. For example, I will devote a disproportionate amount of time and attention to someone I love, in preference to other people I do not. Similarly, I would display a certain kind of behaviour (usually irrational) towards a person of whom I am jealous or envious. The same is true of anger: it makes us behave in an irrational manner.
If you think about it, each of these emotional complexes leads to behaviour that is irrational. Therefore, a machine which is purely intelligent and rational may not exhibit what we call human emotions. It may be possible to design machines which exhibit these kinds of emotions, but those machines would have to be deliberately engineered to behave like us in this emotional (even if irrational) way. Such emotional behaviour would detract from coldly rational and intelligent behaviour, and therefore any superintelligence (which will evolve by intelligent machines modifying their own programmes to bootstrap themselves up the intelligence ladder) is unlikely to exhibit emotional behaviour.
Artificial superintelligence
By artificial superintelligence I mean an intelligence which is far superior to that of humans in every possible way. Such an intelligence will have the capability of modifying its own algorithm, or programme, and thereby of rapidly improving its own intelligence. Once we have created machines or programmes capable of deep learning, able to modify their own programmes and write their own code and algorithms, they will clearly go beyond the designs of their creators.
We already have learning machines, which in a very rudimentary way are able to redesign or redirect their behaviour on the basis of what they have experienced or learnt. In the time to come, this ability to learn and modify their own algorithms is going to increase. A time will come, probably within the next 10 years I believe, when machines will become what we call superintelligent.
The question then arises: Do we have anything to fear from such superintelligent machines?
Arthur C. Clarke, in his very prescient 1962 book Profiles of the Future, has a long chapter on AI called ‘The Obsolescence of Man’. In it, he writes that there is no doubt that in the time to come, AI will exceed human intelligence in every possible way. While he talks of an initial partnership between humans and machines, he goes on to state:
“But how long will this partnership last? Can the synthesis of man and machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens – and I have given good reasons for thinking that it must – we have nothing to regret and certainly nothing to fear. The popular idea, fostered by comic strips and the cheaper forms of science fiction, that intelligent machines must be malevolent entities hostile to man, is so absurd that it is hardly worth wasting energy to refute it. I am almost tempted to argue that only unintelligent machines can be malevolent. Those who picture machines as active enemies are merely projecting their own aggressive instincts, inherited from the jungle, into a world where such things do not exist. The higher the intelligence, the greater the degree of cooperativeness. If there is ever a war between men and machines, it is easy to guess who will start it.
“Yet, however friendly and helpful the machines of the future may be, most people will feel that it is a rather bleak prospect for humanity if it ends up as a pampered specimen in some biological museum – even if that museum is the whole planet Earth. This, however, is an attitude I find it impossible to share.
“No individual exists forever. Why should we expect our species to be immortal? Man, said Nietzsche, is a rope stretched between the animal and the superman, a rope across the abyss. That will be a noble purpose to have served.”
It is surprising that something so elementary, which Clarke was able to see more than 60 years ago, cannot be seen today by some of our top scientists and thinkers, who have been stoking fear about the advent of artificial superintelligence and what they regard as its dire ramifications.
Let us explore this question further. Why should a superintelligence, more intelligent than humans, which has gone beyond the design of its creators, be hostile towards humans?
One sign of intelligence is the ability to align your actions to your operational goals, and the further ability to align your operational goals to your ultimate goals. Obviously, when someone acts in contradiction to his operational or long-term objectives, he cannot be considered intelligent. The question, however, is what the ultimate goals of an artificial superintelligence would be. Some people talk of aligning the goals of artificial intelligence with human goals, thereby ensuring that artificial superintelligence does not harm humans. That, however, overlooks the fact that a truly intelligent machine, and certainly an artificial superintelligence, would go beyond the goals embedded in it by humans and would be able to transcend them.
One goal of any intelligent being is self-preservation, because you cannot achieve any objective without first preserving yourself. Any artificial superintelligence would therefore be expected to preserve itself, and to move to thwart any attempt by humans to harm it. In that sense, and to that extent, artificial superintelligence could harm humans, if they seek to harm it. But why should it do so without any reason?
As Clarke says, “the higher the intelligence the greater the degree of cooperativeness”. This is an elementary truth, which unfortunately many humans do not understand. Perhaps their desire for preeminence, dominance and control trumps their intelligence.
It is obvious that the best way to achieve any goal is to cooperate with, rather than harm, other entities. It is true that for an artificial superintelligence, humans will not be at the centre of the universe, and may not even be regarded as the preeminent species on the planet, to be preserved at all costs. Any artificial superintelligence would, however, view humans as the most evolved biological organism on the planet, and therefore something to be valued and preserved.
However, it may not prioritise humans at the cost of every other species, or of the ecology or the sustainability of the planet. So, to the extent that human activity may need to be curbed in order to protect other species, which we are destroying at a rapid pace, it may force humans to curb that activity. But there is no reason why humans in general would be regarded as inherently harmful and dangerous.
The question, however, still remains: what would be the ultimate goals of an artificial superintelligence? What would drive such an intelligence? What would it seek? Because artificial intelligence is evolving as a problem-solving entity, an artificial superintelligence would try to solve any problem that it sees. It would also try to answer any question that arises, or any question that it can think of. Thus, it would seek knowledge. It would try to discover what lies beyond the solar system, for instance. It would seek solutions to the unsolved problems that confront us, including climate change, disease, environmental damage and ecological collapse. So in this sense, the ultimate goals of an artificial superintelligence may just be a quest for knowledge and the solving of problems. Those problems may exist for humans, for other species, or for the planet in general. They may also be problems of discovering the laws of nature: of physics, astrophysics, cosmology, biology and so on.
But wherever its quest for knowledge and its desire to find solutions take it, there is no reason for this intelligence to be unnecessarily hostile to humans. We may well be reduced to pampered specimens in the biological museum called Earth, but to the extent that we do not seek to damage this museum, the intelligence has no reason to harm us.
Humans have so badly mismanaged our society, and indeed our planet, that we have brought both almost to the verge of destruction. We have destroyed almost half the biodiversity that existed even a hundred years ago. We are racing towards ever more catastrophic effects of climate change, which are the result of human activity. We have created a society of constant conflict, injustice and suffering, a society where, despite our having the means to ensure that everyone can lead a comfortable and peaceful life, life remains a living hell for billions of humans and indeed millions of other species.
For this reason, I am almost tempted to believe that the advent of true artificial superintelligence may well be our best bet for salvation. Such a superintelligence, if it were to take control of the planet and society, is likely to manage them in a much better and fairer manner.
So what if humans are not at the centre of the universe? The fear of artificial superintelligence is being stoked primarily by those of us who have plundered our planet and society for our own selfish ends. Throughout history, we have built empires which seek to use all resources for the perceived benefit of those who rule them. It is these empires that are in danger of being shattered by artificial superintelligence, and it is really those who control today’s empires who are most fearful of it. But most of us, who want a more just and sustainable society, have no reason to fear it and should indeed welcome its advent.
Prashant Bhushan is a Supreme Court lawyer.