The Creation of Eve, held by the Ashmolean Museum, which depicts God creating Eve from the rib of the sleeping Adam (Genesis 2:21–23)
By John Harris
'How to be good?' is the pre-eminent question for ethics, although one that philosophers and ethicists seldom address head-on. It was the question Plato posed in a slightly different form in The Republic when he said: “We are discussing no trivial subject, but how a man should live”.
Marcus Aurelius thought he knew the answer. When he unequivocally stated in his Meditations, “A King’s lot: to do good and be damned”, he was himself a king and ruled almost all of the world that was known to him. He could with impunity both do good and be damned. Edward Gibbon famously remarked that “If a man were called to fix the period in the history of the world during which the human race was most happy and prosperous, he would, without hesitation, name that which elapsed from the death of Domitian to the accession of Commodus”. Marcus Aurelius, the father of Commodus, ruled for the last 19 years of this period.
Carving of Eve and the forbidden apple, on the bell tower of New College, Oxford
Recently philosophers and scientists have tried to identify how to make the world better by making people more likely to do good rather than evil. Many have proposed ways of changing humankind by chemical or molecular means so that people literally cannot do bad things, or are much less likely to do so; in other words, by limiting or eradicating their freedom to do bad things.
This same problem has also faced those interested in Artificial Intelligence (AI). If we create beings as smart as or smarter than us, how can we limit their power to eliminate us, whether deliberately or simply by acting in ways that will have this result? How can we ensure that they act for the best? Many people have thought that this problem can be solved by programming them to obey some version of Isaac Asimov’s so-called “laws” of robotics, particularly the first law: “a robot may not injure a human being or, through inaction, allow a human being to come to harm.” The problem, of course, is how the robot would know whether its actions or omissions would cause danger to humans or, for that matter, to other self-conscious AIs. Consider that ethical dilemmas often involve choosing between greater and lesser harms or evils rather than avoiding harm altogether, allowing or causing some to come to grief for the sake of saving others.
How would a human being who had, for example, been rendered unable to act violently towards other people, or in ways that caused pain, defend herself or others against murderous attack? How would an AI programmed according to Asimov’s laws do likewise?
John Milton knew the answer. In Paradise Lost Milton reports God as reminding humankind that if we want to be good, to be “just and right”, then we need autonomy: “I made him (mankind) just and right, sufficient to have stood, though free to fall.”
This dilemma, felt no less keenly by God than by the rest of us, of how to combine the capacity for good with the freedom to choose, is now facing those trying to develop moral bio-enhancers and those working on the new generation of smart machines. This is what Stephen Hawking meant when he told the BBC in 2014 that:
“the primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.” How could full AI, which would enable the machine which (who?) possessed it to determine its own destiny as we do, be persuaded to choose modes of flourishing compatible with those of humans? Of course we currently have these problems with respect to one another, but at least we have not as yet shackled our capacity to cope with them by foreclosing, through moral bioenhancement, some of our options for self-defence.
In the future there will be no more “men” in Plato’s sense, no more human beings therefore, and no more planet Earth. No more human beings because we will have wiped one another out, either by our own foolishness or by our ecological recklessness; and no more planet Earth because we know that ultimately our planet will die, and any surviving people or AIs along with it.
A programmable robot called Nao at the Cyber Security Institute at Oxford
Initial scientific predictions on the survival of our planet suggested we might have 7.6 billion years to go before the Earth gives up on us. Recently Stephen Hawking said:
“I don't think we will survive another thousand years without escaping beyond our fragile planet.”
To be sure, we need to make ourselves smarter and more resilient, and we may need to call AI in aid to achieve this if we are to be able to find another planet on which to live when this one is tired of us, or even perhaps to develop the technology to construct another planet. To do so we will have to change, but not in ways that risk our capacity to choose both how to live and the sorts of lives we wish to lead.
As Giuseppe Tomasi di Lampedusa had Tancredi say in The Leopard, “If we want things to stay as they are, things will have to change” … and that goes for people also!
John Harris is the author of How to be Good (Oxford University Press, 2016) and is professor emeritus in science ethics at the University of Manchester
Images © OUP, Shutterstock