Moral mechanisms

David Davenport

pp. 47-60

As highly intelligent autonomous robots are gradually introduced into the home and workplace, ensuring public safety becomes extremely important. Given that such machines will learn from interactions with their environment, standard safety engineering methodologies may not be applicable. Instead, we need to ensure that the machines themselves know right from wrong; we need moral mechanisms. Morality, however, has traditionally been considered a defining characteristic, indeed the sole realm, of human beings: that which separates us from animals. But if only humans can be moral, can we build safe robots? If computationalism—roughly the thesis that cognition, including human cognition, is fundamentally computational—is correct, then morality cannot be restricted to human beings, since equivalent cognitive systems can be implemented in any medium. On the other hand, perhaps there is something special about our biological makeup that gives rise to morality, in which case computationalism is effectively falsified. This paper examines these issues by looking at the nature of morals and the influence of biology. It concludes that moral behaviour is concerned solely with social well-being, independent of the nature of the individual agents that comprise the group. While our biological makeup is the root of our concept of morals and clearly affects human moral reasoning, there is no basis for believing that it will restrict the development of artificial moral agents. The consequences of such sophisticated artificial mechanisms living alongside natural human ones are also explored.

Publication details

DOI: 10.1007/s13347-013-0147-2

Full citation:

Davenport, D. (2014). Moral mechanisms. Philosophy & Technology 27 (1), pp. 47-60.
