PhiloComp.net


Computers and Ethics

Computers bear on ethics and ethical theory in at least four quite distinct ways:

  1. Robots and codification of ethics
    The development of autonomous machines raises important issues about Ethical Control of Robots, analogous to issues that have traditionally arisen in respect of human behaviour, but requiring precise codification in advance (quite unlike the mostly vague ethical precepts that we tend to rely on).
     
  2. Applied ethics of IT and AI
    Computers' ubiquitous use in modern society raises many distinctive issues, some of which have been with us for many years (e.g. impacts on employment, data privacy and security, intellectual property, globalisation and information inequality), but many of which have emerged specifically with the advent of "big data" and machine learning. For more on this, see the section "An Expanding Field" below.
     
  3. Fundamental philosophy of mind
    The development of powerful AI systems potentially throws light on fundamental questions about such things as the nature of thought and consciousness, personal identity, freedom and autonomy, and what it is to be human. For a brief discussion of some of these, see Philosophical Issues in Robot Design.
     
  4. Meta-ethics and the nature of morality
    Computer modelling has played a significant role in game-theoretic accounts of the evolution of behaviour, which are increasingly influential in contemporary attempts to provide a naturalistic foundation of morality. For a brief discussion, see the section on "The Foundation of Morality" below.

An Expanding Field

Work on the Ethics of AI has been growing rapidly in recent years, spurred on by widespread concerns about the impact of "big tech" in an age of massive proliferation of personal data and accelerating development of machine learning techniques to exploit it. This website will soon be expanded with an entire new section on AI Ethics. In the meantime, there is plenty of relevant and interesting material to read in the online Stanford Encyclopedia of Philosophy.

Oxford University has recently started a major "Ethics in AI" initiative, and is in the process of founding an Institute for Ethics in AI. Links to various materials can be found at Oxford Seminars on Ethics in AI.

The Foundation of Morality

The foundation of morality has been debated since ancient times, with Aristotle influentially attributing it to the cultivation of habits (a view quite amenable to "naturalistic", i.e. non-religious, game-theoretic approaches). With the dominance of Christianity over the medieval period, however, morality became widely seen as a key spiritual characteristic – like reason – that sets us radically apart from the animals, and aligns us instead with God and His angels. Only in the 17th century did this idea of morality as fundamentally God-given begin to be seriously challenged, with Thomas Hobbes suggesting an account of morality that grounds it in rational prudence. Then in the following century (in works of 1740 and 1751), David Hume went even further in the naturalistic direction, explaining our moral behaviour not in terms of pure reason, but rather as the outcome of our animal sentiments of "fellow feeling" and benevolence, together with a tendency to systematise our judgements with a view to intersubjective agreement.

Darwin's establishment of the theory of evolution gave strong support to the naturalistic perspective, seeing man as one animal amongst others. But at the same time, evolutionary theory made morality seem anomalous, especially after the general rejection of "group selection" in favour of "selfish gene" theory as a result of work by Williams, Hamilton and Dawkins in the 1960s and 1970s. It was in this context that Axelrod's novel approach through computer "tournaments" demonstrated how cooperative, apparently altruistic behaviour could indeed evolve within a competitive environment, thus removing an important obstacle to seeing morality as a natural, evolved phenomenon. Since then, there has been a great deal of research into evolutionary game theory and the origin of morality, involving prominent thinkers from the philosopher Brian Skyrms (who takes inspiration from computer models) to the economist Ken Binmore (who takes greater inspiration from Hume). For more on this, see the page on the Evolution of Cooperation.
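The flavour of Axelrod's tournaments can be conveyed with a minimal sketch. The code below is illustrative only: it uses the standard Prisoner's Dilemma payoffs (5 for defecting against a cooperator, 3 for mutual cooperation, 1 for mutual defection, 0 for being exploited) and a small, hypothetical mix of textbook strategies, not the actual entries submitted to Axelrod's original competitions. Each strategy plays every other in an iterated match, and total scores are compared.

```python
# Minimal sketch of an Axelrod-style round-robin tournament for the
# iterated Prisoner's Dilemma. Strategy mix and round count are
# illustrative assumptions, not Axelrod's original setup.

PAYOFFS = {  # (my move, their move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else "C"

def tit_for_two_tats(my_history, their_history):
    """Defect only after two consecutive defections by the opponent."""
    return "D" if their_history[-2:] == ["D", "D"] else "C"

def grudger(my_history, their_history):
    """Cooperate until the opponent defects once, then defect forever."""
    return "D" if "D" in their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def play_match(strat_a, strat_b, rounds=200):
    """Play one iterated match; return the two players' total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tournament(strategies, rounds=200):
    """Round-robin: every strategy plays every other once."""
    totals = {name: 0 for name in strategies}
    names = list(strategies)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sa, sb = play_match(strategies[a], strategies[b], rounds)
            totals[a] += sa
            totals[b] += sb
    return totals

strategies = {
    "tit_for_tat": tit_for_tat,
    "tit_for_two_tats": tit_for_two_tats,
    "grudger": grudger,
    "always_defect": always_defect,
    "always_cooperate": always_cooperate,
}
print(tournament(strategies))
```

With this particular mix, the reciprocating strategies outscore always_defect over the whole tournament, even though always_defect never loses an individual match: exploitation pays once per victim, while mutual cooperation pays every round. This is the ecological point behind Axelrod's finding that "nice" strategies such as tit-for-tat can prosper, though the exact ranking always depends on which strategies are present.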