PhiloComp.net

Robot Ethics

How can we make sure that we "bake in" the right values for AI? We might like the idea of coding them in, but it's very unclear whether practical ethics is the kind of thing that can be reduced to code. There are also questions about who gets to decide which ethics are appropriate to be "baked in". We could try to aggregate people's preferences and then code in the result, but that might well lead to an undesirable "tyranny of the majority": policies preferred by the majority but potentially abusive or exploitative of a minority (a risk illustrated in the sketch below). We could try to design AI in such a way that it develops virtue by itself, much as children do (at least if they're well brought up), but it's not clear how we could achieve that. Or we could simply make sure that algorithmic decisions accord with our own ethical principles, though in that case the AI would not be making ethical choices by itself.
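The worry about naive preference aggregation can be made concrete with a small sketch. The Python snippet below is purely illustrative: the policies, the numbers, and the aggregate_by_majority helper are invented for this example, and no claim is made about how a real system would gather or encode preferences. It simply shows how a straightforward majority vote can endorse a policy that exploits a minority.

from collections import Counter

# Toy illustration: aggregate individual policy preferences by simple
# majority vote. The population, policies, and preferences below are
# entirely made up for the sake of the example.
population = (
    ["restrict_minority_access"] * 90   # the majority's preferred policy
    + ["equal_access_for_all"] * 10     # the minority's preferred policy
)

def aggregate_by_majority(preferences):
    """Return the single most popular policy, regardless of how badly
    it may treat those who voted against it."""
    tally = Counter(preferences)
    policy, votes = tally.most_common(1)[0]
    return policy, votes

chosen, votes = aggregate_by_majority(population)
print(f"Chosen policy: {chosen} ({votes} of {len(population)} votes)")
# Output: Chosen policy: restrict_minority_access (90 of 100 votes)
# The aggregation is perfectly "democratic", yet the outcome is
# exploitative of the 10% minority -- the "tyranny of the majority".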

This page is under development, and will soon be expanded considerably.