More on Rules
One general theme of the view of moral reasoning I've been suggesting is the following. We are committed to many moral values; when we describe those values, the descriptions are principles. But no one has ever shown that any principle trumps all the others. On the contrary, it is obvious that every principle is overridden in some cases. Even "Never kill an innocent" is overridden in some cases, and if the strongest candidate for an exceptionless rule is overridden, then every principle is overridden in some cases. The large set of principles in our set of values are weighted parameters, but none is a rule, that is, a principle that overrides in all cases. The goal of moral reasoning is therefore to satisfy as many of these parameters as well as possible in each case one encounters: to "satisfice" the parameters.

In fact, to satisfice desires is the goal of any practical reasoning. The reasoning one uses in traveling from one's house to downtown is an example. Rule-based AI will fail to produce a robot that can make this trip as well as a human can, because the possible conditions encountered along the way are indeterminately many, and so are their ramifications for the various parameters to be satisficed. If you can't get downtown by following rules, you can't reliably pick out the right thing to do by following rules, and moral life is vastly more complex than the trip downtown.
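The contrast between weighted parameters and rules can be made concrete in a small sketch. Everything below is invented for illustration: the principle names, weights, scores, and aspiration level are assumptions, not a proposal about which values we actually hold. What matters is the structure the text describes: each principle contributes according to its weight, none acts as an absolute filter, and the procedure accepts the first option that is good enough rather than searching for the best.

```python
from dataclasses import dataclass

@dataclass
class Principle:
    name: str
    weight: float  # how heavily this value counts; no weight is infinite

def satisfice(options, principles, scores, aspiration=0.6):
    """Return the first option that satisfies the weighted parameters
    'well enough' (satisficing in Simon's sense), not the optimum."""
    total_weight = sum(p.weight for p in principles)

    def satisfaction(option):
        # Weighted average of how well this option does on each principle.
        return sum(p.weight * scores[option][p.name]
                   for p in principles) / total_weight

    for option in options:
        if satisfaction(option) >= aspiration:
            return option  # good enough: stop searching
    # Nothing clears the aspiration level: take the least bad option.
    return max(options, key=satisfaction)

# Invented example: two ways of answering a painful question, each
# scored (0..1) against each principle. All numbers are illustrative.
principles = [Principle("avoid_harm", 3.0),
              Principle("honesty", 2.0),
              Principle("keep_promises", 1.0)]
scores = {
    "tell_hard_truth": {"avoid_harm": 0.4, "honesty": 1.0, "keep_promises": 0.8},
    "gentle_evasion":  {"avoid_harm": 0.9, "honesty": 0.3, "keep_promises": 0.8},
}
# Prints "tell_hard_truth": it clears the aspiration level first, even
# though "gentle_evasion" scores marginally higher overall.
print(satisfice(["tell_hard_truth", "gentle_evasion"], principles, scores))
```

A rule-based variant would instead discard outright any option that violates the rule, no matter how well it does on every other parameter; on the view sketched here, no principle earns that status.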