The following essay is pretty much a restatement of my post Consequentialism (2): On Value. It is more precisely stated, and there is an interesting bit I want to develop towards the end.
Consequentialism and Value
Consequentialism is simply the theory that the only relevant considerations when making a moral evaluation are the consequences of the action, rule, or motive, depending on whether we are talking about act, rule, or motive consequentialism. Its attraction lies in the intuition that it is a good thing (or at the least not impermissible) to make the world a better place. The aim of this paper is to propose that this intuition is not as well justified as it appears. Making the world a better place is not something that is easily defined. Once we start asking in what ways the world can be made better, we come across a problem of metrics. By what measure do we judge that the world is better or worse? Another way to put this is: what is the standard of value? Different consequentialist theories give different accounts of what is valuable. However, most of the classical consequentialist theories, like utilitarianism, propose that value is agent-neutral.
Much has been said about agent-neutral value (some of those arguments may be repeated here), and it often plays a role in the many criticisms of consequentialism. One criticism is that many of our moral intuitions require a theory that incorporates agent-relative elements (Portmore, 3, 2001). For example, it seems that we ought to avoid murdering an innocent even if doing so would prevent the murder of two innocents. Douglas Portmore argues that consequentialism does not necessarily involve agent-neutral value; he proposes that consequentialism can incorporate agent-relative value. I will try to show three things in this paper. First, I will show that agent-relative consequentialist theories are subject to a reductio. Then I will argue that agent-neutral value is under-motivated. Finally, I will argue that the limits of a theory of value suggest a move to deontological ethics.
Value, according to Scanlon, is simply that which gives us reason to act. The questions to be asked, of course, are “what type of value?” and “what type of response?”. If we are rationally required to do what we have most reason to do, then we are rationally required to maximise value. Following this reasoning, consequentialists can be said to apply the basic principle: “Act so as to promote value”. It is Portmore’s contention that the agent-neutral / agent-relative distinction is not the same as the consequentialist / non-consequentialist distinction (Portmore, 11, 2001). Hence, it is possible to have a theory on which an agent can value his own commission of one murder far less than he does the commission of a number of murders by other people.
There are a number of things wrong with this. The most obvious criticism is that there is no hard and fast rule about how the two balance out. Portmore in fact argues that the point at which the value of ‘not being a murderer while a number of people are murdered’ outweighs the value of ‘being the murderer who murders one person’ differs from person to person (Portmore, 19, 2001). In principle, this point could be anywhere. Taken abstractly, there is no reason why any particular person may not set the threshold at one; i.e. he or she places a positive value on being the murderer. Hence, in some ghastly macabre version of Amartya Sen’s Prude and Lewd, our sadistic Prude would be morally obligated to be the murderer whenever he encounters a situation in which an innocent would be murdered anyway.
Portmore says that this objection holds only under particular theories of value on which the value of a state of affairs depends on the agent/evaluator’s subjective desires (Portmore, 15, 2001). What type of desire-independent, agent-relative value could there be? Portmore seems to be talking about cases where two people agree on the facts of what happened but differ in their evaluations, and both are nonetheless correct. For example, if A lets the murder of five others take place by refusing to murder an innocent, A will evaluate himself as having done the right thing while a third party C would evaluate A as having done wrong. Portmore justifies the move by using the sunset example (Portmore, 19, 2001). At a particular point in time, the statement ‘the sun is setting’ is true for someone standing on the east coast of the US but not for someone on the west coast. Hence, the truth value of a factual claim is dependent on the location of the observer. The example is problematic in that the two situations seem disanalogous. In the sunset case, the position of the sun in the sky at any point in time necessarily depends on the position of the observer on earth; this must be so given the shape of the earth and the fact that light travels in a straight line. However, it does not seem to be the case that morality is such a creature that A’s moral judgement of a situation could differ from B’s judgement of that same situation with both being right at the same time. It is not enough that A judge that it is wrong for A to murder in order to prevent five other murders. A must not only judge it impermissible when A murders one to prevent the murder of five others, but also judge it impermissible when B does the same thing. Portmore’s agent-relative consequentialism, however, does not deliver this.
One curious aspect of consequentialist notions of value and the good is that they are not desire dependent. In order to avoid the nihilistic conclusion that it is right for one to do whatever one desires, no matter what that desire is, consequentialists have often moved towards agent-neutral value. This seems to make sense, in that agent-neutral value is certainly desire-independent. However, this conflates agent-relative value with desire-dependent value. Is this conflation justifiable? One could ask: in what way does an agent/evaluator say that A is a better state of affairs than B? A would be a better state of affairs than B if and only if A was greater than B along the dimensions X, Y and Z, where X, Y and Z are the only dimensions along which it makes sense to measure a state of affairs; i.e. X, Y and Z are final goods, the measures of value on which all other measures depend. To take the example of utilitarianism, let pleasure be the good. In what sense is pleasure valuable? It seems that pleasure is valuable in virtue of its desirability for its own sake.
Actually, I have made an assumption here. There are three possibilities, only the first two of which are consistent with the last sentence of the previous paragraph.
1. Pleasure is valuable only because it is desirable
2. Pleasure is valuable because it is desirable, but things may be valuable for other reasons as well.
3. Pleasure is desirable because it is valuable.
The first two statements pertain to what was under discussion in the previous paragraph; the third possibility will be dealt with later in this essay. The problem, then, is whether anything other than its desirability can make something valuable. If something is not desirable, there seems to be no reason to pursue it. The only way it could be a conceptual truth that value is what gives us reasons to act is if value is desire-dependent. This means that we should be sceptical about the existence of desire-independent value, whether agent-relative or agent-neutral. This seems to provide a strong argument against consequentialist ethical theories.
The consequentialist could, however, argue that pleasure and other final goods are desirable because they contain value. This would make value a constitutively basic concept, with desirability the epistemic process by which we access value. If desire is an accurate epistemic guide to what contains value, then we are still reduced to pursuing only what we desire. If we propose instead that our desires are not accurate guides to what is valuable, how do we know what is valuable? Perhaps an idealised evaluator’s desires would track value perfectly. This would change the principle to: do what an idealised agent has most reason to do. However, this still poses an epistemological problem. We seemingly have access neither to the desires of an idealised agent (excepting religions which hold that God, the idealised agent, has revealed his preferences in their religious texts), nor to value by any desire-independent route. We are therefore unable to decide whether any quality is really valuable in any desire-independent manner. Hence it seems that the only coherent notion of value has to be intimately tied to desire.
How about the maximisation of preference satisfaction? If we find that only the satisfaction of our personal preferences and desires is valuable, then shouldn’t we aim to maximise this? No: we only value the satisfaction of our own preferences, and so only have sufficient reason to satisfy our own preferences, not others’. It is not clear that we care about everybody’s preferences; perhaps only our own and our loved ones’. Therefore, within the consequentialist framework, there seems sufficient reason to maximally satisfy our own desires but not other people’s. This is unsatisfying in a moral theory, since a moral theory must at the very least conform to some basic intuitions, such as: it is immoral to kill innocent strangers.
One of the basic flaws in the consequentialist formulation is that while values do give reasons to act, they are not the only things that do so. One way of remedying this is to add Ross’s list of prima facie duties. These prima facie duties are supposedly self-evident and in themselves give reasons to act. Hence, some ad hoc theory incorporating both prima facie duties and values deemed to be self-evident might work. However, Ross’s duties are subject to the same criticism as consequentialist value: there is no way to independently verify that these duties or values exist.
Another approach is to consider higher-order reasons that would order an agent’s desires. If we take higher-order reasons into account, we may be able to decide what type of desires an ideal agent may act from. Consider an idealised agent in a polity consisting of other ideal agents. If such an idealised world is to be logically possible, then an idealised agent would not choose any desires which would destroy another agent: for if our idealised agent were to act from those desires, then all other agents would also act from them, and they would surely destroy one another. Of all the desires, duties and values that any common agent can have, an ideal agent can act only from those desires which leave our thought experiment logically possible. If we are to act only from the desires and duties that an idealised agent could have, we have higher-order reasons to choose such desires. These higher-order reasons are prior in consideration to the first-order duties and desires, since without them we would incorrectly choose the reason-giving forces from which to act. Phrased in the context of non-ideal agents, these desires, values and duties (maxims, if you will) must be universalisable. The principle could be formulated as follows: act only on maxims which can be universalised. This bears much similarity to Kant’s first formulation of the categorical imperative: “Act only on that maxim whereby thou canst at the same time will that it should become a universal law” (Kant, 15, 1785). Hence the categorical imperative is a formal basic principle that is prior to other substantive maxims.
To summarise, we have explored various notions of value and concluded that the best notion of value is a desire-dependent, agent-relative one. Since this notion of value is unsuitable for a serious consequentialist ethic, a move to Kantian ethics was made using an idealised agent approach. Kantian ethics, however, is a deontological system; further criticism of it is the work of other papers. More work can also be done in justifying the ideal agent approach to deriving the categorical imperative.
Kant, Immanuel. Fundamental Principles of the Metaphysics of Morals (1785), translated by Thomas Kingsmill Abbott.
Portmore, Douglas W. “Can an Act-Consequentialist Theory Be Agent-Relative?” American Philosophical Quarterly 38 (2001): 363–377.
I will indeed wish to say more about this ideal agent approach in future posts.