Thursday, May 5, 2011

The formula of Universal law: re-inventing the wheel - sort of

Kant’s formula of the universal law says:


Act only on the maxim that you can at the same time will to be a universal law.


Applying the formula of universal law involves the following steps:

Step 1: Formulate a maxim that connects your action to your reason for acting

Step 2: Recast the maxim as a universal law of nature governing all rational agents

Step 3: Check whether it is conceivable that it could be a universal law of nature.

If not, the maxim has failed the contradiction in conception test. If it passes this test, proceed to step 4.

Step 4: Consider whether you have any legitimate reasons to desire that the maxim not be universalised.

If so, the maxim has failed the contradiction in willing test.
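
The procedure can be summarised as a small piece of control flow. Here is a minimal sketch in Python; the two predicate functions are hypothetical placeholders of my own, since the substantive judgments of steps 3 and 4 obviously cannot be mechanised:

```python
# A sketch of the two-test procedure. The predicates are placeholders:
# the real philosophical work of judging conceivability (step 3) and
# desirability (step 4) happens inside them.

def conceivable_as_universal_law(maxim: str) -> bool:
    """Step 3: could this maxim even be a universal law of nature?"""
    raise NotImplementedError("substantive judgment required")

def reasons_against_universalising(maxim: str) -> bool:
    """Step 4: are there legitimate reasons to desire the maxim NOT be universalised?"""
    raise NotImplementedError("substantive judgment required")

def test_maxim(maxim: str) -> str:
    # Steps 1 and 2 are presumed done: `maxim` already links the action
    # to its ground, recast as a universal law governing all rational agents.
    if not conceivable_as_universal_law(maxim):
        return "fails the contradiction in conception test"
    if reasons_against_universalising(maxim):
        return "fails the contradiction in willing test"
    return "passes both tests"
```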


It is not obvious i) why we should care whether a maxim is universalisable, and ii) why, if it is, we should care whether we have reason to desire that it not be universalised. Being able to derive i) and ii) is equivalent to deriving the formula of universal law. The following is the best derivation of the formula of universal law that I have arrived at:


1. A morally committed person is committed to acting according to practical laws, i.e. principles that are valid for all possible rational beings.

i.e. Moral particularism* is demonstrably wrong. (I do not need to go into this demonstration at this point; I presume that it has already been done for me.) I also avoid asking whether there are reasons to be morally committed or not. It is enough to note that when we talk about moral reasons, we are not talking about reasons that apply to some people and not others. A moral reason would apply to anyone in that particular situation, no matter who that person may be.

2. Any principle for practical action links an act, or class of acts, to a particular ground which provides reason for that action.



i.e. it is not sufficient merely to say “in this situation, do that”. A principle must in addition have a reason that grounds it. For example: when people are in dire need of help (the condition), we ought to help them (the action) because it will increase the total amount of pleasure in the world (the reason).

Note that the reason, while logically distinct from the conditions, can coincide with them. E.g. do phi if it increases the number of sheep in Texas because it increases the number of sheep in Texas. The two happen to coincide. It just happens that increasing the sheep in Texas seems an absurd thing to do for its own sake. However, compare this with utilitarianism, which says that we should do something if it maximises the amount of pleasure in the world because it maximises the pleasure in the world. Classical utilitarians, at least one of whom can be found in every philosophy faculty, find that principle highly plausible. At the same time, the reason to increase the number of sheep in Texas may be that it best boosts the farming economy, and that will increase the net pleasure (or some other good).

3. Since practical laws are valid for all possible beings, their grounds provide (genuine/moral) reason for all possible beings to act as the principle dictates. For the rest of the essay I will consider the theft maxim “I will steal from someone if I can get away with it because it is in my self interest”

4. Note that there seems to be a distinction between there being a reason for each and every person to act and a reason for me to will that everyone act so. A morally committed person, in addition to being committed to acting on practical laws, must also desire that everyone else act on practical laws.

Explanation: If one desired that there be exceptions, that some people (either oneself or others) not act on a practical law, he would in fact desire that a practical law not apply to some people. But this is impossible; a practical law by definition is valid for everyone. To speak of a practical law which did not apply to some people would be a contradiction.

Corollary: As will be explained later, if a particular ground has genuine reason giving force, then any objection grounded in that same reason also stands. Therefore, an objection to the universal application of a particular principle can also stand. Note, however, that this objection stands only if the ground is genuine/moral.

5. A morally committed person would steal from self interest iff self interest provided genuine (moral) ground for stealing.

6. A corollary to premise 3 is that it must be possible that everyone could steal and get away with it whenever it was in their self interest. i.e. saying that everyone ought to act according to a maxim presupposes that everyone can at the same time (ought implies can) act according to the maxim. If the latter is not possible, neither is the former.

This settles the third step of the test in the formula of universal law. If everyone cannot do the act at the same time, there is a contradiction in conception: it cannot even be conceived that the maxim would be a universal law.

It may be plausible to argue that under more widespread covert theft, the property regime would be less stable and inefficiencies would be created in the economy as people divert more resources to keeping their property secure. The deadweight loss may very well mean that the piece of property that people wanted to steal would never have been produced in the first place. i.e. if everybody is much poorer, some particular material good may not even be available for theft. A contradiction in conception may occur because people are trying to obtain an object by violating the rules of the very institution without which the object would not exist in the first place.

7. Even if there is no contradiction in conception, if everyone stole whenever they could get away with it because it was in their self interest to steal, there would be some situation where people would steal from you and injure your own interests.

Explanation: As in prisoner’s dilemma situations, the additional marginal theft benefits the thief in question, but a background of more widespread theft makes everyone, including the thief, worse off. There may be less to steal because of the inefficiencies mentioned earlier, and the increase in the number of thefts means the thief himself would have his own property stolen more often.

8. Therefore self interest seems to ground an objection to everybody acting on the theft maxim. Since the theft maxim is itself grounded in self interest, a contradiction becomes apparent. The morally committed person, in acting from the theft maxim, is implying that self interest has genuine reason giving force. But if self interest has legitimate reason giving force, he also has a legitimate objection to everyone else acting on the maxim. In that case he is acting on a principle which admits of exceptions, and hence not on a practical law. However, as a morally committed person, he has to act on practical laws.

9. Therefore a morally committed person would not act on the theft maxim. Thus we derive the test for a contradiction in the will.

10. Note that, in fact, no person would benefit if the maxim were to become a universal law. However, it is not the fact that nobody benefits which grounds our rejection, but that the very person who was considering the principle would not benefit. Suppose that for some person it would still be beneficial if everybody complied with the theft maxim. Then for that person self interest would indeed be grounds for having everyone act on the theft maxim. It is, however, strange that for some people a maxim would be a practical law and for others it would not. Moreover, it is not clear that if a principle were to become a universal law, everyone would necessarily be affected similarly.

The possibility that one person could will a maxim as a universal law while another could not will that same maxim due to differences in empirical power or relative positions in society seems awkward. It seems that such issues arise primarily in the context of self interest as a ground, and in particular discrimination as a ground for action. Another plausible case could be the case of the penniless thief. Since the thief himself holds no serious property beyond a few possessions, widespread theft is not likely to adversely affect him.

11. Note that we can replace self interest with any particular ground. Once we replace the ground, we will have to repeat the analysis to determine whether the ground of the maxim gives us reasons against everybody acting according to it.

12. Therefore the test for the contradiction in willing should be modified for maxims which do not ground themselves in self interest. Instead of considering whether one could desire that a particular maxim be a universal law (given that everyone desires their own happiness), one should instead consider whether the grounds which give us reason to act in accordance with the principle also give us reason to object to the principle being acted on by everyone.

Consider the classical utilitarian principle: “Perform an action if, among all alternatives, doing so maximises the total pleasure in the world, because it will maximise the pleasure.” If I objected on grounds of self interest that this maxim should not be followed by everyone, because I can conceive of situations where my own welfare is sacrificed for the sake of others’ pleasure, it would not be clear that the objection has any force. There is no particular reason why the objection has any force unless we can independently establish the legitimacy of self interest as a ground for actions and objections. But if the ground for the maxim also grounds the objection to everyone acting under the maxim, the ground has a presumptive legitimacy, because the maxim is being acted on by the morally committed person under the presumption that it is a practical law.

Note that under the classical utilitarian principle, an act, or a class of actions, can be legitimately objected to if each individual action maximises pleasure among its alternatives, but everyone acting on the maxim produces less pleasure than no one acting on it. One such case is actions that fall under a practice, or institution, which itself maximises pleasure, but whose rules forbid each individual from acting to maximise pleasure in any one instance. Political institutions which assign people various rights and forbid their violation may be one such case. It does not necessarily follow that institutions which maximise pleasure are the right institutions and practices to act under. But if maximising pleasure were a relevant consideration, the above argument would legitimate a move from act consequentialism to rule consequentialism.

13. Also note that the mere fact that a maxim passes the test for a contradiction in willing is not sufficient to guarantee that the principle is a practical law. Rather, failure to pass either the test for contradiction in conception or the test for contradiction in willing guarantees that the principle is wrong and ought not to be acted on.

14. At least one remaining objection seems to be that wanting to act on a maxim and wanting others to act on a maxim are two different things. If self interest can ground some actions (like good hygiene) but not others (like theft) then there is no particular reason why it cannot ground an action like theft, but not the objection to everyone stealing.

One reply to this objection is that if the moral particularist is substantively wrong (as should have been demonstrated elsewhere), then self interest cannot ground good hygiene but not theft. Therefore, it is not self interest per se which grounds good hygiene, because it cannot in itself do so. If self interest is prima facie legitimate, there must be another, separate reason that grounds a limitation on self interest, such that the complete ground for action is not self interest per se, but self interest limited by respect for rational nature (to pick a possible example), or maybe a general interest in everyone’s welfare, or something else.

An appeal must therefore be made to the nature of grounds and the way in which they provide reason to act. But this would be work for a future post.


* Moral Particularism is the view that there are no general moral principles, rather the rightness or wrongness of any act in any situation depends on the particulars of that situation.

Monday, November 23, 2009

Two Types of reasons: Prisoner's Dilemma and the categorical imperative revisited.

Warning: This post is very long, and goes quite a bit further than what I originally set out to do, namely to defend a narrow point about what the implications of a prisoner’s dilemma in a society of perfectly rational agents is. I tried to posit a few general implications, and the whole thing got out of hand. Anyway, the next paragraph basically starts where I originally wanted to start. Get some snacks and start reading, this is going to be long.

In my previous post, I basically said something so briefly that people did not seem to get what I was saying. Reproduced here:

20. One should note that things like prisoner's dilemma and tragedy of the commons seem to posit a conflict between individual rationality and group rationality. The payoff for any individual for defecting is always more than that for cooperating, however mass cooperation gives off a larger payoff. It seems as if the simplest way for cooperation to be individually rational as well as globally rational is if each person internalises the "externalities" namely, that the payoff that the "opponent" receives is also reflected back on the person. e.g. if by defecting I receive a payoff of +10 and my opponent -10, I must internalise everything by incorporating his payoff into mine. Then, a mutual cooperation where each of us receives +5 payoff would be more rational as by internalising, I've got a net payoff +10 instead of 0.

21. There are a variety of ways in which we could interpret this requirement to internalise the costs. One way, is by supposing that other people's payoffs really do matter. To be very clear, I am not assuming some notion of the good. Instead, I am concluding that some notion of the good is necessary in order to resolve paradoxes inherent in prisoner's dilemma situations.
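
Before moving on, the arithmetic in the quoted point 20 can be made concrete. A minimal sketch, using the same numbers as the passage above:

```python
def internalised(my_payoff, opponents_payoff):
    # Internalise the "externality": the opponent's payoff is reflected
    # back on me, so I add it to my own.
    return my_payoff + opponents_payoff

# I defect against a cooperator: +10 for me, -10 for the opponent.
print(internalised(10, -10))  # 0
# We both cooperate: +5 each.
print(internalised(5, 5))     # 10, so cooperation now beats defection
```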





I will try in this post to defend the following thesis:

There are rational paradoxes (e.g. hedonic paradoxes) whereby achieving an end A is hindered by actively pursuing A (or A only), and A can instead be achieved by pursuing (or also pursuing) end B or cultivating disposition C.

If an ideal evaluator (perfectly knowledgeable and rational) necessarily held disposition C or pursued end B (in order that A be achieved), then it follows that B is an end in itself worth pursuing, or that C is a fitting disposition to have. (There is little difference really: an end being valuable conceptually entails a disposition being fitting.)

1. We must consider precisely what it means when we say someone is perfectly rational.
1.1 A person is rational to the extent that he is sensitive (i.e. responsive) to reasons. Note that in itself this says nothing as to what the correct reasons are. A theory can specify the reasons as broadly or as narrowly as it wants. However, 1.1 is a conceptual truth, and doesn’t say anything controversial.

However, there are theoretical implications.

1.1.1 Let’s take the case of act utilitarianism, where pleasure itself is reason giving. Here, a fitting disposition is one which is sensitive to consequences in terms of how much pleasure they generate. i.e. a person with this fitting disposition generally tries to maximise pleasure with each act.
1.1.2 However, because such a disposition is very inefficient (it is time consuming to consider all the consequences all the time and figure out how much pleasure each one yields) and people are often inaccurate (prone to making lots of mistakes, as well as to bias in favour of the agent’s own pleasure), it is better to cultivate other dispositions or act according to rules of thumb. For example, an indirect consequentialist would advise you to cultivate the sort of dispositions Aristotelians recommend: justice, prudence, courage etc. However, the only reasons classical utilitarians acknowledge are means-ends reasoning and pleasure. Thus, these dispositions are fortunate in that they result in the agent acting in accordance with what the theory describes as reasons to act. But these fortunate dispositions are not fitting. They almost never involve any consideration of how much pleasure is being generated, or of how to achieve the end of maximising pleasure. An agent who has these dispositions is irrational, but morally fortunate, in that even though he is not sensitive to reasons, he tends to do the things that he really morally ought to do. This of course puts indirect act consequentialism in the very awkward position of having a rational agent (one with fitting dispositions) try to cultivate merely fortunate dispositions and thereby become an irrational agent. (H/T to Richard Chappell from Princeton.) This seems like a strong criticism of act consequentialism.

We have wandered quite far afield here. Nevertheless, it is certainly true that a perfectly rational agent is perfectly sensitive to reasons.

1.2 The reasons in question definitely involve means-ends reasoning. i.e. reasons relate to finding the best means to achieving a given end. However, there is also the question of whether there are any reasons to prefer some ends over others.

1.2.1 It would be uncontroversial to say that we may not want to pursue some of our ends because they would inhibit or interfere with the pursuit of other ends dearer to us.

1.2.2 While it is definitely controversial to assume that there are certainly reasons to adopt some particular definite ends, it is also questionable to flat-footedly assert that there can be no such reasons. That would require arguing that any such reason would in fact be contradictory etc. What we can do, however, is judge whether any particular attempt at a reason to adopt specific ends works on its own merits.

1.2.3 i.e. if you are forced to conclude that we have reason to adopt some particular end, this is not by itself reason to think that the whole line of argument has gone wrong.

2. This is roughly the form my argument is going to take:

2.1 A perfectly rational agent necessarily has some other regarding disposition X

2.2 Such a disposition is not fitting if we only consider the kind of self interested reasons that can be uncontroversially asserted about all agents, or all humans/mortals.

2.2.1 Things like: we all desire our own pleasure/happiness/the success of our ends etc

2.3 However, all of a rational agent’s dispositions are fitting. This follows from the argument set up in (1), specifically the conclusion obtained in 1.1.2.

2.4 It must be the case not only that a rational agent will have disposition X, but that X has an appropriate relation to reasons.

2.4.1 For example, if a rational agent were to have the otherwise inexplicable disposition to increase the number of sheep in Texas, then, barring some further explanation, the simplest account must be that Texan sheep are genuinely valuable.

2.5 The nature of such reasons, which go beyond means to particular ends, is that they are tied to features of the situation. i.e. becoming fully aware of the feature in every relevant way would motivate the properly reasoning agent to respond to it appropriately. We could formalise the structure of such reasons as follows: feature Y demands B-ing, where Y is a feature of the world and B is an action directed at Y. For example, utilitarians would say that pleasure demands maximising, Kant would say that rational natures demand respect, conservatives would say that traditions demand preserving, and theists would claim that sins demand avoiding.

3. In order to proceed, I need to actually demonstrate that there are some dispositions which cannot be explained under means-ends reasoning (MER). Some obvious paradoxes in means-ends reasoning include hedonic paradoxes and consequentialist paradoxes. These are cases where common wisdom tells us that actively pursuing our own happiness, or actively trying to maximise pleasure, often falls short of the goal. Achieving success in these ends often requires us to forget about them and do other things, e.g. immersing ourselves in charity work (for the former) or using commonsense morality (for the latter). However, it seems that these are merely products of our own incompetence. It is not clear that a perfectly rational and knowledgeable agent would in fact suffer from these problems. It is my contention, however, that the prisoner’s dilemma is precisely such a case, where perfectly reasoning self interested agents can face paradoxical circumstances.

3.1 The prisoner’s dilemma will take place in a society of ideal agents. i.e. one where all agents are perfectly knowledgeable and are able to reason perfectly.

3.2 It is not at first clear what this society would look like. The society may just be us, except that all of us know all the facts and are able to reason perfectly. This would be the most relevant instantiation of the hypothetical. However, we may want to make some adjustments. If we were all perfectly rational etc., it is almost certain that we would not be doing the things we are currently doing. We currently compensate for many of our weaknesses using various heuristics. When idealised, we would not have to do so. Maybe we wouldn’t need a state, or maybe we still would. The exact shape of the society is not known. But one thing we can be sure of is that there won’t be any strange utility monsters, or any strange creature demanding that we adopt certain dispositions lest it kill us all, or any of the standard list of monsters that consequentialists like to throw at each other. (See, I can just stipulate them out of existence!!!)

4. Here I will explain the salient aspects of a prisoner’s dilemma (PD) and explain where the paradox lies.

4.1 A prisoner’s dilemma is a game traditionally played by 2 people where you and your opponent have 2 options each and the payoff you receive depends on the exact combination of your and your opponent’s move

4.1.1 The traditional story goes like this: two thieves have robbed a bank and hidden the money, $10,000,000. Both thieves are subsequently brought in for questioning, where they are questioned separately. If both thieves keep silent (cooperate), the police let both of them go and they can later collect the money, $5,000,000 each. If only one of them defects by ratting out his friend, he gets away scot-free and gets the 10 million while his friend (ex-friend by now) is thrown into prison for 10 years. If both rat each other out, they each get 5 years in jail.

4.2 To formalise the whole system, there are 4 outcomes based on whether you or your friend cooperate or defect.

4.2.1 Both cooperate: payoff is A
4.2.2 You cooperate, your friend defects: payoff is B
4.2.3 You defect, your friend cooperates: payoff is C
4.2.4 Both defect: payoff is D

4.2.5 C > A > D > B is the weak (necessary) requirement for the situation to be considered a prisoner’s dilemma. For a single round game, this condition is sufficient. (For multiple rounds with the same partner, an additional condition, 2A > B + C, is required to prevent the winning strategy from being alternating cooperation and defection.)

4.2.6 Because C > A and D > B, you can always improve your payoff by defecting, whether or not your opponent defects.

4.2.7 However, A > D. There is a better payoff if both cooperate than if both defect.
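
For concreteness, here is a small check of conditions 4.2.5 to 4.2.7 in Python. The numeric payoffs are my own illustrative assumptions (the money and jail outcomes of the story would in any case need converting to a common utility scale):

```python
# Illustrative utilities (assumed), chosen so that C > A > D > B.
A = 5    # both cooperate
B = -10  # you cooperate, your friend defects
C = 10   # you defect, your friend cooperates
D = -5   # both defect

# 4.2.5: the weak (necessary) condition for a prisoner's dilemma,
# plus the extra condition for repeated rounds with the same partner.
assert C > A > D > B
assert 2 * A > B + C  # rules out alternating cooperation/defection

# 4.2.6: defection dominates, whatever the opponent does.
assert C > A  # opponent cooperates: defecting beats cooperating
assert D > B  # opponent defects: defecting still beats cooperating

# 4.2.7: yet mutual cooperation beats mutual defection.
assert A > D
print("These payoffs form a prisoner's dilemma.")
```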

4.2.8 To recap, given that an agent knows everything and is reasoning perfectly, there is no action he could have taken that would make him better off with respect to his goals than those actions he has taken.

4.2.9 Therefore, it would be rational to X iff X-ing maximises the success of the agent with respect to all his ends.

4.2.10 Note that we can spell out exactly what the payoffs represent. They may, for example, represent the agent’s own happiness, or his own and that of his closest relatives. However, if the latter, the opponent cannot be those same relatives that the player is concerned about. It would also be a rather strange interaction where the happiness of the opponent’s loved ones is directly affected when the interaction is with the opponent himself. Similarly, the payoff cannot simply be the general welfare of everyone either. If the payoffs were arranged so, it would be impossible for them to conform to the requirements of the prisoner’s dilemma: any gain in the general welfare (i.e. the agent’s payoff) would similarly increase the opponent’s payoff. Keep in mind that this is only a limitation when we are considering setting up a prisoner’s dilemma. Solving the prisoner’s dilemma always involves dissolving it, i.e. changing the game so that the payoffs are different, either by re-specifying what the payoffs reflect, or by introducing various incentive-changing practices like punishment etc.

4.2.11 For the moment, let’s specify that it is one’s own happiness. i.e. for a selfish agent, he can always increase his payoff by defecting.

4.2.12 It is rational for a selfish person to defect.

4.2.13 In a society of perfectly rational and knowledgeable selfish agents, all agents would defect.

4.2.14 But everybody could do better if they all cooperated.

4.2.14.1 Note that if cooperation were rational, then everybody would cooperate, since they are all perfectly rational.

4.2.14.2 Unanimity (everyone defects or everyone cooperates) always follows from the premise that everyone is rational. It seems that in so far as we can analyse a situation through the lens of PD in a society of ideal agents, an action is rational iff the maxim behind the action, if made a universal law, is consistent with the ends of the agent in question. i.e. it is rational iff the maxim can be willed to be universal law.

4.2.14.2.1 This may in fact extend to all games and not just PD.

4.2.15 Selfishness is self defeating. Caring only about your own happiness means that the actions taken thereby have not necessarily maximised your happiness.

4.2.16 To wit, given that everybody cares about their own happiness, everything else being equal, everybody does better by cooperating.

4.2.16.1 It is taken as a given that all agents do in fact desire their own happiness.

4.2.16.2 We can also take everything else to be equal. The only ends that can be better achieved by defecting are one’s own happiness and the opponent’s unhappiness, the latter rarely, if ever, being desired for its own sake.

4.2.16.3 It seems that one cannot consistently desire one’s own happiness and one’s opponent’s unhappiness at the same time.

4.2.17 Given that the society of ideal agents is one where they cannot do better, it follows that it is one where they all cooperate.

4.2.18 Since each agent is rational and each agent cooperates, it is rational to cooperate

4.2.19 But consideration of only one’s self interest fails to provide sufficient reason to cooperate since one can always increase one’s payoff by defecting.

4.2.19.1 Neither can an agent argue that since his actions are necessarily rational, everyone’s will mirror his, and therefore cooperation will in fact produce the better outcome. The reason is primary and the agent will act only if he has reason to do so. If there are no considerations other than self interest, there is no reason why the agent cannot in fact improve his payoff by defecting.

4.2.20 Points 4.2.18 and 4.2.19 distil the paradox to its essence.

4.3 Here are a number of reasons why PD is relevant

4.3.1 Tragedy of the commons is an example of PD with multiple players. Defection refers to over-use of a common resource such that the resource supply starts to fail (e.g. over-fishing destroys the ecosystem). Cooperation is simply refraining from overuse. The payoffs are simply the sum of all resources extracted over time.

4.3.2 A prisoner’s dilemma basically reflects any situation where one could harm another for personal gain. Note that prisoner’s dilemmas are symmetrical.
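
The multi-player case in 4.3.1 can be made concrete with a toy model. All numbers below are assumptions for illustration only:

```python
# Tragedy of the commons as a many-player PD: each fisher either
# restrains (cooperates) or overfishes (defects). Every overfisher
# degrades the common stock for all, but grabs a private bonus.
def payoff(overfishers, base=10, bonus=5, damage=2):
    common = base - damage * overfishers  # what the commons yields each person
    return common + bonus, common         # (overfisher's payoff, restrainer's payoff)

for k in range(6):                        # 5 fishers in total
    over, restrain = payoff(k)
    print(f"{k} overfishers: overfisher gets {over}, restrainer gets {restrain}")
# Switching to overfishing always gains the individual bonus - damage = +3,
# yet universal restraint (10 each) beats universal overfishing (5 each).
```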

5 There are a variety of ways of resolving the paradox detailed in 4.2. All of these ways change the game so that it is no longer a PD. Any solution must also make it the case that the disposition to cooperate is a fitting one.

5.1 One way would be to introduce practices like punishment which would incentivise cooperation or disincentivise defection. If defection could be punished, then the payoff from defecting would be reduced.

5.1.1 However, punishment is not always possible. When 2 people who do not know each other meet briefly at a market to trade, there is no opportunity to retaliate etc. However, they would both be better off dealing honestly with each other than both cheating each other with shoddy goods, imitations, counterfeit money etc.
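
Here is a sketch of how a punishment practice dissolves the dilemma, continuing the illustrative utilities assumed earlier (again, my own numbers):

```python
A, B, C, D = 5, -10, 10, -5  # the illustrative PD utilities from before

def expected(payoff, defected, fine):
    # A defector expects to be fined; a cooperator is untouched.
    return payoff - fine if defected else payoff

fine = 7  # any fine greater than 5 reverses both comparisons below
# Opponent cooperates: defecting no longer beats cooperating.
assert expected(C, True, fine) < expected(A, False, fine)  # 3 < 5
# Opponent defects: cooperating is now the better reply too.
assert expected(D, True, fine) < expected(B, False, fine)  # -12 < -10
print("With the fine, defection no longer dominates: no longer a PD.")
```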

5.2 A disposition to cooperate is fitting iff there are reasons to cooperate or reasons to not defect. Absent any punishment practices, there are a variety of ways we could cash out the reasons for cooperation or non-defection. What follows are some possibilities.

5.2.1 Cooperation (in the PD sense) is an end in itself. Merely being aware of cooperation and all it entails is sufficient to make it the case that it would be irrational to not adopt it as an end.

5.2.2 We could do the same with non-defection, i.e. non-defection is an end in itself. Characteristic of 5.2.1 and 5.2.2 is that they do not regard the payoffs to others (their happiness) from cooperation and defection. This could be along the lines of recognising that the opponent is a person. It could simply be the case that from the simple fact that the other guy is a person, we are not to use him as a mere means to an end.

5.2.2.1 Hey, it’s possible! Besides, we are just speculating here.

5.2.3 Other people’s happiness is intrinsically valuable. Understanding what happiness is means that we would want to maximise it for everybody and not just ourselves. This principle is sensitive to the payoffs. Discount rates may also apply. It is not obviously unreasonable.

5.2.4. Replace happiness in 5.2.3 with welfare, pleasure etc mutatis mutandis.

5.3 Just to remind everyone of point 1.2.2. Don’t get your panties in a bunch just because I managed to introduce some end which we rationally ought to adopt.

5.4 Note that these reasons are speculative. What we could do is try to look at all the dispositions that ideally rational agents have and try to come up with the simplest set of principles that would consistently motivate these dispositions.

5.4.1 See what I’ve done here. There are basically practical reasons and theoretical reasons. That everybody will do what is rational, and that cooperation is rational since everyone does better by cooperating than by defecting, is a theoretical reason. But theoretical reasons are not motivating; only practical reasons are. A practical reason is one like 5.2.3, which says that happiness is intrinsically valuable (valuable simply being what we have reason to desire).


To look back at what I’ve done in this post so far: I have established that cooperation in a prisoner’s dilemma is what rational agents would do, that there must therefore be practical reasons in favour of cooperating for which self interest alone is insufficient, and that there are two types of reasons: practical and theoretical. Practical reasons are those which will motivate rational people to act; theoretical reasons, even if they concern human action, are not motivating. From the theoretical consideration that agents do aim at their own happiness, we derive that since perfectly rational and knowledgeable agents must be maximally successful, and since all of them are similarly situated (to cooperate or defect) and rational and knowledgeable, they must all cooperate. Therefore there must be some practical reason/principle which would motivate cooperation, of which I have provided a list while admitting that they are indeed speculative. This concludes the bulk of what I set out to do. What follows is a quick assay into whether I can extend these conclusions about people in a symmetrical situation to agents in asymmetric situations, i.e. where the opponent couldn’t possibly do anything to the player.

6 Note that whichever of the reasons 5.2.1 – 5.2.4 are true, they automatically apply to not only the symmetrical prisoner’s dilemma case, but also to asymmetric cases where the other guy cannot defect. Their happiness is still valuable, or they are persons too etc etc.

6.1 Note, however, that from a purely theoretical standpoint there seems to be no paradox in the asymmetric case with regard to self interest, as there is in the PD case. We cannot, however, simply leave the issue at saying that it is obvious that the other kinds of reasons do in fact apply. We are at least obligated to investigate whether we could justify limiting such practical reasons as could motivate cooperating to the symmetrical situation only.

6.2 The only difference between the two is that in the asymmetric case, the opponent has no choice but to cooperate (not that they are automata, but that attempted defection wouldn’t harm you in any way, nor would their cooperation do anything for you). Think of this as the case where everybody else has the coordination and strength of 2 year olds. Here defection is always dominant: it always gives a better self interested payoff than cooperation. Even if everybody defects, the payoff is better than if everybody cooperates.

6.3 This principle cannot be based on the fact that the opponent cannot retaliate, as even in the one-off PD case, retaliation is not possible.

6.4 Is their ability to harm you a sufficient consideration?

6.4.1 One is tempted to argue that it isn’t; that there is no logical connection between one being stronger and it being acceptable to transgress, especially once we rule out fear of retaliation. However, that would be too stringent a standard. Any reason giving feature would in fact involve a substantive claim, which would not follow merely logically from the feature. Other people’s happiness is not logically connected to any notion of maximisation etc. However, claiming that happiness is valuable and therefore ought to be maximised is a substantive claim. Similarly, a claim about strength conferring prerogative would be a similar substantive claim, as is a claim about desert or moral responsibility.

6.5 One could, however, generalise the lesson in point 4.2.14.2. Any practical principle/reason upon which we act with respect to our opponent also applies when some third party acts with respect to us, as long as said third party stands in the same relation to us as we stand to our opponent.

6.6 For people who are situated with respect to each other as equals (or approximately so), this article has already demonstrated how this would work.

6.7 For the case described in 6.2, we could do a regress, saying that some other person is situated in such a superior position to you etc etc. However, the regress has to stop somewhere, and it can only stop with some entity that is so potent and powerful that there are no competitors anywhere near. This entity has an effective monopoly on the use of force. We can call this either the Leviathan, or the state, or maybe just a pro wrestler. (Yes, I’m borrowing shamelessly from Hobbes)

6.8 However, we can also note this: people do best when the leviathan does not transgress against them. Therefore any principle which allows or motivates people to transgress against those weaker than them would similarly motivate the leviathan to transgress against them. More generally, any practical principle which motivates an agent with respect to his inferiors would also motivate the leviathan with respect to that agent.

6.9 Since people do best when the leviathan does not transgress against them, they cannot act on any principle that would license the leviathan’s transgression; hence they would not themselves transgress against their inferiors. Let the practical principle that motivates this restraint be X.

6.10 For the same reason X, the rational leviathan would not transgress against the people.

7 I think that now, we can generalise the point made in 4.2.14.2.

7.1 If a reason genuinely counts in favour of an action in a particular situation, then it would count similarly for all people who are similarly situated. If it doesn’t, there has to be some principle that explains why.

7.2 Therefore any maxim which acts as a reason would function, in a society of ideal agents, as a universal law of nature.
7.3 Because people necessarily desire their own happiness, we can measure success by whether or not people can do any better as far as their happiness is concerned.

7.4 Ideal agents are maximally successful. Being completely rational and knowledgeable, it is in fact impossible for them to do any better than they are doing.

7.5 Therefore the maxims which they act on are those which, when conceived as universal laws of nature, will maximise their happiness.

7.6 This is no different from the categorical imperative, which tells us to act on the maxim which we can will to be a universal law of nature.

7.6.1 On the understanding that you cannot will your own unhappiness.

I think that that is it for now. Of course, this says nothing of what rational people would do in the current world, or what the rational leviathan would do in the current world. But if having reached this point, all is agreeable, then, we can proceed confidently.


8 Much, however, can be said regarding the maxims that can be willed as universal laws.

8.1 A maxim, if explicitly stated, will have the general form: perform action A under conditions M.

8.1.1 A would be a general imperative e.g. kill a person.

8.1.2 M would be a qualifying condition e.g. if the person has white hair and it would increase the number of sheep in Texas.

8.1.3 For now, let’s not quibble about whether the maxim is right or not, although the reasons are limited to the extent that they must not specify a false link between the action and the rationale. Let us presume that killing this particular white haired man would increase the number of sheep in Texas. We should note, however, that there is a limitation on what these maxims can say. The maxim cannot be obviously false, in the sense that killing this white haired man would not in fact increase the sheep in Texas, or that the person is not white haired. i.e. the conditions M refers to must actually apply. Anyone who did something to a black haired person based on a maxim whose stated conditions did not apply could fairly be accused of gross irrationality.
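
The general form in 8.1 invites an explicit representation. A minimal sketch follows; the names and fields are hypothetical, merely mirroring 8.1.1 to 8.1.3:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Maxim:
    action: str                         # 8.1.1: a general imperative
    conditions: Callable[[Dict], bool]  # 8.1.2: the qualifying conditions M
    ground: str                         # the rationale linking act to reason

def may_act_on(maxim: Maxim, situation: Dict) -> bool:
    # 8.1.3: the conditions M refers to must actually apply in the situation.
    return maxim.conditions(situation)

sheep_maxim = Maxim(
    action="kill the person",
    conditions=lambda s: s["hair"] == "white" and s["killing_raises_texan_sheep"],
    ground="it would increase the number of sheep in Texas",
)
# The maxim simply does not license action on a black haired person:
print(may_act_on(sheep_maxim, {"hair": "black", "killing_raises_texan_sheep": True}))  # False
```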

8.2 Even given the limitations mentioned in 8.1.3, there are an infinite number of semantic variations a maxim could take. i.e. there is nothing in the formal structure of a rational maxim which distinguishes it from an irrational one. They all have the same formal structure.

8.3 There is little or nothing in the semantic content itself (to us) that, apart from determining whether the maxim applied or not, would be indicative of the rationality of the maxim.

8.3.1 Note that being willed as a universal law is not part of the semantic content of a maxim. Conceptual analysis of the content of the maxim yields no information as to whether or not it can be willed as a universal law. Trying to see if it can be willed as a universal law is in fact a synthetic proposition.

8.3.2 The point is that there are few non question-begging ways in which we could reject a maxim based on the semantic content alone.

8.3.3 Just because we cannot properly evaluate the semantic content of a maxim doesn’t mean that fully rational and knowledgeable agents cannot. Full knowledge of all the facts would apprise the agents of the semantic differences which are important. Indeed, it seems that it is because we have special epistemic access to our own happiness that we find that we necessarily desire it.

8.4 Note that the formula of universal law is not a practical reason in and of itself. It is a mere conceptual tool which we as disinterested observers could use to decide if a particular maxim would be truly motivating to an ideal agent in a society of ideal agents.

8.5 Therefore, if the ideal agent would in fact act on a particular maxim, it must be because of the semantic content of the maxim. i.e. if an ideal agent necessarily would act to increase the sheep in Texas, then there must be some feature of Texan sheep such that agents with full knowledge of it would want to increase their number.

8.6 Texan sheep in the real world would have the same features as Texan sheep in a society of ideal agents. They would have the same reason giving force in both cases.

8.7 There could of course be other features of the case which may involve other maxims, and might change whether an action was right or not, but by looking at how all the features play out in the ideal setting, we could determine how those features which remain invariant play out in the real world.

8.7.1 That is, even though there could be other principles involved, the principles and maxims which have reason giving force in the idealised world have reason giving force in the actual world.

8.7.2 Consider the PD case again. In the ideal world, it is a fact that the ideal agent cooperates. It is also a fact that self interest alone would motivate the agent to defect. Therefore, there must be some principle A, based on some feature of the situation that over-rides self interest in PD and all other relevantly similar situations. In order to justify defecting in the real world, there must be some additional principle B, which is neither self interested, nor parasitic on such notions. It is doubtful that there is any principle B which could in fact do this.

8.8 What talking about a society of ideal agents allows us to do is talk about at least some of the features of the world which have reason giving force. If we find that an ideal agent in a society of other ideal agents necessarily cares about a lot of things other than just herself (let’s call these things X), then she does not simply stop caring about those things just because the situation changes such that the people around her are not reasoning properly, or are ignorant in various ways. i.e. she may find that there are other considerations as well, but she cannot cease to care about X.

8.9 At this point, I want to distinguish between reasons and the good. It may in fact be the case that happiness is simply the good. But whether or not concepts like desert and need are genuinely reason giving, they are not themselves the good. Instead of being additional goods to promote, these concepts weigh in favour of or against the provision, withholding, or alienation of the goods with respect to certain people, in specific instances and to specific extents. i.e. they transform a utility function in a very localised manner. The value of giving a murderer pleasure becomes negative because he is not deserving of the pleasure, not because there is an additional disvalue which outweighs the hedonic value. The task of a future post would be to determine how these concepts, which at the moment are at best intuitions, can be properly justified within the given framework. Scanlonian considerations might be informative.






Sunday, October 11, 2009

What, I'm wrong? Assumptions Assumptions Assumptions!!!

Ok, basically, people have been telling me that I've got way too many questionable assumptions. Also, people are not sure what I'm driving at, so I will try to do better.

1. What am I trying to show/achieve?
Given that it is conceptually the case that we ought to do what ideal agents will, the question arises of what exactly ideal agents will do. I’m trying to demonstrate that by simply considering the possibility of a society of ideal agents, we can rule out certain types of actions and ends. These considerations may or may not amount to applying the categorical imperative. I’m willing to let the cards fall where they will. Any duties that I extract from said considerations may very well have narrow applications, but that can be dealt with later. Also, my entire project should eventually yield some version of Ross’s prima facie duties, or some Kantian wide duties etc.

2. In order to be able to reject certain actions or ends, I must conclude either that such actions are not conceivable, or that, even if conceivable, a society where those ends are pursued is not ideal. In order to do this, I stipulated that my ideal agents had to be maximally happy/successful in the pursuit of their ends. (Happiness and successful pursuit of ends are taken to be interchangeable.) Then, if these ends were somehow frustrated, a society would not be ideal. This could yield a duty of non-maleficence, since maleficent actions are simply those that frustrate others’ ends. (The actual argument is more subtle, and its actual success may in fact be questionable, but that is irrelevant for the moment.)

3. In order to yield the condition that everybody be maximally happy, I posited that ideal agents were omnipotent. This was a questionable move, as the status of 2 conflicting omnipotent beings is quite indeterminate. Moreover, it would have been impossible to yield any duty of beneficence. This alone shouldn’t mean that I reject the approach, but the impossibility of obtaining such a duty arises merely from the strange fact that, left to themselves, ideal agents would be able to achieve all their own ends and would not need any beneficence from others. This does not answer the question of whether we owe a duty of beneficence to people who may actually need it.

4. So now I propose to drop the omnipotence requirement and merely consider ideal evaluators, i.e. perfectly knowledgeable and rational beings. But lacking the omnipotence feature, how do I propose to reject societies which are not maximally happy? One way presents itself. Given perfect means-ends rationality (a bare minimum) and perfect knowledge, ideal evaluators still face a maximum ends-fulfillment requirement. Only now, instead of supposing that all ends are fulfilled, it is as many ends as possible. i.e. I can reject a world if choices other than the ones taken would have fulfilled more ends, to a greater extent. The task now is to plausibly derive duties of beneficence and non-maleficence (at least towards other ideal evaluators).

5. Means ends rationality is taken as a given. i.e. it is rational to take action that will best satisfy your ends.

6. But this is also an exercise in the analysis of which ends are appropriate for pursuit, and which ends are not. This means that I must have some way of rejecting certain ends. One way I can go about this is by simply saying that there are possible reasons to reject certain ends and that ideal evaluators, in virtue of being perfectly rational and knowledgeable would be aware of these reasons if they existed.

7. However, I may be begging the question if I simply stipulate that ends which are impossible to achieve are irrational to hold, and I may also be digging myself into a hole by doing that. It may be the case that few or no ends are truly completely realisable.

8. From 4, I will restate the society of ideal evaluators requirement.
An ideal evaluator is one who is perfectly rational and perfectly knowledgeable. He is able to achieve his ends maximally (as distinct from completely). There are no actions that he can take which would achieve more of his ends (given standard human potency). A society of ideal evaluators is one where everyone maximally achieves their ends.

9. There are basically 3 types of interactions between people: conflicting interactions, where both parties’ ends cannot both be satisfied; cooperative interactions, where both parties’ ends succeed or fail together; and independent interactions, where each party’s ends succeed or fail on their own.

10. Whatever else we learn from economics and game theory, we know that people's ends can best be satisfied if cooperative interactions are maximised and conflicting ones are minimised. i.e. The society of ideal evaluators is one with maximal cooperation.

11. Here follows an analysis of conflict. Some ends are conflict prone: those whose satisfaction would preclude the satisfaction of ends that all agents (or at least all humans) necessarily have. For example, everybody necessarily has their own happiness as an end. Actions which necessarily impinge on the target’s happiness we call maleficent. Since it is in fact impossible for any clear thinking being to accede to maleficence, and since conflict is minimal or absent in a society of ideal evaluators, maleficence is minimal or absent in a society of ideal evaluators.

12. Here follows an analysis of cooperation. Occupying similar logical space at the other end of the spectrum is beneficence. There are some ends, which are necessarily cooperation prone. The end of other people's happiness always coincides with the other person's end which is his own happiness. Promoting another's happiness is beneficence, and it will also be found extensively in a society of ideal agents.

13. Cooperative endeavours require trust, and will therefore be found in a society of ideal evaluators

14. Given 11-13, and given that we ought to do what ideal agents would do, we have duties of non-maleficence, beneficence and fidelity.

15. Granted, these duties only seem to apply to other ideal evaluators, but that is the subject of another post.

16. One might ask why I ought to do what ideal agents would do, especially since they seem so different from us, with their perfect rationality and perfect knowledge. However, we take it as evident that what we ought to do is what we have most reason to do. We usually don’t do what we have most reason to do because we lack knowledge about the world and the particular situation, and because we fail to reason properly. While it is the case that we are ignorant, irrational creatures, it does not follow that we ought to be this way. We really ought to do what we have most reason to, and by definition, that is simply what an ideal evaluator does.

17. We should also note that ideal evaluators are very consistent in their reasons. (This, I believe is quite uncontroversial, and would be endorsed even by particularists)

18. An interesting question is: who makes up the members of the hypothetical society of ideal evaluators? Is it idealised versions of ourselves and our countrymen? Any generic set of people? A set of people whose range of desires occupy full logical space? Can we simply exclude utility monsters?

19. One should be very careful about the structure of my theory. I have not, as of yet, made any claim as to what the good is. All I have claimed is that perfectly rational and knowledgeable agents will maximally achieve their ends, and I have described the conditions in a society under which ends can be maximally realised, i.e. maximum cooperation.

20. One should note that things like prisoner's dilemma and tragedy of the commons seem to posit a conflict between individual rationality and group rationality. The payoff for any individual for defecting is always more than that for cooperating, however mass cooperation gives off a larger payoff. It seems as if the simplest way for cooperation to be individually rational as well as globally rational is if each person internalises the "externalities" namely, that the payoff that the "opponent" receives is also reflected back on the person. e.g. if by defecting I receive a payoff of +10 and my opponent -10, I must internalise everything by incorporating his payoff into mine. Then, a mutual cooperation where each of us receives +5 payoff would be more rational as by internalising, I've got a net payoff +10 instead of 0.

21. There are a variety of ways in which we could interpret this requirement to internalise the costs. One way, is by supposing that other people's payoffs really do matter. To be very clear, I am not assuming some notion of the good. Instead, I am concluding that some notion of the good is necessary in order to resolve paradoxes inherent in prisoner's dilemma situations.

22. An interesting question is whether the requirement to align individual rationality with global rationality can be restated as a formal requirement to act only on the maxim that you can will to be a universal law.

23. Are there any other substantive principles which would produce the same effect?

24. Just in case anybody is really confused, let me explicitly state what is in my premises.

a) We ought to be rational i.e. we ought to do what we have most reason to do.
b) By rational I mean means ends rationality
c) All agents necessarily desire their own happiness
d) The reasons people act on should be consistent with one another.
e) An ideal evaluator is one who is perfectly rational and knowledgeable
f) It follows that we ought to do what ideal evaluators would do
g) A society of ideal evaluators is conceivable, i.e. logically possible

The rest of it is just what follows logically from my premises.

Monday, September 21, 2009

Reasons and Invariability

The biggest problem with my argument is that I have assumed that the reasons an ideal agent will have when confronted with a society of non-ideal agents are the same as those when confronted with a society of ideal agents. i.e. all I have really demonstrated is that with regard to other ideal agents, our actions should be non-maleficent.

Part of what it means to be rational is a sensitivity to situations. Different situations provide different reasons in virtue of what the situations are. It is also not impossible that other non-ideal agents would present a different set of reasons than ideal agents.

One way this would occur is if non-ideal agents behaved in obviously irrational ways, i.e. in ways which could not be truly justified by the reasons. In such cases, it seems that we have not yet drawn any principle for acting, except to act in ways consistent with the end of promoting your own happiness.

However, people could also act in ways which, while not necessarily motivated by reason, accord with it. Given that we are unable to look into people’s motivations in these situations, we cannot distinguish them from agents perfectly motivated by reasons (i.e. ideal agents). As such, the exact same reasons that apply in a society of ideal agents also apply in such situations.

Given that there are many possible ends and reasons that are prima facie consonant with happiness, the mere fact that other people’s actions would not be consonant with our ends if we did them is not evidence that their actions are in fact irrational.

However, someone who displays maleficence towards those whom we can regard as having behaved in not unjustified ways has violated the precepts of reason. This is sufficient evidence that those who are so maleficent are not ideal agents. As such, the duties which we have towards ideal agents are not necessarily owed to them. They may be, but that will have to be further demonstrated.

However, for now, the preceding paragraphs have been sufficient to establish the non-aggression principle (NAP), with one little caveat: maleficence towards presumptive ideal agents is not the only indicator of irrationality. There is possibly some set of actions that are non-maleficent and yet irrational. Selling oneself into slavery is one, as is any violation of the various duties that an ideal agent has towards himself, as well as those that he has towards presumptive ideal agents. There might also be some set of actions that can never, in any possible circumstance, be consonant with one’s own happiness.

A future post will try to explicate what types of reasons govern our behaviour towards demonstrably non-ideal agents.


[UPDATE]
As a poster has asked, here is a guide for those who are lost about what I am talking about now.

You can start with Ideal Agent Approach for a quick look at my original thesis. You can also look at the linked essay.

Then you can move on to Concessions, where I try to restate my case while dropping a lot of unnecessary metaphysical baggage. This kind of stands alone, as it repeats a lot of what was said in the Ideal Agents post as well as adding a whole lot more. To give credit where it is due, I've drawn quite a bit from Richard Chappell, a Princeton postdoc.

Then read Mutuality for some elaborations, clarifications and moves I've made in response to criticisms from various friends.

The post you are reading now is in response to a criticism from a friend, Wee Kien. Though he didn't state the problem in precisely this way, this is the closest approximation of his question that poses a genuine problem for my system.

As it stands now, my system is vastly different from what I started off with, but I've still managed to arrive at at least some rational duties that bear a family resemblance to our moral intuitions.

Wednesday, September 16, 2009

Mutuality, parasitism, independent standing and happiness

In my previous post, I made the assertion that


All agents desire the satisfaction of at least their higher order desires*. (This we can call happiness, if for no other reason than lack of a better term. Some argument may be needed to satisfactorily conclude that this is indeed what we commonly mean when we talk about happiness. However successful that argument, desire satisfaction is what I mean when I talk about happiness from now on.)
I would like to reword this. Generally speaking, we are happy when our desires are satisfied. But not all desires: I would be less happy if my desire for a chocolate-heavy, exercise-free lifestyle were satisfied than if it were frustrated, because I've got a second-order desire to suppress that first-order desire. So I should amend it to: I would be happy if I managed to satisfy my first-order desires as they would be if they were correctly ordered by my second-order desires. Note that this is yet again different from welfare, which is what is desirable for my own sake. It would be false to say that we all happen to desire what is actually desirable for our own sake; it is not necessarily the case that I desire what is good for me, e.g. smokers desire what is bad for them.
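To make the amendment concrete, here is a minimal sketch in Python. Everything in it (the desire sets, the helper names, the scoring) is my own hypothetical illustration of the definition, not anything argued for above.

```python
# A toy model of the amended definition: happiness is the satisfaction of
# first-order desires *as they would be if correctly ordered by second-order
# desires*. All names and data here are hypothetical illustrations.

def corrected_desires(second_order):
    """The first-order desires the agent, on reflection, endorses having."""
    return {desire for desire, endorsed in second_order.items() if endorsed}

def happiness(satisfied, second_order):
    """Fraction of the corrected desire set that is actually satisfied."""
    target = corrected_desires(second_order)
    return len(target & satisfied) / len(target) if target else 1.0

# First-order desire: chocolate-heavy, exercise-free living.
# Second-order desires: suppress the chocolate desire, endorse exercising.
second_order = {"chocolate-heavy lifestyle": False, "exercise": True}

print(happiness({"chocolate-heavy lifestyle"}, second_order))  # 0.0
print(happiness({"exercise"}, second_order))                   # 1.0
```

On this toy model, satisfying the unendorsed chocolate desire contributes nothing to happiness, which is just the point of the amendment.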

It is happiness in this sense of which it is an analytic truth that all agents necessarily have their own happiness as an end.

Another issue that came up is the following:


However, it is not necessarily the case that all these candidates are actually reasons to act, i.e. those considerations are not necessarily sufficient to determine what right action is. It is even possible that one particular set of reasons may be the only game on the table, though it is not certain that this is the case. But even if it were the only available set, being a candidate for REAL reasons to act would require that it be possible that everyone adopt those reasons.


This is false. The possibility that one set of reasons could be the only possible set only establishes the possibility that it may have to be universalisable. That approach does not yield a necessary universalisation requirement.

Let us look at what we have. We know that a society of ideal agents is possible. However, we cannot simply sneak in conditions that force the conclusion that all these ideal agents act from the same reasons. That is the very conclusion we are trying to show: that there are indeed common reasons that apply to all agents. Therefore we must accept that there will be heterogeneity among these ideal agents.

Before continuing, it should be noted that unpacking the concept of ideal agency illuminates a concept of efficacy. I previously noted that an action is fitting with respect to an agent insofar as it is expressive of his/her agency. But what do we mean by agent? An agent is an autonomous actor. With regard to the issue of the slave, we addressed fittingness with respect to the agent qua autonomous being. An additional criterion of fittingness would have to be with respect to the agent qua actor. A person is an agent in so far as he is an actor, i.e. in so far as he is potent. An impotent actor is a contradiction. Our ideal agents therefore have to be maximally potent, i.e. they must be maximally successful in pursuing their ends and responding appropriately to reason.

We know that all agents hold their own happiness as an end. This doesn't, in itself, entail much about the content of the reasons and ends that are consistent with their own happiness. We could have "nasty" ends as well as "nice" or "neutral" ones. The terms are used here as evaluatively neutral; no judgements are made as to whether these are good or bad (at least not yet: calling something nasty is just a way to label reasons and ends that are destructive, enslaving etc.). Considered from the agent's point of view, there doesn't seem to be anything wrong with me killing and raping if this is what is consonant with my own pleasure. The fact that it reduces someone else's happiness does not give the nasty agent a reason to desist.

However, if we consider the society as a whole, we see a problem. Fulfilling the nasty person's happiness requires diminishing the happiness of the victim (we can even specify that the victim does not share the same type of reasons and ends as Nasty). Fulfilling the happiness of the putative victim requires frustrating the happiness and other ends of Nasty. i.e. whatever the actual status of these sets of reasons, both cannot be real reasons for acting (either one, the other or neither is). If both are present, not all the agents in the society can be described as ideal: some are non-ideal, as they are incapable of maximally securing their happiness.

There is also the possibility that the "victim's" happiness is served by being preyed upon by Nasty. Let's call this one Sucker. Sucker, however, is an impossibility. We have already decided that it is contrary to what it means to be autonomous for an agent to desire what it does not desire. (This is not some strange screed against BDSM. BDSM is not real slavery; it is just a game. Both parties desire what is happening to them, and safe-words etc. ensure that they never cross the line. Remember kids: keep it Safe, Sane and Consensual.)

Now, given that Nasty's reasons and its victim's cannot both be real reasons, because each interferes with the other's happiness, we should try them out individually. We could separate all the Nasty types from the non-nasty types. If among the Nasty types there are still some that are victims, we can iteratively separate them out until there are no conflicts of interest or there is only one set left. Consider that nastiness is such that it aims to have victims. If there are no victims left, because all appropriate targets are gone, then the Nasties' happiness is frustrated and the agents cannot be said to be ideal. If there are no restrictions on who gets to be a victim, they target each other, and their happiness is frustrated in doing so. This is not an ideal society either.

On the face of it, this rules out all maleficent goals, i.e. we have a duty of non-maleficence.

Note: the process whereby I iteratively separated out the Nasties and checked to see whether the society they formed was ideal just is checking whether their maxims are universalisable, i.e. the categorical imperative. A toy sketch of the procedure follows.
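This is a minimal sketch, assuming a toy model in which the whole relationship between agents is reduced to a preys_on relation; all names and the example data are my own hypothetical illustrations, not part of the argument itself.

```python
# A toy sketch of the iterative separation check. `preys_on(a, b)` is a
# hypothetical stand-in for "a's happiness requires frustrating b's".

def has_conflict(group, preys_on):
    """True if some member's ends require frustrating another member's."""
    return any(preys_on(a, b) for a in group for b in group if a != b)

def separate(society, preys_on):
    """Iteratively split predators from their victims until every group is
    either conflict-free or cannot be split further (everyone preys on
    everyone else). Returns the final partition."""
    pending, final = [frozenset(society)], []
    while pending:
        group = pending.pop()
        predators = frozenset(
            a for a in group if any(preys_on(a, b) for b in group if a != b))
        if not predators or predators == group:
            final.append(group)  # no further split is possible
        else:
            pending += [predators, group - predators]
    return final

def is_ideal(group, preys_on):
    """A group can model an ideal society only if it is free of internal
    conflict, i.e. no member's happiness is necessarily frustrated."""
    return not has_conflict(group, preys_on)

# Toy run: the maleficent agents end up grouped with each other, where they
# frustrate one another's ends, so their maxim fails the test.
agents = {"nasty1", "nasty2", "nice1", "nice2"}
preys_on = lambda a, b: a.startswith("nasty")
for group in separate(agents, preys_on):
    print(sorted(group), "ideal?", is_ideal(group, preys_on))
```

On this toy model, a set of ends passes only if a society composed entirely of agents holding them remains conflict-free, which is just the universalisability check described above.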

Wednesday, September 9, 2009

Concessions and recapitulations

Referencing my earlier post on ideal agents, and my first posts on philosophy, here are a few concessions.

1. My earlier criticism of free-standing value stands. It is a very queer object, and I ought not to rely on such in order to build my ethical theories.
2. Consequentialism is not totally nonsense. Given that X is valuable, it is irrational (everything else being equal) to choose a situation B which has less of X over a situation A which has more.

That said, of course everything else is not equal, and there may be other types of value/reasons. So the most general statement we can make is that there are reasons to act (P1). It is almost tautological to say that these reasons are such that if a person properly motivated by reason were aware of them, everything else being equal, they would be motivated to act appropriately. But this merely highlights the fact that not everybody is motivated by reason. Sometimes (in fact, quite often), people are motivated by biases, half-formed feelings etc. This can lead them to perform actions which they in fact have sufficient reason to perform, or actions which they do not. Hence, most of us are imperfect agents.

Can these reasons conflict? Consider weight loss. I have reasons to stick to my diet and go for morning jogs (I have reasons to be healthy, to keep to fitness standards demanded by the military, I don't want to go for remedial training etc.). I also have reasons to eat lots of ice cream and not exercise (the ice cream is really nice, and running is exhausting and unpleasant). There is no reason why, in general, these do not all qualify as reasons. Yet the answer is not indeterminate. We presume, generally, that there is a right answer to at least some of these questions: perhaps I, all things considered, have better reason to exercise more and eat more healthily, because it makes me healthier and allows me to avoid remedial training by the army, and the displeasure from not exercising probably outweighs the displeasure from exercising, such that I should prefer exercising. However, not all dilemmas revolve around the same type of reason (pleasure, in both cases above). In fact, many dilemmas involve different sorts of reasons. Should I return money lent to me by a friend, or should I donate the money to charity, where it will make more people happy? This dilemma involves promoting happiness on the one hand and promise keeping on the other. Any way in which we resolve the dilemma will involve placing limitations on the scope of these reasons, i.e. particular reasons may only apply under certain conditions, or certain reasons are over-riding, etc.

And then, of course, certain issues are indeed indeterminate. Whether one buys chocolate or vanilla ice cream certainly depends on which flavour one prefers, but it is inconceivable that there be any overarching reason to prefer one flavour over the other. Another way in which things could be indeterminate is if there is no good reason why certain reasons should be given prior consideration over others. And of course things are also indeterminate when the same types of reasons happen to weigh equally on both sides of the issue.
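As a rough illustration of weighing and of ties coming out indeterminate, here is a minimal sketch. Treating the force of a reason as a number is a pure stipulation of mine (nothing above says reasons have numeric weights), and all the names and figures are hypothetical.

```python
# A toy model of resolving conflicting reasons by weighing. Numeric
# weights are a deliberate oversimplification, nothing more.

from collections import defaultdict

def resolve(reasons):
    """Sum the force behind each option; a tie is left indeterminate."""
    totals = defaultdict(float)
    for description, favoured_action, weight in reasons:
        totals[favoured_action] += weight
    best = max(totals.values())
    winners = [action for action, w in totals.items() if w == best]
    return winners[0] if len(winners) == 1 else "indeterminate"

reasons = [
    ("avoid remedial training", "exercise", 3.0),
    ("stay healthy",            "exercise", 2.0),
    ("ice cream is nice",       "indulge",  2.5),
    ("running is unpleasant",   "indulge",  1.0),
]
print(resolve(reasons))  # "exercise": 5.0 beats 3.5 on these stipulated weights

# Equal weights on both sides yield indeterminacy, as in the flavour case.
print(resolve([("A", "chocolate", 1.0), ("B", "vanilla", 1.0)]))
```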

How would we actually evaluate reasons?

The first consideration is whether an action is appropriate to the agent, or to the reason (or any other possible combination). For example, there is no reason that would motivate a rational agent to contract himself into slavery. Slavery involves the negation of the agential capacity: the act of consigning oneself to slavery is invalidated the instant it is performed. This suggests one criterion of fittingness: actions and reasons are fitting to the extent that they increase agential capacity/activity (P2). A schema of reasons on which everything was indeterminate would not be very fitting, as agents would then lack decisive reason to do anything. This would presumably invalidate what it means to be a rational agent who has reasons to act (refer to P1). Therefore, all things being equal, we should prefer schemata which are tidy and properly defined over those which are poorly defined. Let's call this the coherency principle (P3).

This leads us to the second consideration. The reasons, and the principles that regulate them, should be as coherent as possible, i.e. maximally coherent. If there were strange entities that provided these reasons, coherency would naturally be a requirement; but having eschewed saying anything definite about queer objects, the previous paragraphs, I think, give some reason why we should prefer coherent and neatly defined sets of reasons. It is also worth noting that there seem to be some things that all agents will desire. All agents desire the satisfaction of at least their higher order desires*. (This we can call happiness, if for no other reason than lack of a better term. Some argument may be needed to satisfactorily conclude that this is indeed what we commonly mean when we talk about happiness. However successful that argument, desire satisfaction is what I mean when I talk about happiness from now on.) Therefore, internal coherency requires not just some arbitrary set, but one that includes all the goals/ends that agents a priori have. (P3 restated)

Now we arrive at the third consideration. I have previously rehearsed the more generalised and raw form of the argument at Chappell’s blog, Philosophy etc. I will reproduce it below

My argument for such would be along the lines of the following.

A1. Reasons for acting are such that an idealised agent would be aware of them and act on them.

A2. From A1, any and all idealised agents could possibly act on such reasons.

A3. A society of idealised agents is conceivable.

A4. From A3, such a society is logically possible (even and especially (from A2) when all agents are acting on those reasons).

C: From A4, genuine reasons to act should be such that all agents in this idealised society could respond to them, i.e. such reasons should be universalisable.
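One hypothetical way to regiment this (the notation is my own gloss, not from the original exchange): let G(r) mean that r is a genuine reason, I(a) that a is an idealised agent, A(a,r) that a acts on r, and let the diamond stand for logical possibility.

```latex
% A sketch regimentation of A1-C; the symbols are my own gloss.
\begin{align*}
\text{A1: }    & G(r) \rightarrow \forall a\,\bigl(I(a) \rightarrow A(a,r)\bigr)\\
\text{A2: }    & G(r) \rightarrow \Diamond\,\forall a\,\bigl(I(a) \rightarrow A(a,r)\bigr)\\
\text{A3/A4: } & \Diamond\,\exists S\,\forall a\,\bigl(a \in S \rightarrow I(a)\bigr)\\
\text{C: }     & G(r) \rightarrow \Diamond\,\exists S\,\forall a\,\bigl(a \in S \rightarrow A(a,r)\bigr)
\end{align*}
```

Put this way, one gap becomes visible: A2 and A3 each supply a possibility separately, and inferring ◇(p ∧ q) from ◇p and ◇q is not in general a valid modal move. That is one way of framing the worry about A4 discussed below.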


The above argument is unsatisfactory. A4 especially sneaks in the premise that there are very determinate ways in which these reasons play out. But that is cheating if I want to say that there are categorical reasons (moral reasons) that demand certain things of certain people, and that these moral reasons are rationally required. To do a more complete justification, I will have to justify move A3. Now, it is the case that A3 is true, but I haven't adequately explained why I made the move, or whether it is too stringent.

One reason to consider a society of agents is that reasons for acting often involve reasons about how we treat other agents. How we treat non-agents is also important, and that question may suffer from being ignored amidst all these agent-centred considerations, but that does not change the fact that agent-centred considerations are an important aspect of morality. Consider also that Jesus was crucified, Krishna was shot (accidentally) and Rama was exiled by his step-mother, Kaikeyi. Bad things often happen to good guys. While rationality at least in part involves using reason to survive adverse conditions, it is not any guarantee of survival. However, a minimum requirement of rationality is that rational agents should be able to co-exist with other rational agents. We, with our multitude of irrationalities, often manage to co-exist; it shouldn't be a barrier to fully and ideally rational agents. This would be true even if the reasons for acting were thoroughly heterogeneous.

Now, to justify A4: considerations P2 and P3 narrowed down the list of possible reasons for acting. However, it is not necessarily the case that all these candidates are actually reasons to act, i.e. those considerations are not necessarily sufficient to determine what right action is. It is even possible that one particular set of reasons may be the only game on the table, though it is not certain that this is the case. But even if it were the only available set, being a candidate for REAL reasons to act would require that it be possible that everyone adopt those reasons. (P4) (This post is getting long, and talking about parasitic, mutualistic and independently universalisable schemata would make it even longer.)

It should be noted that P3 and P4 together yield the categorical imperative: act only on the maxim that you can will to be universal law. P3 requires that we be able to will it (not just conceive it) and P4 requires that we be able to conceive of it becoming universal law.

Just some more on how this principle works. Here, I quote from my response further down in Chappell's post.

Let's try egoism for a start. The egoist's maxim is "do what is in your own self-interest" (even if it involves sacrificing the interests of other agents).

Under universalising conditions, that would contradict the egoist's ends (which are to promote his own self-interest), as there would be many other agents who would sacrifice his interests for theirs. i.e. each agent in the ideal polity would have difficulty satisfying his own ends.

Therefore, the egoist has to modify his maxim to "do what is in your own self-interest, but only so far as it allows others to pursue their own interests similarly". i.e. in addition to a duty to himself, the egoist has also added a duty of non-maleficence to his list. But once he has done that, he has ceased to be an egoist.

The CI, I think, is not so strong as to yield things like "never lie" or "never kill" (as Kant envisioned), but may yield something like the weaker Rossian prima facie duties (which include things like promoting people's well-being, duties of fidelity, gratitude etc.).
Deontological intuitions (as well as our consequentialist ones) can be explained adequately by reference to the categorical imperative. I believe that you see our deontological inclinations as springing from the decision procedures we commonly use to promote the good (but I'm not very comfortable with that)…

…If you notice, the CI doesn't actually provide the reason why an egoist should embrace non-maleficence, or why we should abandon value monism, only that we should (the CI seems to give a criterion of the fittingness of reasons, not the reasons themselves). But we can actually bootstrap these reasons in, in order to comply with the CI.
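Before moving on, here is a toy sketch of the egoism test quoted above. The payoff numbers and function names are entirely my own stipulations, a simplified model rather than anything from the exchange.

```python
# A toy universalisation test for the egoist's maxim. The payoff model is a
# deliberate oversimplification: gains and imposed losses are stipulated.

def plain_egoism():
    """Pursue self-interest, sacrificing others' interests where useful."""
    return {"gain": 2.0, "loss_imposed_per_victim": 1.0}

def constrained_egoism():
    """Pursue self-interest only so far as it leaves others free to do the
    same, i.e. with a duty of non-maleficence added."""
    return {"gain": 1.0, "loss_imposed_per_victim": 0.0}

def net_when_universal(maxim, n_agents):
    """Everyone acts on the maxim; return a single agent's net outcome:
    his own gain minus the losses the other agents impose on him."""
    act = maxim()
    return act["gain"] - act["loss_imposed_per_victim"] * (n_agents - 1)

for maxim in (plain_egoism, constrained_egoism):
    print(maxim.__name__, net_when_universal(maxim, n_agents=5))
# plain_egoism: 2 - 4 = -2, so universalising the maxim defeats the
# egoist's own end; constrained_egoism: 1, which survives the test.
```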


The interesting point is the last part of the quoted response, which I am still exploring but may discard at a later date. It seems to me that the duties derived from the categorical imperative do not so much give reasons for moral facts as give the shape of what moral facts look like.

Also note that the categorical imperative, without further assumptions, gives us duties to one's own happiness, limited by duties of at least non-maleficence towards others. Furthermore, notice that I arrived at this conclusion without having to posit the definite existence of categorical reasons, only their logical possibility. This, I think, demonstrates that morality, i.e. categorical, universal, authoritative reasons, is rationally required.


*This may or may not seem tautological, and while according to Kant holy wills lack sensuous aspects and so cannot desire, I take desire to mean to seek as an end. Agents who have no ends to seek are not really agents, after all. This leads to an unrelated discussion about God: namely, that an omniscient and omnipotent being which lacks for nothing is not very agent-like at all. All that leaves (as per Spinoza) is that God is Being, or simply Is, and bears only a tenuous resemblance to agents.

Sunday, June 21, 2009

I heart Singapore...

... And so, apparently, does Jet Li; well, enough to actually get Singaporean citizenship, anyway. Apparently many of my fellow Singaporeans do not like the fact that another rich and famous guy can so easily get citizenship, and just as easily give it up when it becomes inconvenient.

This doesn't actually bother me. Actually, I want such privileges expanded to every non-criminal in the world! Really!! What really bugs me about immigration is that it is not open enough. Immigration in all countries should be as free as possible without sacrificing national security. So let's start at home.

Of course, at heart, this is about the legitimacy of the social contract and the state. One of the pitfalls of social contract theory is that, well, people did not really sign social contracts. The fact that citizens receive benefits, and that signing the social contract would be the better option, does not in itself make the social contract legitimate.

However, ideally, if people don't like a society, they can leave and find another one that suits them better. But people cannot just leave, and people who wish to live in a society cannot just enter. There are often high entry and exit barriers. Therefore, people are often stuck in contracts to which they are not signatories. This is a lot like being stuck in a traffic jam when a homeless man comes and cleans your windshield, then demands that you pay him. Yes, you have received the service, but since you did not ask for it in the first place, it is not clear that you are obligated to pay for it. However, if you go to a car wash, you choose to go there freely. Hence, even if you do not sign any explicit contract, or even physically ask for the service, your mere presence indicates that you desire to be there and will pay for the service. In that case, of course, you have a moral responsibility to pay for the wash. The analogy extends to societies and the benefits of citizenship too.

Lowering entry and exit barriers to zero therefore means that you are in a country because you have agreed, as if by contract, to be there. Of course, in practice this is impossible. However, it would be really good if entry and exit barriers were lowered as far as possible. To the extent that we could leave if we wanted to, we agree to be subject to the laws of the land by staying. Therefore, the power that government holds over us becomes more legitimate. Roughly, the relationship works like this: the lower the entry and exit barriers, the more legitimate the government's power over those who stay.

The main article is here. Here is another article on the issue. Of course, entry and exit barriers are not the whole thing. Secularism is also an issue, but not central to this post.

Don't mistake me. I love my country deeply, and would stay here in Singapore over all other places on earth. That is why I want to change Singapore: just because I love my country doesn't mean I think it is perfect. I love my country and I want to share it with everyone who loves it too. However, I don't think I should have to share it with people who don't want to stay here. That is why we lower entry and exit barriers: so that the people who are here really want to be here.